Stitch together Nanopore tiled amplicon data without polishing a reference

Overview


Stitch together Nanopore tiled amplicon data using a reference-guided approach

Tiled amplicon data, like those produced from primers designed with primal scheme, are typically assembled by aligning the reads to a reference and polishing the reference into a sequence that represents the reads. This works very well for obtaining a genome with SNPs and small indels representative of the reads. However, in cases where the reads cannot be mapped well to the reference (e.g. genomes containing hypervariable regions between primers), or where large structural variants are expected, this method may fail, as polishing tools expect the reference to originate from the reads.

Lilo uses a reference only to assign reads to the amplicon they originated from and to order and orient the polished amplicons; no reference sequence is incorporated into the final assembly. Once reads are assigned to an amplicon, a read of roughly median length for that amplicon with high average base quality is selected as a reference and polished three times with medaka, using up to 300x coverage. The polished amplicons have their primers removed with porechop (fork: https://github.com/sclamons/Porechop-1) and are then assembled with scaffold_builder.
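For a concrete sense of the assignment step, the pipeline's assign rule (visible verbatim in the issue logs further down this page) intersects read alignments with the amplicon intervals and extracts the matching reads with seqtk. A lightly paraphrased sketch, with placeholder sample and amplicon names:

    # Reads whose alignments cover at least 90% of an amplicon interval
    # (-F 0.9) are assigned to that amplicon; column 4 of the bed output
    # holds the read name, which seqtk uses to subset the trimmed fastq.
    bedtools intersect -F 0.9 -wa -wb -bed \
        -abam sample/alignments/reads_to_ref.bam -b amplicons.bed \
      | grep amplicon01 - \
      | awk '{print $4}' - \
      | seqtk subseq porechop/sample.fastq.gz - > sample/split/amplicon01.fastq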

Lilo has been tested on SARS-CoV-2 with ARTIC V3 primers. It has also been tested on 7 kb and 4 kb amplicons with ~100-1000 bp overlaps for ASFV, PRRSV-1, and PRRSV-2; schemes for these will be made available in the near future.

Requirements not covered by conda

Install Conda :)
Install this fork of porechop and make sure it is in your path: https://github.com/sclamons/Porechop-1
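One possible way to install the fork, assuming it installs like upstream Porechop (the exact steps may differ; check the fork's README):

    # Clone the fork and install it into the active environment, then
    # confirm the porechop found on PATH is the freshly installed one.
    git clone https://github.com/sclamons/Porechop-1
    cd Porechop-1
    python3 setup.py install
    which porechop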

Installation

git clone https://github.com/amandawarr/Lilo  
cd Lilo  
conda env create --file LILO.yaml  
conda env create --file scaffold_builder.yaml

Usage

Lilo assumes your reads are in a folder called raw/ and have the suffix .fastq.gz. Multiple samples can be processed at the same time.
Lilo requires a config file detailing the locations of a reference, a primer scheme (in the form of a primal scheme style bed file), and a primers.csv file (described below).

conda activate LILO
snakemake -k -s /path/to/LILO --configfile /path/to/config.file --cores N

It is recommended to run with -k so that one sample with insufficient coverage does not stop the other jobs from completing.
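A minimal worked invocation, assuming two samples laid out as described above (names are placeholders):

    # Layout before running:
    #   raw/sampleA.fastq.gz
    #   raw/sampleB.fastq.gz
    conda activate LILO
    snakemake -k -s /path/to/LILO --configfile /path/to/config.file --cores 8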

Input specifications

  • config.file: an example config file has been provided in the repository. Sketches of each input follow this list.
  • Primer scheme: as output by primal scheme, with alt primers removed. A bed file of primer alignment locations. Columns: reference name, start, end, primer name, pool (must end with 1 or 2).
  • primers.csv: comma delimited, includes alt primers, with a header line. Columns: amplicon_name, F_primer_name, F_primer_sequence, R_primer_name, R_primer_sequence. If any of the primers contain many degenerate bases, it is recommended to expand these; the script expand.py will expand the described csv into a longer csv with the IUPAC codes expanded.
  • reference.fasta: the same reference used to make the scheme file.
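For orientation, here are hedged sketches of the three inputs. The config key names below are assumptions for illustration only; the example config file shipped with Lilo is authoritative.

    # config.file (key names illustrative, not authoritative)
    reference: /path/to/reference.fasta
    scheme: /path/to/scheme.bed
    primers: /path/to/primers.csv

A primer scheme bed line in the column order given above (an ARTIC-style row; tab separated, pool name ending in 1):

    MN908947.3	30	54	nCoV-2019_1_LEFT	nCoV-2019_1

And a primers.csv with the header given above (the sequences shown are the ARTIC V3 pair for amplicon 1, included purely as an example):

    amplicon_name,F_primer_name,F_primer_sequence,R_primer_name,R_primer_sequence
    amplicon01,nCoV-2019_1_LEFT,ACCAACCAACTTTCGATCTCTTGT,nCoV-2019_1_RIGHT,CATCTTTAAGATGTTGACGTGCCTC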

Output

Lilo uses the names from raw/ to name the output files. For a file named "sample.fastq.gz", the final assembly will be named "sample_Scaffold.fasta", and files produced during the pipeline will be in a folder called "sample". The output will contain amplicons that had at least 40x full-length coverage. Missing amplicons will be represented by Ns, and any ambiguity at overlaps will be indicated with IUPAC codes.
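For example, with raw/sampleA.fastq.gz as input, a successful run should leave behind:

    sampleA/                 # intermediate files produced during the pipeline
    sampleA_Scaffold.fasta   # the final assembly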

Note

  • Use of the wrong fork of porechop will cause the pipeline to fail.
  • Lilo is a work in progress and has been tested on a limited number of references, amplicon sizes, and overlap sizes; I recommend checking the results carefully for each new scheme.
  • The pipeline currently assumes that any structural variants are contained between the primers of an amplicon and do not change the length of the amplicon by more than 5%. If alt amplicons produce a product of a different length from the original amplicon, they may not be allocated to their amplicon. Making it work better with alt amplicons is on my to-do list.
  • It should not be used with reads produced with rapid kits, as the pipeline assumes the reads are the length of the amplicons.
  • Do let me know if it destroys any cities or steals everyone's left shoe.
Comments
  • Error in rule reporechop:

    Hello, while running the sample dataset I have encountered the following error messages. I have made sure that porechop is installed correctly and in the path.

    Any help is greatly appreciated.

    Error in rule reporechop:
        jobid: 2
        output: FAT94769_pass_barcode02_66883b35_0/polished_trimmed.fa
        shell:
            porechop --adapter_threshold 72 --end_threshold 70 --end_size 30 --extra_end_trim 5 --min_trim_size 3 -f ASFV.primers.csv -i FAT94769_pass_barcode02_66883b35_0/polished_clipped_amplicons.fa --threads 8 --no_split -o FAT94769_pass_barcode02_66883b35_0/polished_trimmed.fa
        (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)

    opened by tboonf 1
  • Error while running LILO

    Dear, I get the following error while running LILO. Any idea what could be the problem?

    /bin/bash: /home/minion/anaconda3/envs/LILO/etc/profile.d/conda.sh: No such file or directory
    
    CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
    To initialize your shell, run
    
        $ conda init <SHELL_NAME>
    
    Currently supported shells are:
      - bash
      - fish
      - tcsh
      - xonsh
      - zsh
      - powershell
    
    See 'conda init --help' for more information and options.
    
    IMPORTANT: You may need to close and restart your shell after running 'conda init'.
    
    
    /bin/bash: line 2: scaffold_builder.py: command not found
    sed: can't read reads_24h_Scaffold.fasta: No such file or directory
    [Wed Aug 10 11:12:28 2022]
    Error in rule scaffold:
        jobid: 1
        output: reads_24h_Scaffold.fasta
        shell:
            source $CONDA_PREFIX/etc/profile.d/conda.sh
                    conda activate scaffold_builder
                    scaffold_builder.py -i 75 -t 3693 -g 80000 -r /home/minion/lilo-test/ASFV.reference.fasta -q reads_24h/polished_trimmed.fa -p reads_24h
                    sed -i '1 s/^.*$/>reads_24h_Lilo_scaffold/' reads_24h_Scaffold.fasta
            (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
    
    Job failed, going on with independent jobs.
    Exiting because a job execution failed. Look above for error message
    Complete log: /home/minion/lilo-test/.snakemake/log/2022-08-10T111227.425486.snakemake.log
    

    Kind regards, Elisabeth

    opened by el-mat 1
  • LILO with SLURM

    Hi there,

    I'm trying to run LILO on a SLURM HPC and I'm not sure what the errors are related to. Do you have an idea? It seems really environment dependent, but maybe you stumbled across something similar.

    Call:

    snakemake -k -s [...]/tools/Lilo/LILO --configfile $CONFIG --profile [...]/tools/config-snippets/snake-cookies/slurm
    

    Log:

    [...]
    MissingOutputException in line 84 of [...]/tools/Lilo/LILO:
    Job Missing files after 30 seconds:
    FAR95540_pass_unclassified_7f618209_73/split/amplicon51.fastq
    This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
    Job id: 133673 completed successfully, but some output files are missing. 133673
    Trying to restart job 133673.
    [...]
    Error in rule assign:
        jobid: 133673
        output: FAR95540_pass_unclassified_7f618209_73/split/amplicon51.fastq
        shell:
            bedtools intersect -F 0.9 -wa -wb -bed -abam FAR95540_pass_unclassified_7f618209_73/alignments/reads_to_ref.bam -b amplicons.bed  | grep amplicon51 - | awk '{print $4}' - | seqtk subseq porechop/FAR95540_pass_unclassified_7f618209_73.fastq.gz - > FAR95540_pass_unclassified_7f618209_73/split/amplicon51.fastq
            (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
        cluster_jobid: 210115
    
    Error executing rule assign on cluster (jobid: 133673, external: 210115, jobscript: [...]/.snakemake/tmp.cssfeg5e/snakejob.assign.133673.sh). For error details see the cluster log and the log files of the involved rule(s).
    [...]
    Traceback (most recent call last):
      File "/scratch/lataretum/miniconda3/envs/LILO/lib/python3.8/site-packages/snakemake/__init__.py", line 701, in snakemake
        success = workflow.execute(
      File "/scratch/lataretum/miniconda3/envs/LILO/lib/python3.8/site-packages/snakemake/workflow.py", line 1077, in execute
        success = self.scheduler.schedule()
      File "/scratch/lataretum/miniconda3/envs/LILO/lib/python3.8/site-packages/snakemake/scheduler.py", line 441, in schedule
        self._error_jobs()
      File "/scratch/lataretum/miniconda3/envs/LILO/lib/python3.8/site-packages/snakemake/scheduler.py", line 557, in _error_jobs
        self._handle_error(job)
      File "/scratch/lataretum/miniconda3/envs/LILO/lib/python3.8/site-packages/snakemake/scheduler.py", line 615, in _handle_error
        self.running.remove(job)
    KeyError: assign
    

    With --latency-wait 90 set, it again breaks after some time at an assign rule, with a KeyError: read_select from the snakemake scheduler.

    Let me know which input/config files would be useful for solving this. :)

    opened by MarieLataretu 7
Releases

v0.2

Owner

Amanda Warr