Overview

QuaTradis (Quadram TraDis)

A set of tools to analyse the output from TraDIS analyses

Introduction

The QuaTradis pipeline provides software utilities for the processing, mapping, and analysis of transposon insertion sequencing data. The pipeline was designed with the data from the TraDIS sequencing protocol in mind, but should work with a variety of transposon insertion sequencing protocols as long as they produce data in the expected format.

For more information on the TraDIS method, see http://bioinformatics.oxfordjournals.org/content/32/7/1109 and http://genome.cshlp.org/content/19/12/2308.

Installation

QuaTradis has the following dependencies:

Required dependencies

  • bwa
  • smalt
  • samtools
  • tabix

There are a number of ways to install QuaTradis and details are provided below. If you encounter an issue when installing QuaTradis please contact your local system administrator.

Bioconda

Install conda and enable the bioconda and conda-forge channels, with conda-forge listed first so that it takes priority (see the channel-order note in the comments below).

conda install -c conda-forge -c bioconda quatradis=xxx
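
Alternatively, the channels can be registered once so that conda-forge takes priority and strict channel priority is enforced. This is a minimal sketch following the standard Bioconda setup and the suggestion in the comments below; check the current Bioconda documentation for the exact recommendation:

conda config --add channels defaults
conda config --add channels bioconda
conda config --add channels conda-forge
conda config --set channel_priority strict
conda install quatradis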

Docker

QuaTradis can be run in a Docker container. First install Docker, then pull the QuaTradis image from Docker Hub:

docker pull quadraminstitute/quatradis

To run QuaTradis, use a command like the following (substituting your own directories); here your files are assumed to be stored in /home/ubuntu/data:

docker run --rm -it -v /home/ubuntu/data:/data quadraminstitute/quatradis bacteria_tradis -h
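
As a further sketch, the full pipeline can be run in the same container with the fastq list and reference mounted under /data. The file names below are placeholders and the bacteria_tradis options should be checked against its --help output; -w /data sets the container's working directory so output files land in the mounted volume (this also sidesteps the working-directory permission issue mentioned in the comments below):

docker run --rm -it -v /home/ubuntu/data:/data -w /data quadraminstitute/quatradis bacteria_tradis -f /data/fastq_list.txt -r /data/reference.fasta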

Running the tests

The tests can be run with pytest from the tests directory. Alternatively, you can use the make target from the top-level directory:

make test
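
Equivalently, assuming pytest is installed in your environment, the tests can be run directly from the tests directory:

cd tests
pytest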

Usage

QuaTradis provides functionality to:

  • detect TraDIS tags in a BAM file
  • add the tags to the reads
  • filter reads in a FastQ file containing a user defined tag
  • remove tags
  • map to a reference genome
  • create an insertion site plot file

The functions are available as standalone scripts or as Python modules.

Scripts

Executable scripts to carry out most of the listed functions are available in the bin directory:

  • check_tradis_tags - Prints 1 if tags are present in the alignment file, 0 if not.
  • add_tradis_tags - Generates a BAM file with tags added to read strings.
  • filter_tradis_tags - Creates a fastq file containing reads that match the supplied tag.
  • remove_tradis_tags - Creates a fastq file containing reads with the supplied tag removed from the sequences.
  • tradis_plot - Creates a gzipped insertion site plot.
  • bacteria_tradis - Runs the complete analysis: starting from fastq files, it produces a mapped BAM file and a plot file for each file in the given list, plus a statistical summary across all files. Note that the -f option expects a text file containing a list of fastq files, one per line. This script can be run with or without supplying tags.

Note that default parameters are for comparative experiments, and will need to be modified for gene essentiality studies.

A help menu for each script can be accessed by running the script with "--help".
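
For example, a typical bacteria_tradis invocation might look like the sketch below. The file names and tag sequence are placeholders, and the -f, -r and -t options follow the Bio-Tradis conventions, so check them against the --help output:

bacteria_tradis -f fastq_list.txt -r reference.fasta -t TAAGAGTCAG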

Analysis Scripts

Three scripts are provided to perform basic analysis of TraDIS results in bin:

  • tradis_gene_insert_sites - Takes a genome annotation in EMBL format along with plot files produced by bacteria_tradis and generates tab-delimited files containing gene-wise annotations of insert sites and read counts.
  • tradis_essentiality.R - Takes a single tab-delimited file from tradis_gene_insert_sites and produces calls of gene essentiality, along with a number of diagnostic plots.
  • tradis_comparison.R - Takes tab files to compare two growth conditions using edgeR. This analysis requires experimental replicates.
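
A rough sketch of how these three scripts might be chained together for a comparative experiment is shown below. File names are placeholders and the exact arguments (in particular for tradis_comparison.R) should be checked against each script's help output:

tradis_gene_insert_sites reference_annotation.embl condition_rep1.insert_site_plot.gz
tradis_essentiality.R condition_rep1.tradis_gene_insert_sites.csv
tradis_comparison.R --controls controls.txt --conditions conditions.txt -o comparison_output.csv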

License

QuaTradis is free software, licensed under GPLv3.

Feedback/Issues

Please report any issues to the issues page or email [email protected]

Citation

If you use this software please cite:

"The TraDIS toolkit: sequencing and analysis for dense transposon mutant libraries", Barquist L, Mayho M, Cummins C, Cain AK, Boinett CJ, Page AJ, Langridge G, Quail MA, Keane JA, Parkhill J. Bioinformatics. 2016 Apr 1;32(7):1109-11. doi: 10.1093/bioinformatics/btw022. Epub 2016 Jan 21.

Comments
  • fix channel order in readme

    Channel order is important for bioconda to work correctly -- conda-forge has to come first (which means higher priority when specified on the command line with -c). That might be why some users are getting pysam issues that require a workaround.

    FYI might also want to consider suggesting --strict-channel-priority, see the new bioconda docs.

    opened by daler 1
  • Fixes for albatradis compatibility

    Fixing name of analysis output files for consumption by albatradis.

    Fixing a mistake when creating gene names during insertion site analysis: underscores in the name should not have been ignored.

    opened by maplesond 0
  • requirements.txt should not list bgzip

    A followup to the discussion on the Bioconda PR: The requirements.txt file that you are using should not list bgzip. Names in requirements.txt refer to packages on PyPI, so if you list bgzip, you actually pull in a Python package named bgzip (that is meant to be used via import bgzip from within Python). It will not give you the bgzip binary that your project actually seems to want.

    You cannot list non-Python dependencies in requirements.txt so you can only list that dependency in the Conda recipe.

    opened by marcelm 0
  • Fixing problems running the job in docker.

    The issue was that the mapping stage outputs files to the current working directory which may not have user permissions. The fix is to make sure mapping logs are output to the same place as all other output files.

    opened by maplesond 0
  • Nextflow pipeline to replace bacteria_tradis, and implementation of tradis_gene_insert_sites

    Adding nextflow to handle processing of multiple fastq files (similar to bacteria_tradis).

    Added the tradis_gene_insert_sites script and associated functions under isp_analyse, although there are still some very small diffs between this and the old biotradis script in terms of ins_index and ins_count, which I still need to investigate.

    Renamed and refactored a few things.

    Added a few scripts to get closer to feature parity with old BioTradis.

    Tidied up README.

    opened by maplesond 0
  • problem with running tradis pipeline multiple

    Hello,

    When I try to run the following command using quatradis:

    tradis pipeline multiple -v -n 12 -o quatradis_out fastqs_filtered_sizecut_all.txt genome.fa

    this error appears:

    Traceback (most recent call last):
      File "/home/jang/anaconda3/envs/mamba/envs/albatradis/bin/tradis", line 293, in <module>
        main()
      File "/home/jang/anaconda3/envs/mamba/envs/albatradis/bin/tradis", line 285, in main
        args.func(args)
      File "/home/jang/anaconda3/envs/mamba/envs/albatradis/bin/tradis", line 202, in run_multiple_pipeline
        tradis.run_multi_tradis(args.fastqs, args.reference,
      File "/home/jang/anaconda3/envs/mamba/envs/albatradis/lib/python3.9/site-packages/quatradis/tradis.py", line 142, in run_multi_tradis
        pipeline = find_pipeline_file()
      File "/home/jang/anaconda3/envs/mamba/envs/albatradis/lib/python3.9/site-packages/quatradis/tradis.py", line 101, in find_pipeline_file
        if os.path.exists(exe_path):
      File "/home/jang/anaconda3/envs/mamba/envs/albatradis/lib/python3.9/genericpath.py", line 19, in exists
        os.stat(path)
    TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType

    What am I doing wrong?

    The same input files work smoothly in bacteria_tradis.

    Bests, Jan

    opened by gaworj 1
Owner
Quadram Institute Bioscience