Overview

QuaTradis (Quadram TraDis)

A set of tools to analyse the output from TraDIS analyses

Contents

  • Introduction
  • Installation
  • Usage
  • License
  • Feedback/Issues
  • Citation

Introduction

The QuaTradis pipeline provides software utilities for the processing, mapping, and analysis of transposon insertion sequencing data. The pipeline was designed with the data from the TraDIS sequencing protocol in mind, but should work with a variety of transposon insertion sequencing protocols as long as they produce data in the expected format.

For more information on the TraDIS method, see http://bioinformatics.oxfordjournals.org/content/32/7/1109 and http://genome.cshlp.org/content/19/12/2308.

Installation

QuaTradis has the following dependencies:

Required dependencies

  • bwa
  • smalt
  • samtools
  • tabix

There are a number of ways to install QuaTradis and details are provided below. If you encounter an issue when installing QuaTradis, please contact your local system administrator.

Bioconda

Install conda and enable the bioconda channel. Note that channel order matters: conda-forge must take priority over bioconda for the installation to resolve correctly.
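
For example, a typical bioconda channel setup looks like the following (this follows the standard bioconda configuration and is offered as a sketch; adjust it to your own conda environment):

conda config --add channels defaults
conda config --add channels bioconda
conda config --add channels conda-forge
conda config --set channel_priority strict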

conda install -c bioconda quatradis=xxx

Docker

QuaTradis can be run in a Docker container. First install Docker, then pull the QuaTradis image from Docker Hub:

docker pull quadraminstitute/quatradis

To run QuaTradis, use a command like the following (substituting in your own directories); here your files are assumed to be stored in /home/ubuntu/data:

docker run --rm -it -v /home/ubuntu/data:/data quadraminstitute/quatradis bacteria_tradis -h
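
To run the full pipeline inside the container, point the tool at paths under the mounted /data directory. For example (the file names below are placeholders, and the exact bacteria_tradis options should be confirmed with --help):

docker run --rm -it -v /home/ubuntu/data:/data quadraminstitute/quatradis bacteria_tradis -f /data/fastq_list.txt -r /data/reference.fa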

Running the tests

The tests can be run with pytest from the tests directory. Alternatively, you can use the make target from the top-level directory:

make test
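
Equivalently, to invoke pytest directly (assuming pytest is installed in your environment):

cd tests
pytest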

Usage

QuaTradis provides functionality to:

  • detect TraDIS tags in a BAM file
  • add the tags to the reads
  • filter reads in a FastQ file containing a user-defined tag
  • remove tags
  • map to a reference genome
  • create an insertion site plot file

The functions are available as standalone scripts or as Python modules.

Scripts

Executable scripts to carry out most of the listed functions are available in the bin directory:

  • check_tradis_tags - Prints 1 if tags are present in alignment file, prints 0 if not.
  • add_tradis_tags - Generates a BAM file with tags added to read strings.
  • filter_tradis_tags - Create a fastq file containing reads that match the supplied tag
  • remove_tradis_tags - Creates a fastq file containing reads with the supplied tag removed from the sequences
  • tradis_plot - Creates a gzipped insertion site plot
  • bacteria_tradis - Runs the complete analysis: starting from fastq files, it produces mapped BAM files and plot files for each file in the given file list, plus a statistical summary of all files. Note that the -f option expects a text file containing a list of fastq files, one per line. The script can be run with or without supplying tags; see the example after this list.
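
As an illustration, a typical invocation might look like the following; fastq_list.txt, reference.fa and the tag sequence are placeholders, and the exact option names should be confirmed with bacteria_tradis --help:

bacteria_tradis -v -f fastq_list.txt -r reference.fa -t XXXXXXXXXX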

Note that default parameters are for comparative experiments, and will need to be modified for gene essentiality studies.

A help menu for each script can be accessed by running the script with "--help".

Analysis Scripts

Three scripts are provided to perform basic analysis of TraDIS results in bin:

  • tradis_gene_insert_sites - Takes a genome annotation in EMBL format along with plot files produced by bacteria_tradis and generates tab-delimited files containing gene-wise annotations of insert sites and read counts.
  • tradis_essentiality.R - Takes a single tab-delimited file from tradis_gene_insert_sites and produces calls of gene essentiality. Also produces a number of diagnostic plots.
  • tradis_comparison.R - Takes tab files to compare two growth conditions using edgeR. This analysis requires experimental replicates. A sketch of the full workflow is given after this list.
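
A minimal sketch of how these scripts might be chained, assuming they follow the same calling conventions as their Bio-Tradis counterparts (file names are placeholders; check each script's --help for the exact options):

tradis_gene_insert_sites annotation.embl condition_rep1.insert_site_plot.gz
tradis_essentiality.R condition_rep1.tradis_gene_insert_sites.csv
tradis_comparison.R --controls control_files.txt --conditions condition_files.txt -o comparison.csv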

License

QuaTradis is free software, licensed under GPLv3.

Feedback/Issues

Please report any issues to the issues page or email [email protected]

Citation

If you use this software please cite:

"The TraDIS toolkit: sequencing and analysis for dense transposon mutant libraries", Barquist L, Mayho M, Cummins C, Cain AK, Boinett CJ, Page AJ, Langridge G, Quail MA, Keane JA, Parkhill J. Bioinformatics. 2016 Apr 1;32(7):1109-11. doi: 10.1093/bioinformatics/btw022. Epub 2016 Jan 21.

Comments
  • fix channel order in readme

    Channel order is important for bioconda to work correctly -- the conda-forge channel has to come first (which means higher priority when specified on the command line with -c). That might be why some users are getting pysam issues requiring a workaround.

    FYI might also want to consider suggesting --strict-channel-priority, see the new bioconda docs.

    opened by daler 1
  • Fixes for albatradis compatibility

    Fixing name of analysis output files for consumption by albatradis.

    Fixing a mistake when creating gene names during insertion site analysis: underscores in the name shouldn't have been ignored.

    opened by maplesond 0
  • requirements.txt should not list bgzip

    A follow-up to the discussion on the Bioconda PR: the requirements.txt file that you are using should not list bgzip. Names in requirements.txt refer to packages on PyPI, so if you list bgzip, you actually pull in a Python package named bgzip (that is meant to be used via import bgzip from within Python). It will not give you the bgzip binary that your project actually seems to want.

    You cannot list non-Python dependencies in requirements.txt, so you can only list that dependency in the Conda recipe.

    opened by marcelm 0
  • Fixing problems running the job in docker.

    The issue was that the mapping stage outputs files to the current working directory, which the user may not have permission to write to. The fix is to make sure mapping logs are output to the same place as all other output files.

    opened by maplesond 0
  • Nextflow pipeline to replace bacteria_tradis, and implementation of tradis_gene_insert_sites

    Adding nextflow to handle processing of multiple fastq files (similar to bacteria_tradis).

    Add the tradis_gene_insert_sites script and associated functions under isp_analyse. There are still some very small diffs between this and the old BioTradis script in terms of ins_index and ins_count, which I still need to investigate.

    Renamed and refactored a few things.

    Added a few scripts to get closer to feature parity with old BioTradis.

    Tidied up README.

    opened by maplesond 0
  • problem with running tradis pipeline multiple

    Hello,

    When I try to run the following command using quatradis:

    tradis pipeline multiple -v -n 12 -o quatradis_out fastqs_filtered_sizecut_all.txt genome.fa

    this error appears:

    Traceback (most recent call last):
      File "/home/jang/anaconda3/envs/mamba/envs/albatradis/bin/tradis", line 293, in <module>
        main()
      File "/home/jang/anaconda3/envs/mamba/envs/albatradis/bin/tradis", line 285, in main
        args.func(args)
      File "/home/jang/anaconda3/envs/mamba/envs/albatradis/bin/tradis", line 202, in run_multiple_pipeline
        tradis.run_multi_tradis(args.fastqs, args.reference,
      File "/home/jang/anaconda3/envs/mamba/envs/albatradis/lib/python3.9/site-packages/quatradis/tradis.py", line 142, in run_multi_tradis
        pipeline = find_pipeline_file()
      File "/home/jang/anaconda3/envs/mamba/envs/albatradis/lib/python3.9/site-packages/quatradis/tradis.py", line 101, in find_pipeline_file
        if os.path.exists(exe_path):
      File "/home/jang/anaconda3/envs/mamba/envs/albatradis/lib/python3.9/genericpath.py", line 19, in exists
        os.stat(path)
    TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType

    What am I doing wrong?

    The same input files work smoothly in bacteria_tradis.

    Bests, Jan

    opened by gaworj 1
Owner
Quadram Institute Bioscience