A package and script to perform imaging transcriptomics on a neuroimaging scan.

Overview

Imaging Transcriptomics


Imaging transcriptomics is a methodology that identifies patterns of correlation between gene expression and some property of brain structure or function as measured by neuroimaging (e.g., MRI, fMRI, PET).


The imaging-transcriptomics package allows you to perform an imaging transcriptomics analysis on a neuroimaging scan (e.g., PET, MRI, fMRI).

The software is implemented in Python 3 (v3.7), its source code is available on GitHub, and it can be installed via PyPI. It is released under the GPL v3 license.

NOTE: Versions from v1.0.0 onwards are, or will be, maintained. The original script linked in the bioRxiv preprint (v0.0) is still available on GitHub, but no further changes will be made to that code. If you have downloaded or used that script, please update by installing this newer version.

Installation

NOTE: We recommend installing the package in a dedicated environment of your choice (e.g., venv or anaconda). Once you have created and activated your environment, you can follow the guide below to install the package and its dependencies. This avoids clashes between conflicting packages during or after the installation.
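
For example, if you choose venv, a dedicated environment could be created and activated as follows (the environment name imt-env is just an illustrative choice; on Windows the activation script lives under Scripts\ instead of bin/):

python3 -m venv imt-env
source imt-env/bin/activate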

To install the imaging-transcriptomics Python package, you first need to install a package that can't be installed directly from PyPI and must instead be downloaded from GitHub. The package to install is pypyls. To install it you can follow the installation instructions in its documentation or simply run the command

pip install -e git+https://github.com/netneurolab/pypyls.git/#egg=pyls

to download the package and its dependencies directly from GitHub using pip.

Once this package is installed you can install the imaging-transcriptomics package by running

pip install imaging-transcriptomics
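
To check that the installation completed successfully you can, as a quick sanity check (this command is not part of the official instructions), try importing the package from the same environment:

python -c "import imaging_transcriptomics"

If the command returns without errors, the package and its dependencies were installed correctly.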

Usage

Once installed, the software can be used in two ways:

  • as a standalone script
  • as part of a Python script

WARNING: Before running the script make sure the Python environment where you have installed the package is activated.

Standalone script


To run the standalone script from the terminal use the command:

imagingtranscriptomics options

The options available are:

  • -i (--input): Path to the imaging file to analyse. The path should be given to the program as an absolute path (e.g., /Users/myusername/Documents/my_scan.nii), since a relative path could raise permission errors and crashes. The script only accepts imaging files in the NIfTI format (.nii, .nii.gz).
  • -v (--variance): Amount of variance that the PLS components must explain. This MUST be in the range 0-100.

    NOTE: if the variance given as input is in the range 0-1 the script treats it as a fraction, exactly as it would treat the equivalent percentage (e.g., the inputs -v 30 and -v 0.3 are treated in the same way and the resulting components will explain 30% of the variance).

  • -n (--ncomp): Number of components to be used in the PLS regression. The number MUST be in the range 1-15.
  • --corr: Run the analysis using Spearman correlation instead of PLS.

    NOTE: if you run with the --corr flag no other input is required, apart from the input scan (-i).

  • -o (--output) (optional): Path where to save the results. If none is provided the results will be saved in the same directory as the input scan.

WARNING: The -i flag is MANDATORY to run the script, and so is exactly one of the -n or -v flags. These last two are mutually exclusive, meaning that only one of them can be given as input (see the example invocations below).
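
For example, a PLS analysis asking for components that explain 30% of the variance, and a correlation analysis on the same scan, could be run as follows (the scan path is just a placeholder):

imagingtranscriptomics -i /Users/myusername/Documents/my_scan.nii.gz -v 30

imagingtranscriptomics -i /Users/myusername/Documents/my_scan.nii.gz --corr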

Part of a Python script


When used as part of a Python script the library can be imported as:

import imaging_transcriptomics as imt

The core class of the package is the ImagingTranscriptomics class, which gives access to the methods used in the standalone script. To use the analysis in your scripts you can initialise the class and then simply call the ImagingTranscriptomics().run() method.

import numpy as np
import imaging_transcriptomics as imt

my_data = np.ones(41)  # MUST be of size 41
                       # (corresponds to the regions in the left hemisphere of the DK atlas)

analysis = imt.ImagingTranscriptomics(my_data, n_components=1)
analysis.run()

# If instead of running PLS you want to analyze the data with correlation,
# you can run the analysis with:
analysis.run(method="corr")

Once the analysis has completed, the results are stored in the analysis object and can be accessed with analysis.gene_results.

Importing the imaging_transcriptomics package also imports other helpful functions for input and reporting. For a complete explanation of these please refer to the official documentation of the package.
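
As a slightly fuller sketch, the regional values could also be read from a file rather than hard-coded. In the snippet below the file name and the use of np.loadtxt are illustrative assumptions; only ImagingTranscriptomics, n_components, run() and gene_results come from the documented API shown above:

import numpy as np
import imaging_transcriptomics as imt

# Hypothetical text file with 41 values, one per region of the left hemisphere
# of the DK atlas, in the order expected by the package.
regional_values = np.loadtxt("my_regional_values.txt")
assert regional_values.size == 41  # the class expects exactly 41 regional values

analysis = imt.ImagingTranscriptomics(regional_values, n_components=1)
analysis.run()

# The results are attached to the analysis object once run() has finished.
print(analysis.gene_results)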

Documentation

The documentation of the script is available at imaging-transcriptomics.rtfd.io.

Troubleshooting

For any problems with the software you can open an issue on GitHub or contact the maintainer of the package.

Citing

If you publish work using imaging-transcriptomics as part of your analysis please cite:

Imaging transcriptomics: Convergent cellular, transcriptomic, and molecular neuroimaging signatures in the healthy adult human brain. Daniel Martins, Alessio Giacomel, Steven CR Williams, Federico Turkheimer, Ottavia Dipasquale, Mattia Veronese, PET templates working group. bioRxiv 2021.06.18.448872; doi: https://doi.org/10.1101/2021.06.18.448872

Imaging-transcriptomics: Second release update (v1.0.2). Alessio Giacomel & Daniel Martins. (2021). Zenodo. https://doi.org/10.5281/zenodo.5726839

Comments
  • pip installation cannot resolve enigmatoolbox dependencies

    After pip install -e git+https://github.com/netneurolab/pypyls.git/#egg=pyls and pip install imaging-transcriptomics in a new conda environment with Python 3.8, an error occurred when importing the imaging-transcriptomics package: it could not find the module named enigmatoolbox. It turns out that the enigmatoolbox package cannot be resolved automatically by pip, so I had to install it manually from GitHub, with the commands below, following the enigmatoolbox documentation:

    git clone https://github.com/MICA-MNI/ENIGMA.git
    cd ENIGMA
    python setup.py install
    
    opened by YCHuang0610 4
  • DK atlas regions

    Dear alegiac95,

    thanks for providing the scripts! I have just gone through the paper and the description of this GitHub repo and I want to adapt your software to my project. However, I use the typical implementation of the DK atlas from FreeSurfer with 34 cortical DK ROIs instead of the 41 ROIs that you have used and, if I'm not mistaken, 41 ROIs are required to use the script as it is. Is it possible to change the input to other cortical parcellations as well (i.e., DK-34)?

    Cheers, Melissa

    enhancement 
    opened by Melissa1909 3
  • Script not calling the correct python version

    The script in version v1.0.0 invokes the #!/usr/bin/env python interpreter, which could generate some issues if your default python is python2 (e.g., in older macOS versions).

    bug 
    opened by alegiac95 1
  • Version 1.1.0

    Updated the scripts with:

    • support for both full-brain analysis and cortical regions only
    • GSEA analysis (both during the analysis and as a separate script)
    • PDF report of the analysis
    opened by alegiac95 0
  • clean code and fix test

    This commit performs extensive code cleanup following the PEP8 standard. It also fixes a test that was most probably intended for previous, unstable versions of the software.

    Still to do:

    • Remove logging
    opened by matteofrigo 0
  • Add mathematical background on PLS

    A more detailed explanation of the PLS model and regression is required in the docs.

    • [ ] Add a general mathematical formulation of PLS
    • [ ] Use of PLS in neuroimaging applications
    • [ ] Description of the SIMPLS algorithm used by pypls

    In addition, provide some background on correlation, since it has now been added to the methods available in the Python package/script.

    documentation 
    opened by alegiac95 0