A spatial genome aligner for analyzing multiplexed DNA-FISH imaging data.

jie

jie is a spatial genome aligner. This package parses true chromatin imaging signal from noise by aligning signals to a reference DNA polymer model.

The codename is a tribute to the Chinese homophones:

  • 结 (jié) : a knot, a nod to the mysterious and often entangled structures of DNA
  • 解 (jiĕ) : to solve, to untie, our bid to uncover these structures amid noise and uncertainty
  • 姐 (jiĕ) : sister, our ability to resolve tightly paired replicated chromatids

Installation

Step 1 - Clone this repository:

git clone https://github.com/b2jia/jie.git
cd jie

Step 2 - Create a new conda environment and install dependencies:

conda env create --name jie -f environment.yml
conda activate jie

Step 3 - Install jie:

pip install -e .

To test, run:

python -W ignore test/test_jie.py
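
As an extra sanity check that the editable install is importable (assuming the repository exposes a top-level jie package; this check is not part of the documented workflow), you can run the following from a Python session:

# Quick import check: prints where the `jie` package was installed from
import jie
print(jie.__file__)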

Usage

jie is an exposition of chromatin tracing using polymer physics. The main function of this package is to illustrate the utility and power of spatial genome alignment.

jie is NOT an all-purpose spatial genome aligner. Chromatin imaging is a nascent field and data collection is still being standardized; this aligner may not be compatible with all imaging protocols, data formats, and other experimental variables.

We provide a vignette under jie/jupyter/, with emphasis on inspectability. It walks through the intuition behind our spatial genome alignment and polymer fiber karyotyping routines:

00-spatial-genome-alignment-walk-thru.ipynb

We also provide a series of Jupyter notebooks (jie/jupyter/), with emphasis on reproducibility. These reproduce the figures from our accompanying manuscript:

01-seqFISH-plus-mouse-ESC-spatial-genome-alignment.ipynb
02-seqFISH-plus-mouse-ESC-polymer-fiber-karyotyping.ipynb
03-seqFISH-plus-mouse-brain-spatial-genome-alignment.ipynb
04-seqFISH-plus-mouse-brain-polymer-fiber-karyotyping.ipynb
05-bench-mark-spatial-genome-agignment-against-chromatin-tracing-algorithm.ipynb

A command-line tool is forthcoming.

Motivation

Multiplexed DNA-FISH is a powerful imaging technology that enables us to peer directly at the spatial location of genes inside the nucleus. Each gene appears as a tiny dot under imaging.

Pivotally, figuring out which dots are physically linked would trace out the structure of chromosomes. Unfortunately, imaging is noisy, and single-cell biology is extremely variable. The two confound each other, making chromatin tracing prohibitively difficult!

For instance, in a diploid cell line with two copies of a gene, we expect to see two spots. But what happens when we see:

  • Extra signals:
    • Is it noise?
      • Off-target labeling: The FISH probes might inadvertently label an off-target gene
    • Or is it biological variation?
      • Aneuploidy: A cell (e.g. a cancerous cell) may carry extra copies of a gene
      • Cell cycle: When a cell gets ready to divide, it duplicates its genes
  • Missing signals:
    • Is it noise?
      • Poor probe labeling: The FISH probes never labeled the intended target gene
    • Or is it biological variation?
      • Copy Number Variation: A cell may have a gene deletion

If true signal and noise are indistinguishable, how do we know we are selecting true signals during chromatin tracing? It is not obvious which spots should be connected as part of a chromatin fiber. This dilemma was first aptly characterized by Ross et al. (https://journals.aps.org/pre/abstract/10.1103/PhysRevE.86.011918), a work that was nothing short of prescient.
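
To appreciate the scale of this ambiguity, consider a toy count of candidate traces (an illustration of the problem, not how jie searches the space): with only a handful of candidate spots per imaged locus, the number of possible fiber assignments explodes combinatorially.

# Illustrative back-of-the-envelope calculation (not part of jie):
# each of `n_loci` imaged loci yields `n_spots` candidate spots, and every
# locus may also be skipped as noise or a failed hybridization, so the number
# of possible single-fiber traces is (n_spots + 1) ** n_loci.
def n_candidate_traces(n_loci, n_spots):
    return (n_spots + 1) ** n_loci

print(n_candidate_traces(10, 3))   # 1048576
print(n_candidate_traces(60, 3))   # ~1.3e36, far too many to enumerate by brute force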

jie is, conceptually, a spatial genome aligner that disambiguates spot selection by checking each imaged signal against a reference polymer physics model of chromatin. It relies on the key insight that the spatial separation between two genes should be congruent with their genomic separation.

It makes no assumptions about the expected copy number of a gene; instead, it traces chromatin by evaluating the physical likelihood of the candidate chromatin fiber. In doing so, it can uncover copy number variations and even sister chromatids in multiplexed DNA-FISH imaging data.
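
To make the key insight concrete, here is a minimal, self-contained sketch (a simplified stand-in, not jie's actual model, parameters, or API) of how a candidate trace can be scored under an ideal Gaussian-chain polymer, where the displacement between two loci has variance proportional to their genomic separation:

import numpy as np
from scipy.stats import norm

def trace_log_likelihood(positions_nm, genomic_sep_bp, var_nm2_per_bp=0.1):
    """Log-likelihood of a candidate trace under an ideal Gaussian-chain model.

    positions_nm    : (L, 3) array of 3D spot coordinates for consecutive loci
    genomic_sep_bp  : (L-1,) array of genomic separations between adjacent loci
    var_nm2_per_bp  : assumed displacement variance per base pair (illustrative value)
    """
    logl = 0.0
    for sep, p0, p1 in zip(genomic_sep_bp, positions_nm[:-1], positions_nm[1:]):
        sigma = np.sqrt(var_nm2_per_bp * sep)            # spatial spread grows with genomic distance
        logl += norm.logpdf(p1 - p0, scale=sigma).sum()  # each coordinate ~ Normal(0, sigma^2)
    return logl

# A trace whose inter-spot distances match the expected scaling scores higher
# than one that detours through an off-target or duplicated spot.
plausible   = np.array([[0.0, 0, 0], [300, 50, 0], [450, 200, 100]])
with_outlier = np.array([[0.0, 0, 0], [2500, 2500, 2500], [450, 200, 100]])
seps = np.array([500_000, 250_000])
print(trace_log_likelihood(plausible, seps) > trace_log_likelihood(with_outlier, seps))  # True

jie generalizes this idea by aligning all imaged signals against its reference polymer model rather than scoring a single hand-picked trace, so physically implausible connections are penalized.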

Citation

Contact

Author: Bojing (Blair) Jia
Email: b2jia at eng dot ucsd dot edu
Position: MD-PhD Student, Ren Lab

For other work related to single-cell biology, 3D genome, and chromatin imaging, please visit Prof. Bing Ren's website: http://renlab.sdsc.edu/
