A framework for joint super-resolution and image synthesis, without requiring real training data


SynthSR

This repository contains code to train a Convolutional Neural Network (CNN) for Super-resolution (SR), or joint SR and data synthesis. The method can also be configured to achieve denoising and bias field correction.

The network takes synthetic scans generated on the fly as inputs, and can be trained to regress either real or synthetic target scans. The synthetic scans are obtained by sampling a generative model built on the SynthSeg [1] package, which we really encourage you to have a look at!


In short, synthetic scans are generated at each mini-batch by: 1) randomly selecting a label map from a pool of training segmentations, 2) spatially deforming it in 3D, 3) sampling a Gaussian Mixture Model (GMM) conditioned on the deformed label map (see Figure 1 below), and 4) corrupting it with a random bias field. This gives us a synthetic scan at high resolution (HR). We then simulate thick slice spacing by blurring and downsampling it to low resolution (LR). For SR, we train a network to learn the mapping between LR data (possibly multimodal, hence the joint synthesis) and the HR synthetic scans. Moreover, if real images are available along with the training label maps, we can learn to regress the real images instead.
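The low-resolution simulation step can be illustrated in a few lines. The sketch below is not the repository's implementation: the blurring heuristic, the resolutions, and the random HR volume are assumptions chosen purely for the example.

```python
# Minimal sketch of the LR simulation step (not the repository's code):
# blur a high-resolution volume to mimic thick slices, then downsample it.
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_low_resolution(hr_volume, hr_spacing=1.0, lr_spacing=5.0):
    """Blur and downsample an isotropic HR volume along the slice axis."""
    # Match the blurring kernel to the slice thickness ratio
    # (a common heuristic; the real generative model may differ).
    sigma = (lr_spacing / hr_spacing) / 2.355  # FWHM -> standard deviation
    blurred = gaussian_filter(hr_volume, sigma=[0., 0., sigma])
    step = int(round(lr_spacing / hr_spacing))
    return blurred[:, :, ::step]

hr = np.random.rand(160, 160, 160).astype(np.float32)  # stand-in HR scan
lr = simulate_low_resolution(hr)  # shape (160, 160, 32) for 5 mm spacing
```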


Figure 1: overview of SynthSR.


Tutorials for Generation and Training

This repository contains code to train your own network for SR or joint SR and synthesis. Because the training function has many options, we provide tutorials to familiarise yourself with the different training/generation parameters. We emphasise that we provide example training data along with these scripts: 5 preprocessed, publicly available T1 scans at 1 mm isotropic resolution [2] with corresponding label maps obtained with FreeSurfer [3]. The tutorials can be found in scripts, and they include:

  • Six generation scripts corresponding to different use cases (see Figure 2 below). We recommend going through all of them (even if you're only interested in case 1), since we successively introduce different functionalities along the way.

  • One training script, explaining the main training parameters.

  • One script explaining how to estimate the parameters governing the GMM, in case you wish to train a model on your own data (a toy illustration of this estimation is given after this list).
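To make the last point concrete, here is a toy illustration of what estimating the GMM priors amounts to: computing, for each label, the mean and standard deviation of the image intensities it covers. This is only an illustration under those assumptions, not the API of estimate_priors.py.

```python
# Toy illustration of GMM prior estimation (not the estimate_priors.py API):
# for each label, compute the mean and std of the image intensities it covers.
import numpy as np

def estimate_gmm_priors(image, labels):
    """Return a {label: (mean, std)} dictionary of intensity statistics."""
    priors = {}
    for lab in np.unique(labels):
        values = image[labels == lab]
        priors[int(lab)] = (float(values.mean()), float(values.std()))
    return priors

# Example with random arrays; real usage would load a scan and its
# FreeSurfer segmentation (e.g. with nibabel) instead.
image = np.random.rand(64, 64, 64)
labels = np.random.randint(0, 4, size=(64, 64, 64))
print(estimate_gmm_priors(image, labels))
```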


Figure 2: Examples generated by running the tutorials on the provided data [2]. For each use case, we show the synthetic images used as inputs to the network, as well as the regression target.


Content

  • SynthSR: this is the main folder containing the generative model and training function:

    • labels_to_image_model.py: builds the generative model.

    • brain_generator.py: contains the class BrainGenerator, which is a wrapper around the model. New images can simply be generated by instantiating an object of this class and calling the method generate_image() (see the usage sketch after this list).

    • model_inputs.py: prepares the inputs of the generative model.

    • training.py: contains the function to train the network. All training parameters are explained there.

    • metrics_model.py: contains a Keras model that implements different loss functions.

    • estimate_priors.py: contains functions to estimate the prior distributions of the GMM parameters.

  • data: this folder contains the data for the tutorials (T1 scans [2], corresponding FreeSurfer segmentations and some other useful files)

  • scripts: in addition to the tutorials, this folder contains a script to launch trainings from the terminal.

  • ext: contains external packages.
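As a quick complement to the list above, here is a hedged usage sketch of BrainGenerator. Only the class name and the generate_image() method come from this README; the constructor argument labels_dir and the returned input/target pair are assumptions, so refer to the tutorials in scripts for the actual options.

```python
# Hedged usage sketch: the constructor argument and the return signature
# are assumptions; see the tutorials in scripts/ for the real options.
from SynthSR.brain_generator import BrainGenerator

# 'labels_dir' is a hypothetical path to the pool of training label maps.
brain_generator = BrainGenerator(labels_dir='data/labels')

# Draw a new synthetic example on the fly; we assume the method returns
# the simulated LR input and the HR regression target.
input_scan, target_scan = brain_generator.generate_image()
```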


Requirements

This code relies on several external packages (already included in ext):

  • lab2im: contains functions for data augmentation, and a simple version of the generative model, which we extend to build labels_to_image_model [1].

  • neuron: contains functions for deforming and resizing tensors, as well as functions to build the segmentation network [4,5].

  • pytool-lib: library required by the neuron package.

All the other requirements are listed in requirements.txt. We list here the most important dependencies:

  • tensorflow-gpu 2.0
  • tensorflow_probability 0.8
  • keras > 2.0
  • cuda 10.0 (required by tensorflow)
  • cudnn 7.0
  • nibabel
  • numpy, scipy, sklearn, tqdm, pillow, matplotlib, ipython, ...

Citation/Contact

This repository contains the code related to a submission that is still under review.

If you have any questions regarding the usage of this code, or any suggestions to improve it, you can contact us at:
[email protected]


References

[1] A Learning Strategy for Contrast-agnostic MRI Segmentation
Benjamin Billot, Douglas N. Greve, Koen Van Leemput, Bruce Fischl, Juan Eugenio Iglesias*, Adrian V. Dalca*
*contributed equally
MIDL 2020

[2] A novel in vivo atlas of human hippocampal subfields using high-resolution 3 T magnetic resonance imaging
J. Winterburn, J. Pruessner, S. Chavez, M. Schira, N. Lobaugh, A. Voineskos, M. Chakravarty
NeuroImage (2013)

[3] FreeSurfer
Bruce Fischl
NeuroImage (2012)

[4] Anatomical Priors in Convolutional Networks for Unsupervised Biomedical Segmentation
Adrian V. Dalca, John Guttag, Mert R. Sabuncu
CVPR 2018

[5] Unsupervised Data Imputation via Variational Inference of Deep Subspaces
Adrian V. Dalca, John Guttag, Mert R. Sabuncu
arXiv preprint (2019)
