Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers

Overview

Official TensorFlow implementation of the unsupervised reconstruction model using zero-Shot Learned Adversarial TransformERs (SLATER). (https://arxiv.org/abs/2105.08059)

Korkmaz, Y., Dar, S. U., Yurt, M., Ozbey, M., & Cukur, T. (2021). Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers. arXiv preprint arXiv:2105.08059.


Demo

The following commands are used to train and test SLATER to reconstruct undersampled MR acquisitions from single- and multi-coil datasets. You can download pretrained network snapshots and sample datasets from the links given below.

The MRI prior is trained on fully sampled images; during testing, undersampling is performed based on the selected acceleration rate. We used AdamOptimizer during training and RMSPropOptimizer with a momentum parameter of 0.9 during testing/inference. In the current settings AdamOptimizer is used; you can change the underlying optimizer class in dnnlib/tflib/optimizer.py and insert additional parameters such as momentum at line 87 of that file.
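A minimal sketch of this optimizer swap, assuming the TensorFlow 1.x tf.train API (the exact constructor call and its surroundings near line 87 of dnnlib/tflib/optimizer.py may differ in your copy of the code):

import tensorflow as tf

# Sketch only: training defaults to Adam, while testing/inference uses
# RMSProp with momentum 0.9 as described above. The learning rate here is a
# placeholder; keep the value configured by the repository.
train_opt = tf.train.AdamOptimizer(learning_rate=1e-3)
test_opt = tf.train.RMSPropOptimizer(learning_rate=1e-3, momentum=0.9)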

Sample training command for multi-coil (fastMRI) dataset:

python run_network.py --train --gpus=0 --expname=fastmri_t1_train --dataset=fastmri-t1 --data-dir=datasets/multi-coil-datasets/train

Sample reconstruction/test command for fastMRI dataset:

python run_recon_multi_coil.py reconstruct-complex-images --network=pretrained_snapshots/fastmri-t1/network-snapshot-001282.pkl --dataset=fastmri-t1 --acc-rate=4 --contrast=t1 --data-dir=datasets/multi-coil-datasets/test

Sample training command for single-coil (IXI) dataset:

python run_network.py --train --gpus=0 --expname=ixi_t1_train --dataset=ixi_t1 --data-dir=datasets/single-coil-datasets/train

Sample reconstruction/test command for IXI dataset:

python run_recon_single_coil.py reconstruct-magnitude-images --network=pretrained_snapshots/ixi-t1/network-snapshot-001282.pkl --dataset=ixi_t1_test --acc-rate=4 --contrast=t1 --data-dir=datasets/single-coil-datasets/test

Datasets

For the IXI dataset, image dimensions are 256x256. For the fastMRI dataset, image dimensions vary with contrast (T1: 256x320, T2: 288x384, FLAIR: 256x320).

SLATER requires datasets in the tfrecords format. To create tfrecords files for new datasets, you can use dataset_tool.py:

To create single-coil datasets, pass magnitude images to dataset_tool.py with the create_from_images function by providing the directory containing the images in .png format. We include undersampling masks under datasets/single-coil-datasets/test.
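A hedged sample command, assuming the StyleGAN-2 style argument order (output tfrecords directory followed by the .png image directory; check dataset_tool.py for the exact interface and paths):

python dataset_tool.py create_from_images datasets/single-coil-datasets/train/ixi_t1 /path/to/png_images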

To create multi-coil datasets, provide hdf5 files containing fully sampled, coil-combined complex images in a variable named 'images_fs' with shape [num_of_images, x, y] (this can be modified accordingly). To do this, use the create_from_hdf5 function in dataset_tool.py.
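A minimal sketch of writing such a file with h5py (the variable name and shape follow the description above; the file path and image contents are placeholders):

import h5py
import numpy as np

# Placeholder array: fully sampled, coil-combined complex images with
# shape [num_of_images, x, y], as expected by create_from_hdf5.
num_of_images, x, y = 100, 256, 320
images_fs = np.zeros((num_of_images, x, y), dtype=np.complex64)

with h5py.File('datasets/multi-coil-datasets/train/fastmri_t1_train.h5', 'w') as f:
    f.create_dataset('images_fs', data=images_fs)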

The MRI priors are trained on coil-combined datasets saved in tfrecords files with a 3-channel order of [real, imaginary, dummy]. For test purposes, we include sample coil-sensitivity maps (a complex 4-dimensional variable [x, y, num_of_image, num_of_coils] named 'coil_maps') and undersampling masks (a 3-dimensional variable [x, y, num_of_image] named 'map') under datasets/multi-coil-datasets/test in hdf5 format.
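A minimal sketch for checking these test-time hdf5 variables and the 3-channel layout with h5py/numpy (file names are placeholders; variable names and dimension orders follow the description above):

import h5py
import numpy as np

# Placeholder file names; use the actual files under datasets/multi-coil-datasets/test.
with h5py.File('datasets/multi-coil-datasets/test/coil_maps.h5', 'r') as f:
    coil_maps = f['coil_maps'][()]   # complex, [x, y, num_of_image, num_of_coils]
with h5py.File('datasets/multi-coil-datasets/test/masks.h5', 'r') as f:
    mask = f['map'][()]              # [x, y, num_of_image]
print(coil_maps.shape, coil_maps.dtype, mask.shape)

# Channel layout used when storing a coil-combined complex image 'img' ([x, y])
# in the training tfrecords: [real, imaginary, dummy].
img = np.zeros((256, 320), dtype=np.complex64)  # placeholder image
channels = np.stack([img.real, img.imag, np.zeros_like(img.real)], axis=0)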

Coil-sensitivity maps are estimated using ESPIRiT (http://people.eecs.berkeley.edu/~mlustig/Software.html). Network implementations use libraries from the Gansformer (https://github.com/dorarad/gansformer) and StyleGAN-2 (https://github.com/NVlabs/stylegan2) repositories.


Pretrained networks

You can download pretrained network snapshots and sample datasets from the links below. Place the downloaded folders (datasets and pretrained_snapshots) under the main repo directory to run the sample test commands given above.

Pretrained network snapshots for IXI-T1 and fastMRI-T1 can be downloaded from Google Drive: https://drive.google.com/drive/folders/1_69T1KUeSZCpKD3G37qgDyAilWynKhEc?usp=sharing

Sample training and test datasets for IXI-T1 and fastMRI-T1 can be downloaded from Google Drive: https://drive.google.com/drive/folders/1hLC8Pv7EzAH03tpHquDUuP-lLBasQ23Z?usp=sharing


Notice for training with multi-coil datasets

To train on multi-coil (complex) datasets, you need to change a few lines in training_loop.py:

  • Comment out line 8.
  • Uncomment line 9.
  • Comment out line 23.

Citation

You are encouraged to modify/distribute this code. However, please acknowledge this code and cite the paper appropriately.

@article{korkmaz2021unsupervised,
  title={Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers},
  author={Korkmaz, Yilmaz and Dar, Salman UH and Yurt, Mahmut and {\"O}zbey, Muzaffer and {\c{C}}ukur, Tolga},
  journal={arXiv preprint arXiv:2105.08059},
  year={2021}
}

(c) ICON Lab 2021


Prerequisites

  • Python 3.6
  • CuDNN 10.1
  • TensorFlow 1.14 or 1.15

Acknowledgements

This code uses libraries from the StyleGAN-2 (https://github.com/NVlabs/stylegan2) and Gansformer (https://github.com/dorarad/gansformer) repositories.

For questions/comments please send me an email: [email protected]

