Unsupervised Feature Loss (UFLoss) for High Fidelity Deep learning (DL)-based reconstruction

Overview


Official GitHub repository for the paper High Fidelity Deep Learning-based MRI Reconstruction with Instance-wise Discriminative Feature Matching Loss. In this work, a novel patch-based Unsupervised Feature Loss (UFLoss) is proposed and incorporated into the training of DL-based reconstruction frameworks in order to preserve perceptual similarity and high-order statistics. In-vivo experiments indicate that adding the UFLoss encourages sharper edges and higher overall image quality under a DL-based reconstruction framework. Our implementation is in PyTorch.

Installation

To use this package, install the required python packages (tested with python 3.8 on Ubuntu 20.04 LTS):

pip install -r requirements.txt

Dataset

We used a subset of the fastMRI knee dataset for training and evaluation. Sensitivity maps were pre-computed with ESPIRiT using the BART toolbox. Post-processed data (including sensitivity maps and coil-combined images) and pre-trained models can be requested by emailing [email protected].

Update: We provide our data-preprocessing code at UFloss_training/data_preprocessing.py. This script computes the sensitivity maps and performs data normalization and coil combination. The BART toolbox is required for computing the sensitivity maps. Follow the installation instructions on the BART website and add the following lines to your .bashrc file:

/python/" export PATH=" :$PATH"">
export PYTHONPATH="${PYTHONPATH}:
    
     /python/
     "
    
export PATH="
    
     :
     $PATH
     "
    

To run the data-preprocessing code, download and unzip the fastMRI multi-coil knee dataset, then simply run:

python data_preprocessing.py -l <path to your fastMRI multi-coil dataset> -t <target directory> -c <size for your E-SPIRiT calibration region>

Step 0: Patch Extraction

To extract patches from the fully-sampled training data, go to the UFloss_training/ folder and run patch_extraction.py. Please specify the directories of the training dataset and the target folder. Instructions are available by running:

python patch_extraction.py -h
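
For reference, the sketch below illustrates the general idea of patch extraction (it is not the repository's patch_extraction.py): random square patches are drawn from a coil-combined complex image and stored as two-channel real/imaginary tensors. The patch size and count are illustrative placeholders; use the script's own options for the actual dataset.

import numpy as np
import torch

def extract_patches(image, patch_size=40, num_patches=100, rng=None):
    """image: complex numpy array (H, W); returns a tensor of shape (num_patches, 2, p, p)."""
    rng = rng or np.random.default_rng()
    h, w = image.shape
    patches = []
    for _ in range(num_patches):
        # Draw a random top-left corner and crop a square patch.
        top = rng.integers(0, h - patch_size + 1)
        left = rng.integers(0, w - patch_size + 1)
        patch = image[top:top + patch_size, left:left + patch_size]
        # Store real and imaginary parts as two channels.
        patches.append(np.stack([patch.real, patch.imag]))
    return torch.from_numpy(np.stack(patches)).float()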

Step 1: Train the UFLoss feature mapping network

To train the UFLoss feature mapping network, go to the UFloss_training/ folder and run patch_learning.py. We provide a demo training script to perform the training on fully-sampled patches:

bash launch_training_patch_learning.sh
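
For intuition, the following is a minimal sketch of an instance-wise discriminative training step in the spirit of the paper's title, using a memory bank of normalized patch features in which each patch acts as its own class. The encoder architecture, feature dimension, temperature, and momentum value are illustrative assumptions and do not reflect the actual patch_learning.py configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, num_patches, temperature = 128, 100000, 0.07
encoder = nn.Sequential(                       # stand-in for the real feature-mapping CNN
    nn.Conv2d(2, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
)
memory_bank = F.normalize(torch.randn(num_patches, feat_dim), dim=1)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

def train_step(patches, indices):
    """patches: (B, 2, p, p) real/imag patches; indices: LongTensor of their IDs in the bank."""
    feats = F.normalize(encoder(patches), dim=1)        # (B, feat_dim) unit-norm features
    logits = feats @ memory_bank.t() / temperature      # similarity to every stored patch
    loss = F.cross_entropy(logits, indices)             # each patch is its own class
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():                               # momentum update of the memory bank
        memory_bank[indices] = F.normalize(
            0.5 * memory_bank[indices] + 0.5 * feats.detach(), dim=1)
    return loss.item()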

A visualization script for the patch retrieval results (shown below) will be available soon.

Step 2: Train the DL-based reconstruction with UFLoss

To train the DL-based reconstruction with UFLoss, we provide our source code at DL_Recon_UFLoss/. We adopted MoDL as our DL-based reconstruction network. Training scripts for MoDL with and without UFLoss are provided at DL_Recon_UFLoss/models/unrolled2D/scripts:

bash launch_training_MoDL_traditional_UFLoss_256_demo.sh

You can easily play around with the parameters by editing the training script. A representative reconstruction result is shown below.
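
Conceptually, the UFLoss term compares reconstructed and fully-sampled patches in the learned feature space and is added to the usual per-pixel training loss. The sketch below shows one way such a term could be implemented with a frozen feature network from Step 1; the patch size, stride, weighting factor lam, and function names are illustrative assumptions, not the repository's implementation.

import torch
import torch.nn.functional as F

def ufloss_term(recon, target, feature_net, patch_size=40, stride=20):
    """recon, target: (B, 2, H, W) real/imag images; returns a scalar feature-space loss."""
    # Slide a window over both images and collect overlapping patches.
    rec_patches = F.unfold(recon, patch_size, stride=stride)    # (B, 2*p*p, N)
    tgt_patches = F.unfold(target, patch_size, stride=stride)
    b, _, n = rec_patches.shape
    rec_patches = rec_patches.transpose(1, 2).reshape(b * n, 2, patch_size, patch_size)
    tgt_patches = tgt_patches.transpose(1, 2).reshape(b * n, 2, patch_size, patch_size)
    # Match patches in the feature space of the (frozen) Step-1 network.
    return F.mse_loss(feature_net(rec_patches), feature_net(tgt_patches))

def total_loss(recon, target, feature_net, lam=1.0):
    # Per-pixel fidelity plus the weighted patch-based feature loss.
    return F.l1_loss(recon, target) + lam * ufloss_term(recon, target, feature_net)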

Perform inference with the trained model

To perform reconstruction inference on the testing set, we provide an inference script at DL_Recon_UFLoss/models/unrolled2D/inference_ufloss.py. Run the following command for inference:

python inference_ufloss.py --data-path <Path to the dataset> \
                        --device-num <Which device to train on> \
                        --exp-dir <Path where the results should be saved> \
                        --checkpoint <Path to an existing checkpoint>

Acknowledgements

Reconstruction code borrows heavily from the fastMRI GitHub repo and DL-ESPIRiT by Christopher Sandino. This work is a collaboration between UC Berkeley and GE Healthcare. Please contact [email protected] if you have any questions.

Citation

If you find this code useful for your research, please consider citing our paper High Fidelity Deep Learning-based MRI Reconstruction with Instance-wise Discriminative Feature Matching Loss:

@article{wang2021high,
  title={High Fidelity Deep Learning-based MRI Reconstruction with Instance-wise Discriminative Feature Matching Loss},
  author={Wang, Ke and Tamir, Jonathan I and De Goyeneche, Alfredo and Wollner, Uri and Brada, Rafi and Yu, Stella and Lustig, Michael},
  journal={arXiv preprint arXiv:2108.12460},
  year={2021}
}