Image-to-image regression with uncertainty quantification in PyTorch

Overview

im2im-uq

A platform for image-to-image regression with rigorous, distribution-free uncertainty quantification.


An algorithmic MRI reconstruction with uncertainty. A rapidly acquired but undersampled MR image of a knee (A) is fed into a model that predicts a sharp reconstruction (B) along with a calibrated notion of uncertainty (C). In (C), red means high uncertainty and blue means low uncertainty. Wherever the reconstruction contains hallucinations, the uncertainty is high; see the hallucination in the image patch (E), which has high uncertainty in (F), and does not exist in the ground truth (G).

Summary

This repository provides a convenient way to train deep-learning models in PyTorch for image-to-image regression---any task where the input and output are both images---along with rigorous uncertainty quantification. The uncertainty quantification takes the form of an interval for each pixel which is guaranteed to contain most true pixel values with high probability, no matter the choice of model or dataset (it is a risk-controlling prediction set). The training pipeline already handles multiple GPUs, and all training/calibration should run automatically.
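
For intuition, the guarantee can be checked empirically: across many pixels, the fraction of true values falling inside their calibrated intervals should meet the target level. A minimal sketch in PyTorch (the tensor names, shapes, and stand-in bounds below are illustrative, not the repository's API):

import torch

# Illustrative shapes only: (batch, channels, height, width).
truth = torch.rand(8, 1, 64, 64)              # ground-truth pixel values
lower = truth - 0.1 * torch.rand_like(truth)  # stand-in calibrated lower bound
upper = truth + 0.1 * torch.rand_like(truth)  # stand-in calibrated upper bound

# Fraction of true pixels covered by their intervals; a risk-controlling
# prediction set guarantees this exceeds a user-chosen level (e.g., 90%)
# with high probability, regardless of the model or data distribution.
covered = ((truth >= lower) & (truth <= upper)).float().mean()
print(f"empirical pixelwise coverage: {covered:.3f}")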

The basic workflow is:

  • Define your dataset in core/datasets/.
  • Create a folder for your experiment experiments/new_experiment, along with a file experiments/new_experiment/config.yml defining the model architecture, hyperparameters, and method of uncertainty quantification. You can use experiments/fastmri_test/config.yml as a template.
  • Edit core/scripts/router.py to point to your data directory.
  • From the root folder, run wandb sweep experiments/new_experiment/config.yml, and run the resulting sweep.
  • After the sweep is complete, models will be saved in experiments/new_experiment/checkpoints, the metrics will be printed to the terminal, and outputs will be in experiments/new_experiment/output/. See experiments/fastmri_test/plot.py for an example of how to make plots from the raw outputs.

Following this procedure will train one or more models (depending on config.yml) that perform image-to-image regression with rigorous uncertainty quantification.
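
As one illustration of downstream plotting (the file names and tensor format here are assumptions for the sketch; experiments/fastmri_test/plot.py is the authoritative reference), the per-pixel interval width can be rendered as an uncertainty heatmap like panel (C) above:

import torch
import matplotlib.pyplot as plt

# Hypothetical saved outputs; the real filenames and format live in
# experiments/<experiment>/output/ and are consumed by the provided plot.py.
lower = torch.load("experiments/new_experiment/output/lower.pt")  # (C, H, W)
upper = torch.load("experiments/new_experiment/output/upper.pt")  # (C, H, W)

# Interval width as the uncertainty map: wide interval = high uncertainty.
width = (upper - lower).squeeze(0).numpy()
plt.imshow(width, cmap="coolwarm")  # red = high uncertainty, blue = low
plt.colorbar(label="interval width")
plt.savefig("uncertainty_heatmap.png", dpi=200)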

There are two pre-baked examples that you can run on your own after downloading the open-source data: experiments/fastmri_test/config.yml and experiments/temca_test/config.yml. The third pre-baked example, experiments/bsbcm_test/config.yml, relies on data collected at Berkeley that has not yet been publicly released (but will be soon).

Paper

Image-to-Image Regression with Distribution-Free Uncertainty Quantification and Applications in Imaging

@article{angelopoulos2022image,
  title={Image-to-Image Regression with Distribution-Free Uncertainty Quantification and Applications in Imaging},
  author={Angelopoulos, Anastasios N and Kohli, Amit P and Bates, Stephen and Jordan, Michael I and Malik, Jitendra and Alshaabi, Thayer and Upadhyayula, Srigokul and Romano, Yaniv},
  journal={arXiv preprint arXiv:2202.05265},
  year={2022}
}

Installation

You will need to execute

conda env create -f environment.yml
conda activate im2im-uq

You will also need to go through the Weights and Biases setup process that initiates when you run your first sweep. You may need to make an account on their website.

Reproducing the results

FastMRI dataset

  • Download the FastMRI dataset to your machine and unzip it. We worked with the knee_singlecoil_train dataset.
  • Edit Line 71 of core/scripts/router.py to point to your local copy of the dataset.
  • From the root folder, run wandb sweep experiments/fastmri_test/config.yml
  • After the run is complete, plot the results with python experiments/fastmri_test/plot.py.

TEMCA2 dataset

  • Download the TEMCA2 dataset to your machine and unzip it. We worked with sections 3501 through 3839.
  • Edit Line 78 of core/scripts/router.py to point to your local copy of the dataset.
  • From the root folder, run wandb sweep experiments/temca_test/config.yml
  • After the run is complete, plot the results with python experiments/temca_test/plot.py.

Adding a new experiment

If you want to extend this code to a new experiment, you will need to write some code compatible with our infrastructure. If you are adding a new dataset, you will need to write a valid PyTorch dataset object; if you are adding a new model architecture, you will need to specify it; and so on. Usually, you will want to start by creating a folder experiments/new_experiment along with a config file experiments/new_experiment/config.yml. The easiest way is to start from an existing config, like experiments/fastmri_test/config.yml.

Adding new datasets

To add a new dataset, use the following procedure.

  • Download the dataset to your machine.
  • In core/datasets, make a new folder for your dataset core/datasets/new_dataset.
  • Make a valid PyTorch Dataset class for your new dataset. The most critical part is writing a __getitem__ method that returns an image-image pair in CxHxW order; see core/datasets/bsbcm/BSBCMDataset.py for a simple example, or the sketch after this list.
  • Make a file core/datasets/new_dataset/__init__.py and export your dataset by adding the line from .NewDataset import NewDatasetClass (substituting your filename and class name appropriately).
  • Edit core/scripts/router.py to load your new dataset, near Line 64, following the pattern therein. You will also need to import your dataset object.
  • Populate your new config file experiments/new_experiment/config.yml with the correct directories and experiment name.
  • Execute wandb sweep experiments/new_experiment/config.yml and proceed as normal!
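
As a minimal sketch of such a dataset class (the NewDataset name, file layout, and .pt storage format are placeholders; core/datasets/bsbcm/BSBCMDataset.py remains the canonical example):

import os
import torch
from torch.utils.data import Dataset

class NewDataset(Dataset):
    """Hypothetical paired image dataset: each sample is (input, target)."""

    def __init__(self, root):
        self.root = root
        # Assumes one pre-saved tensor pair per file; adapt to your storage.
        self.files = sorted(f for f in os.listdir(root) if f.endswith(".pt"))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        # Each file is assumed to hold a dict of CxHxW float tensors.
        pair = torch.load(os.path.join(self.root, self.files[idx]))
        return pair["input"], pair["target"]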

Adding new models

In our system, there are two parts to a model---the base architecture, which we call a trunk (e.g. a U-Net), and the final layer. Defining a trunk is as simple as writing a regular PyTorch nn.Module and adding it near Line 87 of core/scripts/router.py (you will also need to import it); see core/models/trunks/unet.py for an example.
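
As a sketch of what a trunk can look like (a deliberately tiny stand-in, not the repository's U-Net):

import torch.nn as nn

class TinyTrunk(nn.Module):
    """Illustrative trunk: maps an image to a same-size feature map."""

    def __init__(self, in_channels=1, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

The final layer then consumes the trunk's output feature channels.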

The process for adding a final layer is a bit more involved. The final layer is simply a PyTorch nn.Module, but it must also come with two functions: a loss function and a nested prediction set function. See core/models/finallayers/quantile_layer.py for an example; a sketch also follows the steps below. The steps are:

  • Create a final layer nn.Module object. The final layer should also have a heuristic notion of uncertainty built in, like quantile outputs.
  • Specify the loss function used to train a network with this final layer.
  • Specify a nested prediction set function that uses the output of the final layer to form a prediction set. The prediction set should scale up and down with a free factor lam, which will later be calibrated. The function should have the same prototype as that on Line 34 of core/models/finallayers/quantile_layer.py.
  • After creating the new final layer and related functions, add it to core/models/add_uncertainty.py as in Line 59.
  • Edit experiments/new_experiment/config.yml to include your new final layer, and run the sweep as normal!
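
Putting these pieces together, a hedged sketch of a quantile-style final layer (the names and signatures below are illustrative; match the real prototypes in core/models/finallayers/quantile_layer.py, not this snippet):

import torch
import torch.nn as nn

class QuantileFinalLayer(nn.Module):
    """Illustrative final layer predicting lower/point/upper maps per pixel."""

    def __init__(self, in_channels, out_channels=1):
        super().__init__()
        # Three heads: lower quantile, point prediction, upper quantile.
        self.head = nn.Conv2d(in_channels, 3 * out_channels, kernel_size=1)

    def forward(self, x):
        lower, pred, upper = self.head(x).chunk(3, dim=1)
        return lower, pred, upper

def quantile_loss(output, target, alpha=0.1):
    """Pinball loss for the quantile heads plus MSE for the point head."""
    lower, pred, upper = output
    def pinball(q_hat, q):
        diff = target - q_hat
        return torch.maximum(q * diff, (q - 1) * diff).mean()
    return (pinball(lower, alpha / 2) + pinball(upper, 1 - alpha / 2)
            + ((pred - target) ** 2).mean())

def nested_sets(output, lam):
    """Prediction set that grows monotonically with the free factor lam."""
    lower, pred, upper = output
    # Inflate the heuristic interval around the point prediction by lam;
    # calibration later picks the smallest lam that controls the risk.
    return pred - lam * (pred - lower), pred + lam * (upper - pred)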