Equivariant Imaging: Learning Beyond the Range Space


Dongdong Chen, Julián Tachella, Mike E. Davies.

The University of Edinburgh

In ICCV 2021 (oral)

Figure: Learning to image from only measurements. Training an imaging network through just measurement consistency (MC) does not significantly improve the reconstruction over the simple pseudo-inverse ($A^\dagger y$). However, by enforcing invariance in the reconstructed image set, equivariant imaging (EI) performs almost as well as a fully supervised network. Top: sparse-view CT reconstruction; bottom: pixel inpainting. PSNR is shown in the top-right corner of each image.

EI is a new self-supervised, end-to-end, physics-based learning framework for inverse problems with theoretical guarantees. It leverages two simple but fundamental priors about natural signals: symmetry (group invariance) and low-dimensionality.

Get started quickly

  • See the blog post for a quick introduction to EI.
  • The core implementation of EI is at './ei/closure/ei.py' (ei.py).
  • The 30-line script get_started.py and the toy compressed sensing (cs) example are the quickest way to get started with EI.

Overview

The problem: Imaging systems capture noisy measurements $y = A x + \epsilon$ of a signal $x$ through a linear operator $A$. We aim to learn the reconstruction function $f(y) \approx x$, where

  • NO ground-truth data is available for training, as most inverse problems don't have ground truth;
  • only a single forward operator $A$ is available;
  • $A$ has a non-trivial nullspace (e.g. $m \ll n$, i.e. far fewer measurements than signal entries); a toy example follows below.
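
As a purely illustrative toy example (not code from this repository), pixel inpainting can be written as a binary mask: a linear operator $A$ whose nullspace contains every dropped pixel, so those pixels never appear in $y$.

```python
import torch

torch.manual_seed(0)
n = 8 * 8                                # a flattened 8x8 image
mask = (torch.rand(n) > 0.3).float()     # 1 = observed pixel, 0 = dropped pixel

def A(x):
    """Toy inpainting forward operator: keeps only the observed pixels."""
    return mask * x

def A_dagger(y):
    """For a masking operator, the pseudo-inverse is the mask itself."""
    return mask * y

x = torch.randn(n)
y = A(x)    # roughly 30% of x lies in the nullspace of A and is absent from y
```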

The challenge:

  • We have NO information about the signal set outside the range space of $A^\top$ (equivalently, of $A^\dagger$).
  • It is IMPOSSIBLE to learn the signal set using the measurements $y$ alone; a toy illustration follows below.
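
One way to see the impossibility: any two signals that differ only inside the nullspace of $A$ produce exactly the same measurements, so no loss computed from $y$ alone can prefer one over the other. A tiny self-contained illustration with a hypothetical toy operator:

```python
import torch

mask = torch.tensor([1., 1., 0., 1.])      # toy operator A: the 3rd pixel is never observed
A = lambda x: mask * x

x1 = torch.tensor([0.3, 0.7, 0.5, 0.2])
x2 = torch.tensor([0.3, 0.7, -4.0, 0.2])   # differs from x1 only in the nullspace of A
assert torch.equal(A(x1), A(x2))           # identical y: measurements cannot tell them apart
```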

The motivation:

We assume the signal set $X$ has a low-dimensional structure and is invariant to a group of transformations $\{T_g\}_{g \in G}$ (orthogonal matrices, e.g. shifts, rotations, scalings, reflections, etc.), such that $T_g x \in X$ for all $x \in X$ and $g \in G$, i.e. the sets $T_g X$ and $X$ are the same. For example,

  • natural images are shift invariant.
  • in CT/MRI data, organs can be imaged at different angles, making the problem invariant to rotation.
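
In code, such transformation groups are simple to implement; the shifts and 90-degree rotations below are illustrative choices, not necessarily the exact groups used in the repository.

```python
import torch

# Hypothetical transformations T_g acting on image batches of shape (B, C, H, W).
def random_shift(x):
    """Shift-invariant signals (e.g. natural images): cyclic shift by a random offset."""
    h, w = x.shape[-2:]
    dh, dw = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
    return torch.roll(x, shifts=(dh, dw), dims=(-2, -1))

def random_rot90(x):
    """Rotation-invariant signals (e.g. CT): rotate by a random multiple of 90 degrees."""
    k = torch.randint(4, (1,)).item()
    return torch.rot90(x, k, dims=(-2, -1))
```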

Key observations:

  • Invariance provides access to implicit operators with potentially different range spaces: $y = A x = A T_g (T_g^{-1} x) = A_g x_g$, where $A_g = A T_g$ and $x_g = T_g^{-1} x$. By the invariance assumption, $x_g$ is also in the signal set.
  • The composition $f \circ A$ is equivariant to the group of transformations: $f(A T_g x) = T_g f(A x)$ for all $g \in G$.

Figure: Learning with and without equivariance in a toy 1D signal inpainting task. The signal set consists of different scalings of a triangular signal. On the left, the dataset does not enjoy any invariance, and hence it is not possible to learn the data distribution in the nullspace of $A$. In this case, the network can inpaint the signal in an arbitrary way (in green) while achieving zero data-consistency loss. On the right, the dataset is shift invariant. The range space of $A^\top$ is shifted via the transformations $T_g$, and the network inpaints the signal correctly.

Equivariant Imaging: to learn the reconstruction function $f$ using only measurements $\{y_i\}$, all you need to do is:

  • Define:
  1. define a transformation group $\{T_g\}_{g \in G}$ based on the invariances of the signal set.
  2. define a neural reconstruction function $f_\theta$, e.g. $f_\theta = G_\theta \circ A^\dagger$, where $A^\dagger$ is the (approximate) pseudo-inverse of $A$ and $G_\theta$ is a UNet-like neural net.
  • Calculate:
  1. calculate $x^{(1)} = f_\theta(y)$ as the estimate of $x$.
  2. calculate $x^{(2)} = T_g x^{(1)}$ by transforming $x^{(1)}$.
  3. calculate $x^{(3)} = f_\theta(A x^{(2)})$ by reconstructing $x^{(2)}$ from its measurement $A x^{(2)}$.

Figure: flowchart of the EI training procedure.

  • Train: finally, learn the reconstruction function $f_\theta$ by solving $\min_\theta \, \|A x^{(1)} - y\|^2 + \alpha \, \|x^{(3)} - x^{(2)}\|^2$, i.e. a measurement-consistency term plus an equivariance term with trade-off weight $\alpha$. A sketch of one training step is given below.
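
To make the recipe concrete, here is a minimal, hypothetical PyTorch sketch of one EI training step. The names (net, A, A_dagger, T, alpha) are placeholders chosen for this illustration; the repository's actual implementation lives in './ei/closure/ei.py'.

```python
import torch
import torch.nn.functional as F

def ei_training_step(net, A, A_dagger, T, optimizer, y, alpha=1.0):
    """One self-supervised EI step on a batch of measurements y (no ground truth).

    net:      UNet-like network G_theta; the reconstruction is f(y) = G_theta(A_dagger(y))
    A:        forward operator (callable); A_dagger: its (approximate) pseudo-inverse
    T:        draws a random transformation T_g from the group and applies it
    alpha:    trade-off weight of the equivariance term (hypothetical default)
    """
    optimizer.zero_grad()

    x1 = net(A_dagger(y))            # (1) estimate x from the measurements
    x2 = T(x1)                       # (2) transform the estimate
    x3 = net(A_dagger(A(x2)))        # (3) reconstruct x2 from its own measurement

    loss_mc = F.mse_loss(A(x1), y)   # measurement consistency
    loss_eq = F.mse_loss(x3, x2)     # equivariance of f ∘ A
    loss = loss_mc + alpha * loss_eq

    loss.backward()
    optimizer.step()
    return loss.item()
```

A fresh $T_g$ is drawn at every step, so over training the network is effectively probed through many virtual operators $A T_g$ whose range spaces differ.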

Requirements

All required packages are listed in the Anaconda environment.yml file. To create the environment, run

conda env create -f environment.yml

Test

We provide the trained models used in the paper; they can be downloaded from Google Drive. Put the downloaded folder 'ckp' in the root path, then evaluate the trained models by running

python3 demo_test_inpainting.py

and

python3 demo_test_ct.py

Train

To train EI for a given inverse problem (inpainting or CT), run

python3 demo_train.py --task 'inpainting'

or run a bash script to train the models for both CT and inpainting tasks.

bash train_paper_bash.sh

Train your models

To train your EI models on your dataset for a specific inverse problem (e.g. inpainting), run

python3 demo_train.py --h
  • Note: you may have to implement the forward model (physics) if you want to solve a new inverse problem (a hypothetical sketch is given below).
  • Note: you only need to specify some basic settings (e.g. the path of your training set).
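
For reference, a new forward model typically needs at least the operator itself and an (approximate) pseudo-inverse. The class below is only an illustrative sketch under that assumption; it does not reproduce the repository's actual physics interface.

```python
import torch

class Subsampling:
    """Hypothetical forward model: keep every k-th entry of a flattened signal."""

    def __init__(self, n, k=4):
        self.mask = torch.zeros(n)
        self.mask[::k] = 1.0

    def A(self, x):
        """Forward operator y = A x (element-wise masking)."""
        return self.mask * x

    def A_dagger(self, y):
        """(Approximate) pseudo-inverse, used to initialize the reconstruction."""
        return self.mask * y
```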

Citation

@inproceedings{chen2021equivariant,
	title={Equivariant Imaging: Learning Beyond the Range Space},
	author={Chen, Dongdong and Tachella, Juli{\'a}n and Davies, Mike E},
	booktitle={Proceedings of the International Conference on Computer Vision (ICCV)},
	year={2021}
}