Framework for joint representation learning, evaluation through multimodal registration and comparison with image translation based approaches


CoMIR: Contrastive Multimodal Image Representation for Registration Framework

🖼 Registration of images in different modalities with Deep Learning 🤖

Nicolas Pielawski, Elisabeth Wetzer, Johan Öfverstedt, Jiahao Lu, Carolina Wählby, Joakim Lindblad and Nataša Sladoje

Code of the NeurIPS 2020 paper: CoMIR: Contrastive Multimodal Image Representation for Registration

Introduction

Image registration is the process by which multiple images are aligned in the same coordinate system. This is useful for extracting more information than each individual image provides on its own. We perform rigid multimodal image registration, successfully aligning images from different microscopes even though the information in each image is completely different.

Here are three registrations of images coming from two different microscopes (Bright-Field and Second-Harmonic Generation) as an example:

This repository gives you access to the code necessary to:

  • Train a Neural Network for converting images into a common latent space.
  • Register images that were converted into the common latent space.

How does it work?

We use a state-of-the-art artificial neural network (a Tiramisu, i.e. a dense U-Net) to transform the input images into a latent space representation, which we baptized CoMIR. The CoMIRs are crafted such that they can be aligned with the help of classical registration methods.
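As a rough sketch of the contrastive idea behind CoMIR (not the notebook's actual implementation — the function name, temperature value, and batch layout here are illustrative), an InfoNCE-style loss pulls matching cross-modality embeddings together while pushing all other pairings in the batch apart:

```python
import numpy as np

def infonce_loss(embed_a, embed_b, temperature=0.1):
    """InfoNCE-style contrastive loss over a batch of paired embeddings.

    embed_a[i] and embed_b[i] represent the same scene in two modalities;
    every other pairing in the batch serves as a negative.
    """
    # Normalize rows so the dot product is a cosine similarity.
    a = embed_a / np.linalg.norm(embed_a, axis=1, keepdims=True)
    b = embed_b / np.linalg.norm(embed_b, axis=1, keepdims=True)
    sim = a @ b.T / temperature  # (N, N) similarity matrix
    # Cross-entropy with the diagonal (the matching pairs) as targets.
    return float(np.mean(-np.diag(sim) + np.log(np.exp(sim).sum(axis=1))))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
# Near-identical pairs score a much lower loss than unrelated ones.
aligned_loss = infonce_loss(x, x + 0.01 * rng.normal(size=x.shape))
random_loss = infonce_loss(x, rng.normal(size=x.shape))
```

Minimizing such a loss over two networks (one per modality) encourages corresponding image patches to land at the same point in the shared latent space.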

The figure below depicts our pipeline:

Key findings of the paper

  • 📉 It is possible to use contrastive learning and integrate equivariance constraints during training.
  • 🖼 CoMIRs can be aligned successfully using classical registration methods.
  • 🌀 The CoMIRs are rotation equivariant (youtube animation).
  • 🤖 Using GANs to generate cross-modality images, and aligning those did not work.
  • 🌱 If the weights of the CNN are initialized with a fixed seed, the trained CNN will generate very similar CoMIRs every time (correlation between 70-96%, depending on other factors).
  • 🦾 Our method performed better than Mutual Information-based registration (the previous state of the art) and better than GANs, and it often performed better than human annotators.
  • 👭 Our method requires aligned pairs of images during training; if this condition cannot be satisfied, non-learning methods (such as Mutual Information) must be used.
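The fixed-seed finding above can be illustrated with a minimal sketch (plain numpy; `init_weights` is a made-up stand-in for the CNN's actual initializer):

```python
import numpy as np

def init_weights(seed):
    """Initialize a small weight tensor from a fixed seed,
    e.g. a 3x3 convolution kernel with 16 output channels."""
    rng = np.random.default_rng(seed)
    return rng.normal(scale=0.05, size=(3, 3, 16))

w1 = init_weights(seed=42)
w2 = init_weights(seed=42)
w3 = init_weights(seed=43)

print(np.array_equal(w1, w2))  # same seed -> identical initialization
print(np.array_equal(w1, w3))  # different seed -> different initialization
```

Identical starting weights are why repeated training runs produce highly similar CoMIRs; the remaining variation comes from other sources of randomness during training.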

Datasets

We used two datasets:

Animated figures

The video below demonstrates how we achieve rotation equivariance by displaying CoMIRs originating from two neural networks. One was trained with the C4 (rotation) equivariance constraint disabled, the other one had it enabled. When enabled, the correlation between a rotated CoMIR and the non-rotated one is close to 100% for any angle.
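The check performed in the video can be sketched as follows. A pointwise nonlinearity stands in for the trained network here, since any pointwise map is trivially C4-equivariant — this illustrates the *test*, not the model:

```python
import numpy as np

def f(image):
    """Stand-in for a C4-equivariant network: a pointwise map commutes
    with rotations, so it is exactly equivariant by construction."""
    return np.tanh(image)

def equivariance_correlation(image, k=1):
    """Correlate f(rot90^k(x)) with rot90^k(f(x)); a value of 1.0
    means the map is perfectly equivariant under that rotation."""
    a = f(np.rot90(image, k)).ravel()
    b = np.rot90(f(image), k).ravel()
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
for k in range(4):
    # The correlation stays at 1.0 for every 90-degree step.
    print(round(equivariance_correlation(img, k), 6))
```

For a real network the correlation is estimated on CoMIRs of test images; the paper reports values close to 100% when the C4 constraint is enabled during training.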

Reproduction of the results

All the results related to the Zurich satellite images dataset can be reproduced with the train-zurich.ipynb notebook. To reproduce the results for the biomedical dataset, follow the instructions below:

Important: for each script, make sure you update the paths so that the correct datasets are loaded and the results are exported to your preferred directory.

Part 1. Training and testing the models

Run the notebook named train-biodata.ipynb. This repository contains a Release providing all our trained models. If you want to skip training, you can fetch the models named model_biodata_mse.pt or model_biodata_cosine.pt and generate the CoMIRs for the test set (last cell in the notebook).

Part 2. Registration of the CoMIRs

Registration based on SIFT:

  1. Compute the SIFT registration between CoMIRs (using Fiji v1.52p):
fiji --ij2 --run scripts/compute_sift.py 'pathA="/path/*_A.tif",pathB="/path/*_B.tif",result="SIFTResults.csv"'
  2. Load the .csv file obtained by the SIFT registration into Matlab.
  3. Run evaluateSIFT.m.

Other results

Computing the registration with Mutual Information (tested with Matlab 2019b; version 2012a or later is required):

  1. run RegMI.m
  2. run Evaluation_RegMI.m
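For readers without Matlab, the quantity at the heart of MI-based registration can be sketched in a few lines of numpy (the 32-bin joint histogram is an assumption; RegMI.m's actual implementation may differ):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information from a joint intensity histogram:
    MI = sum_ab p(a, b) * log( p(a, b) / (p(a) * p(b)) )."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal over img_b
    py = pxy.sum(axis=0, keepdims=True)  # marginal over img_a
    mask = pxy > 0                       # avoid log(0)
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

rng = np.random.default_rng(0)
a = rng.normal(size=(128, 128))
mi_self = mutual_information(a, a)                          # high: identical images
mi_noise = mutual_information(a, rng.normal(size=a.shape))  # low: unrelated images
```

A rigid MI-based registration then searches over rotations and translations for the transform that maximizes this score between the two images.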

Scripts

The scripts folder contains scripts useful for running the experiments, as well as notebooks for generating some of the figures appearing in the paper.

Citation

NeurIPS 2020

@inproceedings{pielawski2020comir,
 author = {Pielawski, Nicolas and Wetzer, Elisabeth and \"{O}fverstedt, Johan and Lu, Jiahao and W\"{a}hlby, Carolina and Lindblad, Joakim and Sladoje, Nata{\v{s}}a},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. Lin},
 pages = {18433--18444},
 publisher = {Curran Associates, Inc.},
 title = {{CoMIR}: Contrastive Multimodal Image Representation for Registration},
 url = {https://proceedings.neurips.cc/paper/2020/file/d6428eecbe0f7dff83fc607c5044b2b9-Paper.pdf},
 volume = {33},
 year = {2020}
}

Acknowledgements

We would like to thank Prof. Kevin Eliceiri (Laboratory for Optical and Computational Instrumentation (LOCI) at the University of Wisconsin-Madison) and his team for their support and for kindly providing the dataset of brightfield and second harmonic generation imaging of breast tissue microarray cores.

Comments
  • compute_pairwise_loss() in the code

    Hello, and thank you so much for your work! CoMIR does enlighten me a lot. I appreciate your time, so I will try to keep my question short.

    I just have a question about the compute_pairwise_loss() function in train-biodata.ipynb. I noticed that you are using softmaxes[i] = -pos + torch.logsumexp(neg, dim=0) to compute the loss. If my understanding is correct, this corresponds to calculating

    $\ell = -s_{\mathrm{pos}} + \log \sum_{k \in \mathrm{neg}} \exp(s_k)$

    But the InfoNCE loss mentioned in your paper is

    $\ell = -s_{\mathrm{pos}} + \log\big(\exp(s_{\mathrm{pos}}) + \sum_{k \in \mathrm{neg}} \exp(s_k)\big)$

    which contains the similarity of the positive pair in the denominator.

    Although there is only a slight difference between the two formulas, I am not sure whether it will lead to a change in training performance. So, could you please clarify whether you are using the first formula, and why?
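    As a numerical illustration of the difference described in this question (the symbols follow the question's wording, not the notebook's code), the two formulas can be compared directly:

```python
import numpy as np

def loss_without_pos(pos, neg):
    # What softmaxes[i] = -pos + logsumexp(neg) computes:
    # the positive similarity is absent from the log-sum-exp.
    return float(-pos + np.log(np.exp(neg).sum()))

def loss_with_pos(pos, neg):
    # The textbook InfoNCE form: the positive similarity also
    # appears inside the log-sum-exp (the "denominator").
    return float(-pos + np.log(np.exp(pos) + np.exp(neg).sum()))

pos = 2.0
neg = np.array([-1.0, 0.5, -0.3])
print(loss_without_pos(pos, neg))  # can go negative
print(loss_with_pos(pos, neg))     # always >= 0
```

    Note that including the positive term bounds the loss below by zero, whereas the version without it can become negative once the positive similarity dominates the negatives.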

    opened by wxdrizzle 3
  • Questions about the training datasets

    Hello! Thanks for your great contributions! However, it seems that there are only evaluation datasets. E.g., how can we get the training datasets of Zurich?

    opened by lajipeng 2
  • Missing Scripts

    Hello,

    very awesome work! I was trying to reproduce your results and found that the scripts referred to in "run RegMI.m, run Evaluation_RegMI.m" are missing. Do you know where I could find these two programs?

    Thank you!

    opened by turnersr 2
  • backbone

    Hi, Pielawski! CoMIR uses the dense U-Net Tiramisu as its backbone. However, its encoder/decoder structure is very cumbersome. Can other lightweight structures be used as the backbone for CoMIR? Thanks!

    opened by paperID2381 1
  • Missing Script

    Hello, very awesome work! I was trying to reproduce your results and found that the script referred to in "run evaluateSIFT.m" is missing. Do you know where I could find this program?

    Your help would be greatly appreciated! I look forward to your reply, thank you!

    opened by chengtianxiu 1
Releases(1.0)
Owner
Methods for Image Data Analysis - MIDA