NeuroMorph: Unsupervised Shape Interpolation and Correspondence in One Go

Overview

This repository provides our implementation of the CVPR 2021 paper NeuroMorph. Our algorithm produces, in one go (i.e., in a single feed-forward pass), a smooth interpolation and point-to-point correspondences between two input 3D shapes. It is trained in a self-supervised manner on an unlabelled collection of deformable and heterogeneous shapes.

If you use our work, please cite:

@inproceedings{eisenberger2021neuromorph, 
  title={NeuroMorph: Unsupervised Shape Interpolation and Correspondence in One Go}, 
  author={Eisenberger, Marvin and Novotny, David and Kerchenbaum, Gael and Labatut, Patrick and Neverova, Natalia and Cremers, Daniel and Vedaldi, Andrea}, 
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, 
  pages={7473--7483}, 
  year={2021}
}

Requirements

The code was tested on Python 3.8.10 with PyTorch 1.9.1 and CUDA 10.2. The code also requires the pytorch-geometric library (see its installation instructions) and matplotlib. Finally, MATLAB with the Statistics and Machine Learning Toolbox is used to pre-process certain datasets (we tested MATLAB R2019b and R2021b). The code should run on Linux, macOS and Windows.

Installing NeuroMorph

Using Anaconda, you can install the required dependencies as follows:

conda create -n neuromorph python=3.8
conda activate neuromorph
conda install pytorch cudatoolkit=10.2 -c pytorch
conda install matplotlib
conda install pyg -c pyg -c conda-forge
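
To verify the environment afterwards, a minimal sanity check (assuming only the packages installed above; this snippet is not part of the repository) is:

# Minimal environment check: confirm PyTorch, CUDA and PyTorch Geometric are visible.
import torch
import torch_geometric

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("PyTorch Geometric:", torch_geometric.__version__)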

Running NeuroMorph

In order to run NeuroMorph:

  • Specify the location of the datasets on your device under data_folder_ in param.py.
  • To use your own data, create a new dataset in data/data.py (a rough, hypothetical loading sketch follows at the end of this section).
  • To train on FAUST remeshed, run the main script main_train.py. Modify the script as needed to train on other data.

For a more detailed tutorial, see the next section.
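
As a rough illustration of the kind of per-shape data a new dataset has to provide (vertices and triangle faces), here is a hypothetical loader for a folder of .off meshes. The class name, file parsing, and return format are assumptions for illustration only and do not reflect the actual interface in data/data.py:

# Hypothetical example of loading a folder of .off meshes into vertex/face tensors.
# This does NOT mirror the actual dataset interface in data/data.py; adapt as needed.
import os
import torch
from torch.utils.data import Dataset

def load_off(path):
    # Read a plain, triangulated OFF mesh ("OFF" on the first line, counts on the second).
    with open(path) as f:
        assert f.readline().strip() == "OFF"
        n_verts, n_faces, _ = map(int, f.readline().split())
        verts = [list(map(float, f.readline().split())) for _ in range(n_verts)]
        faces = [list(map(int, f.readline().split()))[1:4] for _ in range(n_faces)]
    return torch.tensor(verts, dtype=torch.float32), torch.tensor(faces, dtype=torch.long)

class MeshFolder(Dataset):
    # Hypothetical dataset over all .off meshes in a folder.
    def __init__(self, folder):
        self.paths = sorted(
            os.path.join(folder, f) for f in os.listdir(folder) if f.endswith(".off")
        )

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        verts, faces = load_off(self.paths[i])
        return {"verts": verts, "faces": faces}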

Reproducing the experiments

We show below how to reproduce the experiments on the FAUST remeshed data.

Data download

The experimental mesh data can be downloaded from the authors of Deep Geometric Functional Maps: download the FAUST_r.zip file from their site, unzip it, and move the contents of the extracted directory to data/meshes/FAUST_r.
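
Assuming FAUST_r.zip has already been downloaded into the repository root, a small helper like the following (paths are assumptions, not part of the repository) unpacks it into the expected location:

# Unpack a downloaded FAUST_r.zip into data/meshes/FAUST_r (paths are assumptions).
# If the archive contains a top-level FAUST_r folder, move its contents up one level afterwards.
import zipfile
from pathlib import Path

target = Path("data/meshes/FAUST_r")
target.mkdir(parents=True, exist_ok=True)

with zipfile.ZipFile("FAUST_r.zip") as zf:
    zf.extractall(target)

print("Extracted", len(list(target.rglob("*.off"))), ".off files into", target)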

Data preprocessing

Before the learning code runs, the meshes must be subsampled and remeshed (for data augmentation during training) and geodesic distance matrices must be computed. For this, we use the data_preprocessing/preprocess_dataset.m MATLAB script (tested with MATLAB R2019b and R2021b).

Start MATLAB and do the following:

cd <NeuroMorph root>/data_preprocessing
preprocess_dataset("../data/meshes/FAUST_r/", ".off")

The result should be a list of MATLAB mesh files in a mat subfolder (e.g., data/meshes/FAUST_r/mat), plus additional data.
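
After preprocessing, a quick check can confirm that every mesh was processed (the one-.mat-file-per-mesh naming with matching base names is an assumption; adjust if the layout differs):

# Check that each mesh has a corresponding .mat file after preprocessing.
# Assumes one .mat file per mesh with a matching base name.
from pathlib import Path

mesh_dir = Path("data/meshes/FAUST_r")
mat_stems = {p.stem for p in (mesh_dir / "mat").glob("*.mat")}
missing = [p.stem for p in mesh_dir.glob("*.off") if p.stem not in mat_stems]
print(len(missing), "mesh(es) without a .mat file:", missing[:5])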

Model training

If you stored the data in the directory given above, you can train the model by running:

mkdir -p data/{checkpoint,out}
python main_train.py

The trained models will be saved as a series of checkpoints under data/out/. If you stored the data elsewhere, edit param.py to change the paths accordingly.
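
Checkpoints accumulate under time-stamped run folders in data/out/. For a quick overview of the runs produced so far (the checkpoint file extensions and folder layout below are assumptions), something like this works:

# List the run folders in data/out and count checkpoint files in each
# (the .pt/.pth extensions and folder layout are assumptions).
from pathlib import Path

out_dir = Path("data/out")
for run in sorted(p for p in out_dir.iterdir() if p.is_dir()):
    ckpts = list(run.rglob("*.pt")) + list(run.rglob("*.pth"))
    print(f"{run.name}: {len(ckpts)} checkpoint file(s)")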

Model testing

Once training is complete, evaluate the trained model with main_test.py, specifying the checkpoint folder name:

python main_test.py <TIME_STAMP_FAUST>

Here <TIME_STAMP_FAUST> is the name of any of the run directories saved in data/out/. The script automatically saves correspondences and interpolations for the FAUST remeshed test set to data/out/. For reference, on FAUST you should expect a validation error of around 0.25 after 400 epochs.
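
For orientation, correspondence quality in this setting is commonly measured as a mean geodesic error between predicted and ground-truth matches. A hedged sketch of such a metric (not necessarily the exact error printed by main_test.py; names are illustrative):

# Sketch of a mean geodesic correspondence error; the normalization used by
# main_test.py may differ.
import torch

def mean_geodesic_error(pred_corr, gt_corr, geo_dist_y):
    # pred_corr, gt_corr: (N,) vertex indices on shape Y (predicted vs. ground truth)
    # for each of the N vertices of shape X.
    # geo_dist_y: (M, M) matrix of pairwise geodesic distances on shape Y.
    return geo_dist_y[pred_corr, gt_corr].mean()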

Contributing

See the CONTRIBUTING file for how to help out.

License

NeuroMorph is MIT licensed, as described in the LICENSE file. NeuroMorph includes a few files from other open source projects, as further detailed in the same LICENSE file.
