
UNICORN 🦄

Webpage | Paper | BibTeX

(Teaser GIFs: car, bird, and motorbike reconstructions)

PyTorch implementation of the paper "Share With Thy Neighbors: Single-View Reconstruction by Cross-Instance Consistency". Check out our webpage for details!

If you find this code useful, don't forget to star the repo and cite the paper:

@article{monnier2022unicorn,
  title={{Share With Thy Neighbors: Single-View Reconstruction by Cross-Instance Consistency}},
  author={Monnier, Tom and Fisher, Matthew and Efros, Alexei A and Aubry, Mathieu},
  journal={arXiv:2204.10310 [cs]},
  year={2022},
}

Installation 👷

1. Create conda environment 🔧

conda env create -f environment.yml
conda activate unicorn

Optional: some monitoring routines are implemented; you can use them by specifying your visdom port in the config file. You will need to install visdom from source beforehand:

git clone https://github.com/facebookresearch/visdom
cd visdom && pip install -e .
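
Before launching a training with monitoring enabled, you can check that the visdom server is reachable; a minimal sketch in Python, assuming the server runs on visdom's default port 8097 (replace it with the port set in your config file):

import visdom

# Connect to a running visdom server; 8097 is visdom's default port,
# use the port specified in your config file if it differs.
viz = visdom.Visdom(port=8097)
assert viz.check_connection(), "visdom server not reachable (start it with: python -m visdom.server)"
viz.text("UNICORN monitoring: connection OK")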

2. Download datasets ⬇️

bash scripts/download_data.sh

This command lets you download one of the datasets used in the paper: ShapeNet renderings, CompCars, CUB-200, LSUN Horse, LSUN Motorbike, or Pascal3D+ Car.

3. Download pretrained models ⬇️

bash scripts/download_model.sh

This command lets you download one of the pretrained models released with the paper (e.g. car.pkl, used in the demo below); they should end up in the models folder.

NB: gdown may hang; if so, download the models manually from the gdrive links and move them to the models folder.
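
If you prefer a programmatic fallback, gdown also exposes a Python API; a minimal sketch where the Google Drive URL is a hypothetical placeholder (replace FILE_ID with the id from the actual gdrive link of the model you need):

import gdown

# Hypothetical placeholder: substitute the real Google Drive file id of the model.
url = "https://drive.google.com/uc?id=FILE_ID"
gdown.download(url, output="models/car.pkl", quiet=False)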

How to use 🚀

1. 3D reconstruction of car images 🚘

(Example: input car image and its reconstruction GIF)

You first need to download the car model (see above), then launch:

cuda=gpu_id model=car.pkl input=demo ./scripts/reconstruct.sh

where:

  • gpu_id is a target cuda device id,
  • car.pkl corresponds to a pretrained model,
  • demo is a folder containing the target images.

It will create a folder demo_rec containing the reconstructed meshes (.obj format + gif visualizations).
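
To sanity-check the generated meshes, a minimal sketch assuming the third-party trimesh package is installed (it is not part of this repo's environment):

from pathlib import Path

import trimesh  # pip install trimesh

# Load every reconstructed mesh in the output folder and print basic statistics.
for obj_path in sorted(Path("demo_rec").glob("**/*.obj")):
    mesh = trimesh.load(obj_path, force="mesh")
    print(f"{obj_path.name}: {len(mesh.vertices)} vertices, {len(mesh.faces)} faces")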

2. Reproduce our results 📊

(ShapeNet reconstruction examples GIF)

To launch a training from scratch, run:

cuda=gpu_id config=filename.yml tag=run_tag ./scripts/pipeline.sh

where:

  • gpu_id is a target cuda device id,
  • filename.yml is a YAML config located in configs folder,
  • run_tag is a tag for the experiment.

Results are saved in runs/${DATASET}/${DATE}_${run_tag}, where DATASET is the dataset name specified in filename.yml and DATE is the current date in mmdd format (the sketch after the list below shows the resulting path). Training visualizations such as reconstruction examples are also saved there. Available configs are:

  • sn/*.yml for each ShapeNet category
  • car.yml for CompCars dataset
  • cub.yml for CUB-200 dataset
  • horse.yml for LSUN Horse dataset
  • moto.yml for LSUN Motorbike dataset
  • p3d_car.yml for Pascal3D+ Car dataset

3. Train on a custom dataset 🔮

If you want to learn a model for a custom object category, here are the key things you need to do:

  1. put your images in a custom_name folder inside the datasets folder
  2. write a config custom.yml with custom_name as dataset.name and move it to the configs folder; as a rule of thumb for the progressive conditioning milestones, set each stage to the number of epochs corresponding to roughly 500k iterations (see the sketch after this list)
  3. launch training with:
cuda=gpu_id config=custom.yml tag=custom_run_tag ./scripts/pipeline.sh
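
To translate the 500k-iteration rule of thumb into epoch milestones for your config, a minimal back-of-the-envelope sketch (the image count and batch size below are hypothetical; plug in your dataset size and the batch size from your config):

import math

def epochs_for_iterations(n_images, batch_size, n_iterations=500_000):
    # One epoch covers ceil(n_images / batch_size) iterations, so we need
    # n_iterations divided by that, rounded up.
    iterations_per_epoch = math.ceil(n_images / batch_size)
    return math.ceil(n_iterations / iterations_per_epoch)

# Hypothetical example: 10k images with batch size 32 -> ~1600 epochs per stage.
print(epochs_for_iterations(n_images=10_000, batch_size=32))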

Further information 📚

If you like this project, check out related works from our group.
