An implementation for the ICCV 2021 paper Deep Permutation Equivariant Structure from Motion.

Overview

Deep Permutation Equivariant Structure from Motion

Paper | Poster

This repository contains an implementation for the ICCV 2021 paper Deep Permutation Equivariant Structure from Motion.

The paper proposes a neural network architecture that, given a set of point tracks in multiple images of a static scene, recovers both the camera parameters and a (sparse) scene structure by minimizing an unsupervised reprojection loss. The method does not require initialization of camera parameters or 3D point locations and is implemented for two setups: (1) single scene reconstruction and (2) learning from multiple scenes.
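As a rough illustration of this objective (a minimal sketch, not the paper's exact loss; the tensor names, shapes, and the plain pinhole projection used here are assumptions), a reprojection loss over m cameras and n point tracks might look like:

import torch

def reprojection_loss(P, X, M, visible):
    # Illustrative sketch only; names and shapes are assumptions, not the repository's API.
    # P: [m, 3, 4] camera matrices, X: [4, n] homogeneous 3D points,
    # M: [2*m, n] observed image points, visible: [m, n] boolean visibility mask.
    proj = torch.einsum('mij,jn->min', P, X)       # project every 3D point into every camera -> [m, 3, n]
    proj2d = proj[:, :2, :] / proj[:, 2:3, :]      # perspective division -> [m, 2, n]
    obs = M.reshape(-1, 2, M.shape[1])             # reshape observations to [m, 2, n]
    err = torch.norm(proj2d - obs, dim=1)          # per-point reprojection error -> [m, n]
    return err[visible].mean()                     # average only over observed (visible) entries

In the calibrated setting, the camera matrices would be composed from the predicted rotations and translations together with the given calibration matrices.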

Table of Contents

- Setup
  - Folders
  - Conda environment
- Data
- How to use
  - Optimization
  - Learning
- Citation

Setup

This repository is implemented in Python 3.8, and running bundle adjustment requires Linux.

Folders

The repository should contain the following folders and files:

Equivariant-SFM
├── bundle_adjustment
├── code
├── datasets
│   ├── Euclidean
│   └── Projective
├── environment.yml
└── results

Conda environment

Create the environment using one of the following commands:

conda create -n ESFM -c pytorch -c conda-forge -c comet_ml -c plotly  -c fvcore -c iopath -c bottler -c anaconda -c pytorch3d python=3.8 pytorch cudatoolkit=10.2 torchvision pyhocon comet_ml plotly pandas opencv openpyxl xlrd cvxpy fvcore iopath nvidiacub pytorch3d eigen cmake glog gflags suitesparse gxx_linux-64 gcc_linux-64 dask matplotlib
conda activate ESFM

Or:

conda env create -f environment.yml
conda activate ESFM

Then follow the bundle adjustment instructions.

Data

Download the data from this link.

The model works in both the calibrated camera setting (Euclidean reconstruction) and the uncalibrated setting (projective reconstruction).

The input to the model is a matrix of observed points of size [m,n,2], where entry [i,j] is the 2D image point corresponding to camera (image) number i and 3D point (point track) number j.

In practice we use a correspondence matrix representation of size [2*m,n], where the entries [2*i,j] and [2*i+1,j] hold the two coordinates of the [i,j] image point.

For the calibrated setting, the input must also include the m calibration matrices, each of size [3,3].
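As an example, the conversion between the two representations can be sketched as follows (assuming, for illustration, that rows 2*i and 2*i+1 hold the two coordinates of image i; the variable names and dummy data are made up for the sketch):

import numpy as np

m, n = 10, 500                                    # e.g. 10 cameras and 500 point tracks
points = np.random.rand(m, n, 2)                  # observed points: entry [i, j] is point track j in image i
M = points.transpose(0, 2, 1).reshape(2 * m, n)   # correspondence matrix of size [2*m, n]
Ks = np.stack([np.eye(3)] * m)                    # m calibration matrices of size [3, 3] (dummy values)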

How to use

Optimization

For a calibrated scene optimization run:

python single_scene_optimization.py --conf Optimization_Euc.conf

For an uncalibrated scene optimization run:

python single_scene_optimization.py --conf Optimization_Proj.conf

The following examples use the calibrated setting; the uncalibrated setting works the same way with the corresponding Proj config files.

You can choose which scene to optimize either by changing the 'dataset.scan' field in the config file or from the command line:

python single_scene_optimization.py --conf Optimization_Euc.conf --scan [scan_name]

Similarly, you can override any value in the config file from the command line. For example, to change the number of training epochs and the evaluation frequency, use:

python single_scene_optimization.py --conf Optimization_Euc.conf --external_params "train:num_of_epochs:1e+5,train:eval_intervals:100"
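For illustration only, the 'section:key:value' pairs passed to --external_params can be read as nested config overrides along these lines (a simplified sketch, not the repository's actual parsing code):

def parse_external_params(s):
    # "train:num_of_epochs:1e+5,train:eval_intervals:100" -> {'train.num_of_epochs': 100000.0, ...}
    overrides = {}
    for item in s.split(','):
        section, key, value = item.split(':')
        overrides[f'{section}.{key}'] = float(value)  # numeric values assumed for this sketch
    return overrides

print(parse_external_params("train:num_of_epochs:1e+5,train:eval_intervals:100"))
# {'train.num_of_epochs': 100000.0, 'train.eval_intervals': 100.0}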

Learning

To run the learning setup in the calibrated setting, run:

python multiple_scenes_learning.py --conf Learning_Euc.conf

Or for the uncalibrated setting:

python multiple_scenes_learning.py --conf Learning_Proj.conf

To override parameters of the config file, you can either edit the file itself or use the same flag as in the optimization setting:

python multiple_scenes_learning.py --conf Learning_Euc.conf --external_params "train:num_of_epochs:1e+5,train:eval_intervals:100"

Citation

If you find this work useful, please cite:

@InProceedings{Moran_2021_ICCV,
    author    = {Moran, Dror and Koslowsky, Hodaya and Kasten, Yoni and Maron, Haggai and Galun, Meirav and Basri, Ronen},
    title     = {Deep Permutation Equivariant Structure From Motion},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {5976-5986}
}