Occupancy Flow

This repository contains the code for the ICCV 2019 paper "Occupancy Flow: 4D Reconstruction by Learning Particle Dynamics".

You can find detailed usage instructions for training your own models and using pre-trained models below.

If you find our code or paper useful, please consider citing

@inproceedings{OccupancyFlow,
    title = {Occupancy Flow: 4D Reconstruction by Learning Particle Dynamics},
    author = {Niemeyer, Michael and Mescheder, Lars and Oechsle, Michael and Geiger, Andreas},
    booktitle = {Proc. of the IEEE International Conf. on Computer Vision (ICCV)},
    year = {2019}
}

Installation

First you have to make sure that you have all dependencies in place. The simplest way to do so is to use anaconda.

You can create and activate an anaconda environment called oflow using

conda env create -f environment.yaml
conda activate oflow

Next, compile the extension modules. You can do this via

python setup.py build_ext --inplace
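
If you want to verify that the build succeeded, a small helper like the one below simply lists the compiled extension files under the repository. This is an illustrative sketch, not part of the repository; the .so/.pyd patterns assume the standard build_ext --inplace layout.

# check_build.py - hypothetical helper, not included in this repository
from pathlib import Path

def list_compiled_extensions(repo_root="."):
    """List compiled extension files produced by `python setup.py build_ext --inplace`."""
    root = Path(repo_root)
    # Compiled extensions end in .so on Linux/macOS and .pyd on Windows.
    return sorted(list(root.rglob("*.so")) + list(root.rglob("*.pyd")))

if __name__ == "__main__":
    extensions = list_compiled_extensions()
    if not extensions:
        print("No compiled extensions found - did build_ext finish without errors?")
    for path in extensions:
        print(path)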

Demo

You can test our code on the provided input point cloud sequences in the demo/ folder. To this end, simply run

python generate.py configs/demo.yaml

This script should create a folder out/demo/ where the output is stored.
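
To get a rough feel for the output, you can load the generated meshes with trimesh, for example as sketched below. The file layout and extensions inside out/demo/ are assumptions; adjust the glob patterns to whatever the generation script actually writes.

# inspect_demo_output.py - illustrative sketch; file layout under out/demo/ is assumed
import glob
import trimesh

# Collect whatever mesh files the generation script wrote (extensions are a guess).
mesh_files = sorted(glob.glob("out/demo/**/*.off", recursive=True)
                    + glob.glob("out/demo/**/*.obj", recursive=True))

print(f"Found {len(mesh_files)} generated meshes")
for path in mesh_files[:5]:
    mesh = trimesh.load(path, process=False)
    print(path, "vertices:", len(mesh.vertices), "faces:", len(mesh.faces))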

Dataset

Point-based Data

To train a new model from scratch, you have to download the full dataset. You can download the pre-processed data (~42 GB) using

bash scripts/download_data.sh

The script will download the point-based data for the Dynamic FAUST (D-FAUST) dataset to the data/ folder.
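
For a quick sanity check after the download finishes, the following snippet just walks the data/ folder and reports the number of files and their total size; it makes no assumptions about the internal file format.

# data_overview.py - quick sanity check of the downloaded data/ folder (helper sketch)
import os

total_files = 0
total_bytes = 0
for dirpath, _, filenames in os.walk("data"):
    total_files += len(filenames)
    total_bytes += sum(os.path.getsize(os.path.join(dirpath, f)) for f in filenames)

print(f"{total_files} files, {total_bytes / 1e9:.1f} GB in data/")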

Please note: We do not provide the renderings for the 4D reconstruction from image sequences experiment nor the meshes for the interpolation and generative tasks due to privacy regulations. We outline how you can download the mesh data in the following section.

Mesh Data

Please follow the instructions on the D-FAUST homepage to download the "female and male registrations" as well as "scripts to load / parse the data". Next, follow their instructions in the scripts/README.txt file to extract the obj-files of the sequences. Once completed, you should have a folder with the following structure:


your_dfaust_folder/
| 50002_chicken_wings/
    | 00000.obj
    | 00001.obj
    | ...
    | 00215.obj
| 50002_hips/
    | 00000.obj
    | ...
| ...
| 50027_shake_shoulders/
    | 00000.obj
    | ...


You can now run

bash scripts/migrate_dfaust.sh path/to/your_dfaust_folder

to copy the mesh data to the dataset folder. The argument has to be the folder to which you have extracted the mesh data (the your_dfaust_folder from the directory tree above).
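
Before running the migration script, you may want to confirm that the extraction worked. The sketch below just counts the .obj frames per sequence folder, mirroring the directory tree above; the folder layout is the only assumption it makes.

# check_dfaust.py - counts extracted frames per sequence (helper sketch, not part of the repo)
import sys
from pathlib import Path

dfaust_root = Path(sys.argv[1])  # path/to/your_dfaust_folder

for seq_dir in sorted(p for p in dfaust_root.iterdir() if p.is_dir()):
    n_frames = len(list(seq_dir.glob("*.obj")))
    print(f"{seq_dir.name}: {n_frames} frames")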

Usage

When you have installed all dependencies and obtained the preprocessed data, you are ready to run our pre-trained models and train new models from scratch.

Generation

To start the normal mesh generation process using a trained model, use

python generate.py configs/CONFIG.yaml

where you replace CONFIG.yaml with the name of the configuration file you want to use.

The easiest way is to use a pretrained model. You can do this by using one of the following config files:

configs/pointcloud/oflow_w_correspond_pretrained.yaml
configs/interpolation/oflow_pretrained.yaml
configs/generative/oflow_pretrained.yaml

Our script will automatically download the model checkpoints and run the generation. You can find the outputs in the out/ folder.

Please note that the config files *_pretrained.yaml are only for generation, not for training new models: when these configs are used for training, the model will be trained from scratch, but during inference our code will still use the pretrained model.

Generation - Generative Tasks

For model-specific latent space interpolations and motion transfers, you first have to run

python encode_latent_motion_space.py configs/generative/CONFIG.yaml

Next, you can call

python generate_latent_space_interpolation.py configs/generative/CONFIG.yaml

or

python generate_motion_transfer.py configs/generative/CONFIG.yaml

Please note: Make sure that you use the appropriate model for the generation process; latent space interpolations and motion transfers can only be generated with a generative model (e.g. configs/generative/oflow_pretrained.yaml).

Evaluation

You can evaluate the generated output of a model on the test set using

python eval.py configs/CONFIG.yaml

The evaluation results will be saved to pickle and csv files.
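
The exact output file names are not spelled out here, so the snippet below simply globs for CSV files under out/ and prints per-metric means with pandas; treat the paths and file names as placeholders.

# summarize_eval.py - illustrative only; output paths and file names are assumptions
import glob
import pandas as pd

for csv_path in glob.glob("out/**/eval*.csv", recursive=True):
    df = pd.read_csv(csv_path)
    print(csv_path)
    # Average all numeric evaluation metrics over the test set.
    print(df.mean(numeric_only=True))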

Training

Finally, to train a new network from scratch, run

python train.py configs/CONFIG.yaml

You can monitor the training process on http://localhost:6006 using tensorboard:

cd OUTPUT_DIR
tensorboard --logdir ./logs --port 6006

where you replace OUTPUT_DIR with the respective output directory. For available training options, please have a look at configs/default.yaml.
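
Since all options live in plain YAML files, you can also dump a config programmatically to see what a run will use. The snippet below only prints the raw contents of a single file (any merging with configs/default.yaml is handled by the training code itself).

# show_config.py - prints the raw contents of a config file (simple sketch)
import sys
import pprint
import yaml

with open(sys.argv[1]) as f:  # e.g. configs/default.yaml
    cfg = yaml.safe_load(f)

pprint.pprint(cfg)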

Further Information

Implicit Representations

If you like the Occupancy Flow project, please check out our similar projects on inferring 3D shapes (Occupancy Networks) and texture (Texture Fields).

Neural Ordinary Differential Equations

If you enjoyed our approach using differential equations, check out Ricky Chen et al.'s awesome implementation of differentiable ODE solvers, which we used in our project.

Dynamic FAUST Dataset

We applied our method to the cool Dynamic FAUST dataset, which contains sequences of real humans performing various actions.
