Code for "Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks", CVPR 2021

Overview

This repository contains the code that accompanies our CVPR 2021 paper Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks.

You can find detailed usage instructions for training your own models and using our pretrained models below.

If you found this work influential or helpful for your research, please consider citing

@Inproceedings{Paschalidou2021CVPR,
     title = {Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks},
     author = {Paschalidou, Despoina and Katharopoulos, Angelos and Geiger, Andreas and Fidler, Sanja},
     booktitle = {Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
     year = {2021}
}

Installation & Dependencies

Our codebase has several Python dependencies; the complete list can be found in the provided environment.yaml file.

For the visualizations, we use simple-3dviz, our easy-to-use library for visualizing 3D data with Python and ModernGL, together with matplotlib for the colormaps. Note that simple-3dviz provides a lightweight scene viewer built on wxpython. If you wish to use our scripts for visualizing the reconstructed primitives, you will also need to install wxpython.
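
If these packages are not already available in your environment, they can typically be installed from PyPI, for example via

pip install simple-3dviz wxpython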

The simplest way to make sure that you have all dependencies in place is to use conda. You can create a conda environment called neural_parts using

conda env create -f environment.yaml
conda activate neural_parts

Next, compile the extension modules. You can do this via

python setup.py build_ext --inplace
pip install -e .
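
To quickly verify that the build and installation succeeded, you can try importing the package (assuming it is importable as neural_parts):

python -c "import neural_parts"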

Demo

You can now test our code on various inputs. To this end, simply download some input samples together with our pretrained models on D-FAUST humans, ShapeNet chairs and ShapeNet planes from here. Next, extract the neural_parts_demo.zip that you just downloaded into the demo folder. To run our demo on the D-FAUST humans, simply run

python demo.py ../config/dfaust_6.yaml --we ../demo/model_dfaust_6 --model_tag 50027_jumping_jacks:00135 --camera_target='-0.030173788,-0.10342446,-0.0021887198' --camera_position='0.076685235,-0.14528269,1.2060229' --up='0,1,0' --with_rotating_camera

This script should create a folder demo/output, where the per-primitive meshes are stored as .obj files. Similarly, you can now also run the demo for the input airplane

python demo.py ../config/shapenet_5.yaml --we ../demo/model_planes_5 --model_tag 02691156:7b134f6573e7270fb0a79e28606cb167 --camera_target='-0.030173788,-0.10342446,-0.0021887198' --camera_position='0.076685235,-0.14528269,1.2060229' --up='0,1,0' --with_rotating_camera
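
After either demo has finished, a quick way to inspect the exported per-primitive meshes is with simple-3dviz itself. Below is a minimal sketch; the exact .obj filenames written to demo/output are an assumption, so adjust the paths to match what the demo actually produced.

from simple_3dviz import Mesh
from simple_3dviz.window import show

# Load the per-primitive meshes exported by the demo
# (the filenames below are hypothetical).
meshes = [Mesh.from_file("demo/output/part_{}.obj".format(i)) for i in range(5)]
show(meshes)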

Usage

Once you have installed all dependencies and obtained the preprocessed data, you can start training new models from scratch, evaluate our pre-trained models and visualize the recovered primitives.

Reconstruction

To generate meshes using a trained model, we provide the forward_pass.py and visualize_predictions.py scripts. The difference between them is that the former performs the forward pass and saves the generated per-primitive meshes as .obj files, whereas visualize_predictions.py performs the forward pass and directly visualizes the predicted primitives using simple-3dviz. The forward_pass.py script is ideal for reconstructing inputs on a headless server; you can run it by executing

python forward_pass.py path_to_config_yaml path_to_output_dir --weight_file path_to_weight_file --model_tag MODEL_TAG

where the argument --weight_file specifies the path to a trained model and the argument --model_tag defines the model_tag of the input to be reconstructed.
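
For instance, using the pretrained D-FAUST model and the sample input from the demo above, a concrete invocation would look like

python forward_pass.py ../config/dfaust_6.yaml ../demo/output --weight_file ../demo/model_dfaust_6 --model_tag 50027_jumping_jacks:00135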

To run the visualize_predictions.py script, you need to run

python visualize_predictions.py path_to_config_yaml path_to_output_dir --weight_file path_to_weight_file --model_tag MODEL_TAG

Using this script, you can easily render the predictions into .png images or a .gif, as well as create animations by rotating the camera. Furthermore, you can specify the camera position, the up vector and the camera target, and you can visualize the target mesh together with the predicted primitives simply by adding the --mesh argument.
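
For example, combining the camera options from the demo above with the --mesh flag, you could run

python visualize_predictions.py ../config/dfaust_6.yaml ../demo/output --weight_file ../demo/model_dfaust_6 --model_tag 50027_jumping_jacks:00135 --mesh --with_rotating_camera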

Evaluation

To evaluate a trained model, we provide the evaluate.py script. You can run it using:

python evaluate.py path_to_config_yaml path_to_output_dir

The script reconstructs each input and evaluates the generated meshes using a standardized protocol. For each input, it generates a .npz file that contains the various metrics for that particular input. Note that this script can also be executed multiple times in parallel in order to speed up the evaluation. For example, if you wish to run the evaluation in 6 parallel processes, you can simply run

for i in {1..6}; do python evaluate.py path_to_config_yaml path_to_output_dir & done
[1] 9489
[2] 9490
[3] 9491
[4] 9492
[5] 9493
[6] 9494

wait
Running code on cpu
Running code on cpu
Running code on cpu
Running code on cpu
Running code on cpu
Running code on cpu

Again, the script generates a per-input file in the output directory with the computed metrics.
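
To aggregate the per-input results into a single number, you can load the .npz files with numpy. A minimal sketch follows; the "iou" key is an assumption, so inspect one file first (e.g. with np.load(f).files) to see which metrics are actually stored.

import glob
import numpy as np

# Average one metric over all per-input .npz files produced by evaluate.py.
# The "iou" key below is hypothetical; replace it with a key that exists.
values = [float(np.load(f)["iou"]) for f in glob.glob("path_to_output_dir/*.npz")]
print("Mean IoU over {} inputs: {:.4f}".format(len(values), np.mean(values)))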

Training

Finally, to train a new network from scratch, we provide the train_network.py script. To execute it, you need to specify the path to the configuration file you wish to use and the path to the output directory where the trained models and the training statistics will be saved. Namely, you simply need to run

python train_network.py path_to_config_yaml path_to_output_dir

Note that it is also possible to start from a previously trained model by specifying the --weight_file argument, which should contain the path to that model's weights. Furthermore, using the --model_tag and --category_tag arguments, you can train your network on a particular model (e.g. a specific plane, car or human) or on a specific object category (e.g. planes, chairs etc.).
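
For example, to continue training from an existing checkpoint while restricting training to ShapeNet planes (category 02691156, as in the demo above), you could run something like

python train_network.py ../config/shapenet_5.yaml path_to_output_dir --weight_file path_to_weight_file --category_tag 02691156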

Note that if you want to use the RAdam optimizer during training, you will also have to download and install the corresponding code from this repository.

License

Our code is released under the MIT license, which practically allows anyone to do anything with it. The full license text can be found in the LICENSE file.

Relevant Research

Below we list some papers that are relevant to our work.

Ours:

  • Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image pdf, project-page
  • Superquadrics Revisited: Learning 3D Shape Parsing beyond Cuboids pdf, project-page

By Others:

  • Learning Shape Abstractions by Assembling Volumetric Primitives pdf
  • 3D-PRNN: Generating Shape Primitives with Recurrent Neural Networks pdf
  • Im2Struct: Recovering 3D Shape Structure From a Single RGB Image pdf
  • Learning shape templates with structured implicit functions pdf
  • CvxNet: Learnable Convex Decomposition pdf