Official implementation of NPMs: Neural Parametric Models for 3D Deformable Shapes - ICCV 2021

Overview

NPMs: Neural Parametric Models

Project Page | Paper | ArXiv | Video


NPMs: Neural Parametric Models for 3D Deformable Shapes
Pablo Palafox, Aljaz Bozic, Justus Thies, Matthias Niessner, Angela Dai

Citation

@article{palafox2021npms,
    author        = {Palafox, Pablo and Bo{\v{z}}i{\v{c}}, Alja{\v{z}} and Thies, Justus and Nie{\ss}ner, Matthias and Dai, Angela},
    title         = {NPMs: Neural Parametric Models for 3D Deformable Shapes},
    journal       = {arXiv preprint arXiv:2104.00702},
    year          = {2021},
}

Install

You can either pull our Docker image, build it yourself with the provided Dockerfile, or build the project from source.

Pull Docker Image

docker pull ppalafox/npms:latest

You can now run an interactive container of the image you just pulled (before that, navigate to npms):

cd npms
docker run --ipc=host -it --name npms --gpus=all -v $PWD:/app -v /cluster:/cluster ppalafox/npms:latest bash

Build Docker Image

Run the following from within the root of this project (where Dockerfile lives) to build a docker image with all required dependencies.

docker build . -t npms

You can now run an interactive container of the image you just built (before that, navigate to npms):

cd npms
docker run --ipc=host -it --name npms --gpus=all -v $PWD:/app -v /cluster:/cluster npms:latest bash

Of course, you'll have to specify your own paths for the volumes you'd like to mount using the -v flag.

Build from source

A Linux system with CUDA is required for the project.

The npms_env.yml file contains (hopefully) all necessary Python dependencies for the project. To install them automatically with Anaconda, run:

conda env create -f npms_env.yml
conda activate npms

Other dependencies

We need some other dependencies. Starting from the root folder of this project, we'll do the following...

  • Compile the csrc folder:
cd csrc 
python setup.py install
cd ..
  • We need some libraries from IFNet. In particular, we need libmesh and libvoxelize from that repo. They are already placed within external (check the corresponding LICENSE). To build them, proceed as follows:
cd external/libmesh/
python setup.py build_ext --inplace
cd ../libvoxelize/
python setup.py build_ext --inplace
cd ..
  • Build GAPS (you should now be back within the external folder, where build_gaps.sh lives):
chmod +x build_gaps.sh
./build_gaps.sh

       You can make sure it's built properly by running:

chmod +x gaps_is_installed.sh
./gaps_is_installed.sh

       You should get a "Ready to go!" as output.

You can now navigate back to the root folder: cd ..

Data Preparation

As an example, let's walk through generating training data from the CAPE dataset.

Download their dataset by registering and accepting their terms. Once you've followed their steps to download the dataset, you should have a folder named cape_release.

In npms/configs_train/config_train_HUMAN.py, set the variable ROOT to point to the folder where you want your data to live. Then:

cd <ROOT>
mkdir data

And place cape_release within data.

Download SMPL models

Register here to get access to SMPL body models. Then, under the downloads tab, download the models. Refer to https://github.com/vchoutas/smplx#model-loading for more details.

From within the root folder of this project, run:

cd npms/body_model
mkdir smpl

And place the .pkl files you just downloaded under npms/body_model/smpl. Now rename them so that you end up with the following layout:

body_model
│── smpl
│  │── smpl
│  │  └── SMPL_FEMALE.pkl
│  │  └── SMPL_MALE.pkl
│  │  └── SMPL_NEUTRAL.pkl
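
As a quick sanity check that the files are in place, you can try loading a model with the smplx package (a minimal sketch; the model_path below assumes the layout above):

# Minimal sanity check: load the SMPL body model via the smplx package.
# model_path assumes the layout shown above; adjust it if yours differs.
import smplx

model = smplx.create(
    model_path="npms/body_model/smpl",  # folder that contains the inner smpl/ folder
    model_type="smpl",
    gender="neutral",
)
print(model)  # prints the SMPL module and its parameter shapes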

Preprocess the raw CAPE

Now let's process the raw data in order to generate training samples for our NPM.

cd npms/data_processing
python prepare_cape_data.py

Then, we normalize the preprocessed dataset, such that the meshes reside within a bounding box with boundaries bbox_min=-0.5 and bbox_max=0.5.

# We're within npms/data_processing
python normalize_dataset.py
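
For intuition, the normalization boils down to centering each mesh and scaling it so its longest bounding-box side spans one unit. A rough trimesh-based sketch of the idea (not the actual normalize_dataset.py logic):

# Rough sketch of the normalization idea (not the actual normalize_dataset.py):
# center the mesh and scale it so it fits the bounding box [-0.5, 0.5]^3.
import trimesh

mesh = trimesh.load("mesh.ply", process=False)
bbox_min, bbox_max = mesh.bounds           # (3,) min and max corners
center = (bbox_min + bbox_max) / 2.0
scale = (bbox_max - bbox_min).max()        # longest side of the bounding box
mesh.apply_translation(-center)            # center at the origin
mesh.apply_scale(1.0 / scale)              # longest side now spans [-0.5, 0.5]
mesh.export("mesh_normalized.ply")

In practice you'd likely want one consistent transform across all meshes of an identity rather than per-mesh bounds; see the actual script for the details.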

At this point, we can generate training samples for both the shape and the pose MLP. An extra step would be required if our t-poses (<ROOT>/datasets/cape/a_t_pose/000000/mesh_normalized.ply) were not watertight. We'd need to run multiview_to_watertight_mesh.py. Since CAPE is already watertight, we don't need to worry about this.

About labels.json and labels_tpose.json

One last thing before actually generating the samples: create the "labels" files that specify the paths to the dataset we want to create. Under the folder ZSPLITS_HUMAN we have copied some examples.

Within it, you can find folders containing datasets in the form of paths to the actual data. For example, CAPE-SHAPE-TRAIN-35id, which in turn contains two files: labels_tpose and labels. They define datasets in a flexible way, by means of a list of dictionaries, where each dictionary holds the paths to a particular sample. You'll get a feel for why we have a labels.json and a labels_tpose.json by running the following sections to generate data, and when you dive into training a new NPM from scratch.
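
To make this concrete, an entry in labels.json might look roughly like the following (the field names here are illustrative, not the exact schema; inspect the files in ZSPLITS_HUMAN for the real one):

# Illustrative structure of a labels.json file: a list of dictionaries,
# one per sample (field names are a guess; check ZSPLITS_HUMAN for the real schema).
import json

labels = [
    {
        "dataset": "cape",
        "identity_name": "00032_shortlong",
        "animation_name": "shortlong_hips",
        "sample_id": "000001",
    },
    # ... one dictionary per sample
]

with open("labels.json", "w") as f:
    json.dump(labels, f, indent=4)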

Go ahead and copy the folder ZSPLITS_HUMAN into <ROOT>/datasets, where ROOT is a path to your datasets that you can specify in npms/configs_train/config_train_HUMAN.py. If you followed along until now, within <ROOT>/datasets you should already have the preprocessed <ROOT>/datasets/cape dataset.

# Assuming you're in the root folder of the project
cp -r ZSPLITS_HUMAN <ROOT>/datasets

Note: within data_scripts you can find helpful scripts to generate your own labels.json and labels_tpose.json from a dataset. Check out the npms/data_scripts/README.md for a brief overview on these scripts.

SDF samples

Generate SDF samples around our identities in their t-pose in order to train the shape latent space.

# We're within npms/data_processing
python sample_boundary_sdf_gaps.py
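
Conceptually, this step samples points around the watertight t-pose mesh and stores their signed distances. A minimal trimesh-based sketch of the idea (the actual script relies on GAPS and will differ in details such as the sign convention and sampling distribution):

# Conceptual sketch of SDF sample generation (the real script uses GAPS).
import numpy as np
import trimesh

mesh = trimesh.load("mesh_normalized.ply", process=False)

# Sample surface points and perturb them with Gaussian noise.
surface_points, _ = trimesh.sample.sample_surface(mesh, 100000)
points = surface_points + np.random.normal(scale=0.01, size=surface_points.shape)

# Signed distance at each point (trimesh convention: positive inside).
sdf = trimesh.proximity.signed_distance(mesh, points)
np.savez("sdf_samples.npz", points=points, sdf=sdf)
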
Flow samples

Generate correspondences from an identity in its t-pose to its posed instances.

# We're within npms/data_processing
python sample_flow.py -sigma 0.01
python sample_flow.py -sigma 0.002
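
The two sigma values produce samples at two noise scales around the surface. Since CAPE's t-pose and posed meshes share vertex topology, correspondences come for free: a barycentric location on a t-pose triangle maps to the same location on the posed triangle. A rough sketch of that idea (paths and the way noise is carried over are illustrative; see sample_flow.py for the real procedure):

# Rough sketch: correspondences between meshes with shared topology via
# matching faces and barycentric coordinates (illustrative, see sample_flow.py).
import numpy as np
import trimesh

t_pose = trimesh.load("a_t_pose/000000/mesh_normalized.ply", process=False)
posed = trimesh.load("pose_000123/mesh_normalized.ply", process=False)  # hypothetical path

# Sample barycentric locations on the t-pose surface ...
points_tpose, face_ids = trimesh.sample.sample_surface(t_pose, 100000)
bary = trimesh.triangles.points_to_barycentric(t_pose.triangles[face_ids], points_tpose)

# ... and evaluate the same barycentric locations on the posed mesh.
points_posed = (posed.triangles[face_ids] * bary[:, :, None]).sum(axis=1)

# Perturb the pairs with the same offset (a simplification).
noise = np.random.normal(scale=0.01, size=points_tpose.shape)
np.savez("flow_samples.npz",
         points_tpose=points_tpose + noise,
         points_posed=points_posed + noise)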

We're done generating data for CAPE! This was just an example, but as you've seen, the only thing you need is a dataset of meshes:

  • We need a t-pose mesh for each identity in the dataset; multiview_to_watertight_mesh.py can make these t-pose meshes watertight so that we can sample points and their SDF values.
  • For a given identity, we need surface correspondences between the t-pose and the posed meshes (but note that these posed meshes don't need to be watertight).

Training an NPM

Shape Latent Space

Set only_shape=True in config_train_HUMAN.py. Then, from within the npms folder, start the training:

python train.py
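
For intuition: the shape latent space is trained auto-decoder style (à la DeepSDF), jointly optimizing the MLP weights and one latent code per identity against the SDF samples generated earlier. A heavily simplified sketch of one training step (sizes and losses are illustrative; the real logic lives in train.py and config_train_HUMAN.py):

# Heavily simplified auto-decoder step for the shape latent space (illustrative).
import torch
import torch.nn as nn

num_identities, code_dim = 35, 256                     # sizes assumed for the sketch
shape_codes = nn.Embedding(num_identities, code_dim)   # one latent code per identity
shape_mlp = nn.Sequential(                             # stand-in for the real shape MLP
    nn.Linear(code_dim + 3, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 1),
)
optim = torch.optim.Adam(
    list(shape_codes.parameters()) + list(shape_mlp.parameters()), lr=5e-4)

def train_step(identity_ids, points, sdf_gt):
    # identity_ids: (B,) long, points: (B, 3), sdf_gt: (B, 1)
    codes = shape_codes(identity_ids)                         # (B, code_dim)
    sdf_pred = shape_mlp(torch.cat([codes, points], dim=-1))  # (B, 1)
    loss = nn.functional.l1_loss(sdf_pred, sdf_gt)
    loss = loss + 1e-4 * codes.pow(2).sum(dim=-1).mean()      # code regularizer
    optim.zero_grad(); loss.backward(); optim.step()
    return loss.item()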

Pose Latent Space

Set only_shape=False in config_train_HUMAN.py. We now need to load the best checkpoint from training the shape MLP. For that, go to config_train_HUMAN.py, make sure init_from = True where it first appears in the file, and then point it at your pretrained model later in the file:

init_from = "<model_name>"
checkpoint = <the_epoch_number_you_want_to_load>

Then, from within the npms folder, start the training:

python train.py

Once training converges, you're done: you now have latent spaces of shape and pose that you can play with.

You could, for example, fit your NPM to a monocular depth sequence, interpolate in latent space, or transfer shape and pose between identities; the following sections cover each of these.

Fitting an NPM to a Monocular Depth Sequence

Code Initialization

When fitting an NPM to a monocular depth sequence, it is recommended to have a relatively good initialization of the shape and pose codes to avoid falling into local minima. To this end, we learn a shape and a pose encoder that map an input depth map to a shape and a pose code, respectively.

We use the shape and pose codes that we learned at training time as targets for training the shape and pose encoders. You can use prepare_labels_shape_encoder.py and prepare_labels_pose_encoder.py to generate the dataset labels for this encoder training.

Train them like so:

python encode_shape_codes.py
python encode_pose_codes.py
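
In essence, each encoder is a straightforward regression to the codes recovered at NPM training time. A minimal sketch of that idea (architecture, grid size, and code size are assumptions, not the actual encoder):

# Minimal sketch: regress a latent code from a single-view voxel grid.
# Architecture, grid size (64^3) and code size (256) are assumptions.
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
    nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
    nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
    nn.Flatten(),
    nn.Linear(64 * 8 ** 3, 256),
)
optim = torch.optim.Adam(encoder.parameters(), lr=1e-4)

def train_step(voxels, target_codes):
    # voxels: (B, 1, 64, 64, 64); target_codes: (B, 256) from NPM training
    loss = nn.functional.mse_loss(encoder(voxels), target_codes)
    optim.zero_grad(); loss.backward(); optim.step()
    return loss.item()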

And regarding the data you need for training the encoder...

Data preparation: take a look at the script voxelize_multiview.py to prepare the single-view voxel grids that we require to train our encoders.

Test-time Optimization

Now you can fit NPMs to an input monocular depth sequence:

python fit_npm.py -o -d HUMAN -e <EXTRA_NAME_IF_YOU_WANT>

The -o flag stands for optimize; the -d flag selects the kind of dataset (HUMAN, MANO); and the -e flag appends a string to the name of the current optimization run.

You'll have to take a look at config_eval_HUMAN.py and set the name of your trained model (exp_model) and its hyperparameters, as well as the dataset (dataset_name) you want to evaluate on.

It's definitely not the cleanest and easiest config file, sorry for that!

Data preparation: take a look at the script compute_partial_sdf_grid.py to prepare the single-view SDF grid that we assume as input at test time.
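
Conceptually, test-time fitting freezes the trained MLPs and optimizes a single shape code plus per-frame pose codes against the observed partial SDF input. A simplified, self-contained sketch of that loop (the stand-in MLPs, loss weights, and flow direction are illustrative; see fit_npm.py for the real thing):

# Simplified sketch of test-time optimization: freeze the MLPs, optimize the codes.
import torch
import torch.nn as nn

D_s, D_p = 256, 256  # code dimensions, assumed

# Frozen stand-ins for the pretrained shape and pose MLPs.
shape_mlp = nn.Sequential(nn.Linear(D_s + 3, 512), nn.ReLU(), nn.Linear(512, 1))
pose_mlp = nn.Sequential(nn.Linear(D_s + D_p + 3, 512), nn.ReLU(), nn.Linear(512, 3))
for param in list(shape_mlp.parameters()) + list(pose_mlp.parameters()):
    param.requires_grad_(False)

# Codes to optimize; in practice initialized by the encoders above.
shape_code = torch.zeros(1, D_s, requires_grad=True)
pose_code = torch.zeros(1, D_p, requires_grad=True)  # one frame shown

# Observed samples from the partial single-view SDF grid (random stand-ins).
points = torch.rand(1024, 3) - 0.5
sdf_observed = torch.zeros(1024, 1)

optim = torch.optim.Adam([shape_code, pose_code], lr=1e-3)
for _ in range(500):
    s = shape_code.expand(points.shape[0], -1)
    p = pose_code.expand(points.shape[0], -1)
    flow = pose_mlp(torch.cat([s, p, points], dim=-1))           # deformation field
    sdf_pred = shape_mlp(torch.cat([s, points + flow], dim=-1))  # canonical SDF
    loss = nn.functional.l1_loss(sdf_pred, sdf_observed)
    loss = loss + 1e-4 * (shape_code.pow(2).sum() + pose_code.pow(2).sum())
    optim.zero_grad(); loss.backward(); optim.step()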

Visualization

With the following script you can visualize your fitting. Have a look at config_viz_OURS.py and set the name of your trained model (exp_model) as well as the name of the optimization run (run_name) of the test-time fitting you just computed.

python viz_all_methods.py -m NPM -d HUMAN

There are a bunch of other scripts for visualization. They're definitely not cleaned up, but I kept them here anyway in case they might be useful for you as a starting point.

Compute metrics

python compute_errors.py -n <name_of_optimization_run>
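
For reference, a symmetric Chamfer distance between a reconstructed and a ground-truth point cloud can be computed as below (a generic sketch, not necessarily the exact metric the script reports):

# Generic symmetric (squared) Chamfer distance between two point clouds.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_l2(points_a: np.ndarray, points_b: np.ndarray) -> float:
    d_ab, _ = cKDTree(points_b).query(points_a)  # nearest-neighbor dists a -> b
    d_ba, _ = cKDTree(points_a).query(points_b)  # nearest-neighbor dists b -> a
    return float((d_ab ** 2).mean() + (d_ba ** 2).mean())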

Latent-space Interpolation

Check out the corresponding scripts in the repository.
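
The idea is simply to interpolate linearly between two learned codes and decode each intermediate code back into a mesh. A sketch (the decoding itself, e.g. marching cubes over the predicted SDF, is omitted):

# Sketch: linear interpolation between two learned latent codes.
import torch

def interpolate_codes(code_a: torch.Tensor, code_b: torch.Tensor, steps: int = 10):
    """Yield codes linearly interpolated from code_a to code_b."""
    for i in range(steps + 1):
        alpha = i / steps
        yield (1.0 - alpha) * code_a + alpha * code_b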

Shape and Pose Transfer

Check out the corresponding scripts in the repository.
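
Because shape and pose live in separate latent spaces, transfer amounts to recombining codes from different fittings, e.g. decoding identity A's shape code with identity B's per-frame pose codes. A sketch with stand-in codes:

# Sketch: shape/pose transfer by recombining codes from two separate fits.
import torch

shape_code_a = torch.randn(1, 256)    # identity A's fitted shape code (stand-in)
pose_codes_b = torch.randn(50, 256)   # identity B's fitted per-frame pose codes (stand-in)

# Decoding each (shape, pose) pair with the trained MLPs re-animates
# identity A with identity B's motion.
sequence = [(shape_code_a, pose_codes_b[t:t + 1]) for t in range(pose_codes_b.shape[0])]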

Pretrained Models

Download pre-trained models here

License

NPMs is released under the MIT License. See the LICENSE file for more details.

Check the corresponding LICENSES of the projects under the external folder.

For instance, we make use of libmesh and libvoxelize, which come from IFNet. Please check their LICENSE.

We need some helper functions from LDIF, namely base_util.py and file_util.py, which should already be under utils. Check the license and copyright in those files.
