MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images

Overview

This repository contains the implementation of our paper MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images.

You can find detailed usage instructions for training your own models and using pretrained models below.

If you find our code useful, please cite:

@InProceedings{MetaAvatar:NeurIPS:2021,
  title = {MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images},
  author = {Shaofei Wang and Marko Mihajlovic and Qianli Ma and Andreas Geiger and Siyu Tang},
  booktitle = {Advances in Neural Information Processing Systems},
  year = {2021}
}

Installation

This repository has been tested on the following platform:

  1. Python 3.7, PyTorch 1.7.1 with CUDA 10.2 and cuDNN 7.6.5, Ubuntu 20.04

To clone the repo, run either:

git clone --recursive https://github.com/taconite/MetaAvatar-release.git

or

git clone https://github.com/taconite/MetaAvatar-release.git
git submodule update --init --recursive

First, make sure that you have all dependencies in place. The simplest way to do so is to use anaconda.

You can create an anaconda environment called meta-avatar using

conda env create -f environment.yml
conda activate meta-avatar

(Optional) If you want to use the evaluation code under evaluation/, you need to install kaolin. Download the code from the kaolin repository, check out commit e7e513173bd4159ae45be6b3e156a3ad156a3eb9, and install it according to its instructions.
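For reference, pinning kaolin to that commit could look like the following sketch; the final install command is an assumption based on kaolin's setup at the time, so defer to the instructions in the kaolin repository itself:

git clone https://github.com/NVIDIAGameWorks/kaolin.git
cd kaolin
git checkout e7e513173bd4159ae45be6b3e156a3ad156a3eb9
python setup.py develop  # assumed install step; follow kaolin's own README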

Build the dataset

To prepare the dataset for training/fine-tuning/evaluation, you first have to download the CAPE dataset from the CAPE website.

  1. Download SMPL v1.0, clean up the chumpy objects inside the models using this code, then rename the files and extract them to ./body_models/smpl/. Eventually, the ./body_models folder should have the following structure:
    body_models
     └-- smpl
         ├-- male
         |   └-- model.pkl
         └-- female
             └-- model.pkl
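A minimal sketch of the rename step, assuming the standard SMPL v1.0 file names (the source paths below are placeholders for wherever you extracted and cleaned the downloaded models):

mkdir -p body_models/smpl/male body_models/smpl/female
# hypothetical source paths; adjust to your extraction directory
cp /path/to/basicmodel_m_lbs_10_207_0_v1.0.0.pkl body_models/smpl/male/model.pkl
cp /path/to/basicModel_f_lbs_10_207_0_v1.0.0.pkl body_models/smpl/female/model.pkl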

(Optional) If you want to use the evaluation code under evaluation/, you also need to download all the .pkl files from the IP-Net repository and put them under ./body_models/misc/.

Finally, run the following script to extract the SMPL parameters needed by our code:

python extract_smpl_parameters.py

The extracted SMPL parameters will be saved to ./body_models/misc/.

  2. Extract the CAPE dataset to an arbitrary path, denoted as ${CAPE_ROOT}. The extracted dataset should have the following structure:
    ${CAPE_ROOT}
     ├-- 00032
     ├-- 00096
     |   ...
     ├-- 03394
     └-- cape_release
  3. Create a data directory under the project directory.
  4. Modify the parameters in preprocess/build_dataset.sh accordingly (i.e. set --dataset_path to ${CAPE_ROOT}) to extract training/fine-tuning/evaluation data.
  5. Run preprocess/build_dataset.sh to preprocess the CAPE dataset (a minimal invocation is sketched after this list).

(Optional) If you want to evaluate performance on the interpolation task, you need to process the CAPE data again to generate processed data at the full frame rate. Simply comment out the first command and uncomment the second command in preprocess/build_dataset.sh, then run the script.
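Putting steps 3-5 together, a minimal run could look like this (with --dataset_path already set to ${CAPE_ROOT} inside the script):

mkdir data
bash preprocess/build_dataset.sh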

Pre-trained models

We provide pre-trained models, including 1) forward/backward skinning networks for full point clouds (stage 0), 2) forward/backward skinning networks for depth point clouds (stage 0), 3) a meta-learned static SDF (stage 1), and 4) a meta-learned hypernetwork (stage 2). After downloading them, please put them in the respective folders under ./out/metaavatar.
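The exact subfolder names are determined by the respective configs; a purely hypothetical layout of ./out/metaavatar could look like:

    out
     └-- metaavatar
         ├-- <stage-0 skinning networks, full point clouds>
         ├-- <stage-0 skinning networks, depth point clouds>
         ├-- <stage-1 meta-learned static SDF>
         └-- <stage-2 meta-learned hypernetwork>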

Fine-tuning from the pre-trained models

We provide a script to fine-tune subject/cloth-type specific avatars in batch. Simply run:

bash run_fine_tuning.sh

This conducts fine-tuning with the default setting (subject 00122 with the shortlong cloth type). You can comment/uncomment/add lines in jobs/splits to modify the data splits.

Training

To train new networks from scratch, run

python train.py --num-workers 8 configs/meta-avatar/${config}.yaml

You can train the two stage 0 models in parallel; the stage 1 model depends on the stage 0 models, and the stage 2 model depends on the stage 1 model.
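For example, with hypothetical placeholder config names (the actual YAML files live under configs/meta-avatar/), the dependency order would be:

# stage 0: the two skinning-network configs can run in parallel
python train.py --num-workers 8 configs/meta-avatar/${stage0_full_config}.yaml
python train.py --num-workers 8 configs/meta-avatar/${stage0_depth_config}.yaml
# stage 1: requires the stage 0 checkpoints
python train.py --num-workers 8 configs/meta-avatar/${stage1_config}.yaml
# stage 2: requires the stage 1 checkpoint
python train.py --num-workers 8 configs/meta-avatar/${stage2_config}.yaml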

You can monitor the training process at http://localhost:6006 using TensorBoard:

tensorboard --logdir ${OUTPUT_DIR}/logs --port 6006

where you replace ${OUTPUT_DIR} with the respective output directory.

Evaluation

To evaluate the generated meshes, use the following script:

bash run_evaluation.sh

Again, this conducts evaluation with the default setting (subject 00122 with the shortlong cloth type). You can comment/uncomment/add lines in jobs/splits to modify the data splits.

License

The MetaAvatar code is released under the MIT License, which covers:

extract_smpl_parameters.py
run_fine_tuning.py
train.py
configs/
jobs/
depth2mesh/
preprocess/

The SIREN networks are borrowed from the official SIREN repository. Mesh extraction code is borrowed from the DeepSDF repository.

Modules not covered by our license are:

  1. Modified code from IP-Net (./evaluation);
  2. Modified code from SMPL-X (./human_body_prior).

For these parts, please consult their respective licenses and cite the respective papers.