Code for "Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans" (CVPR 2021, best paper candidate)

Overview

News

  • 05/17/2021 To make comparisons on ZJU-MoCap easier, we provide the quantitative and qualitative results of other methods here, including Neural Volumes, Multi-view Neural Human Rendering, and Deferred Neural Human Rendering.
  • 05/13/2021 To make it easier for follow-up works to compare with our model, we provide our rendering results on ZJU-MoCap here, together with a document describing the training and test protocols.
  • 05/12/2021 The code now supports testing and visualization on unseen human poses.
  • 05/12/2021 We updated the ZJU-MoCap dataset with better-fitted SMPL parameters obtained with EasyMocap. We also released a website for visualization. Please see here for how to use the provided SMPL parameters.

Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans

Project Page | Video | Paper | Data


Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans
Sida Peng, Yuanqing Zhang, Yinghao Xu, Qianqian Wang, Qing Shuai, Hujun Bao, Xiaowei Zhou
CVPR 2021

Any questions or discussions are welcome!

Installation

Please see INSTALL.md for manual installation.

Installation using docker

Please see docker/README.md.

Thanks to Zhaoyi Wan for providing the docker implementation.

Run the code on a custom dataset

Please see CUSTOM.

Run the code on People-Snapshot

Please see INSTALL.md to download the dataset.

We provide the pretrained models here.

Process People-Snapshot

We already provide some processed data. If you want to process more videos from People-Snapshot, you can use tools/process_snapshot.py.

You can also visualize the SMPL parameters of People-Snapshot with tools/vis_snapshot.py.
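
If you want to sanity-check the processed output, the sketch below shows one way to load and inspect saved SMPL parameters with NumPy. It is only a minimal illustration: the file path and the dictionary layout are assumptions, not the exact output format of tools/process_snapshot.py.

    import numpy as np

    # Hypothetical path and layout; adjust to the files actually written by
    # tools/process_snapshot.py for your sequence.
    params = np.load('data/people_snapshot/female-3-casual/params.npy',
                     allow_pickle=True).item()

    # Print every array stored in the parameter dictionary with its shape,
    # e.g. the SMPL pose, shape, and global transformation parameters.
    for key, value in params.items():
        print(key, np.asarray(value).shape)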

Visualization on People-Snapshot

Take the visualization on female-3-casual as an example. The command lines for visualization are recorded in visualize.sh.

  1. Download the corresponding pretrained model and put it at $ROOT/data/trained_model/if_nerf/female3c/latest.pth.

  2. Visualization:

    • Visualize novel views of a single frame
    python run.py --type visualize --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c vis_novel_view True num_render_views 144
    


    • Visualize views of the dynamic human with a fixed camera
    python run.py --type visualize --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c vis_novel_pose True
    


    • Visualize mesh
    # generate meshes
    python run.py --type visualize --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c vis_mesh True train.num_workers 0
    # visualize a specific mesh
    python tools/render_mesh.py --exp_name female3c --dataset people_snapshot --mesh_ind 226
    


  3. The results of visualization are located at $ROOT/data/render/female3c and $ROOT/data/perform/female3c.
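
For the "Visualize mesh" step above, the meshes are extracted from the learned implicit field. The sketch below only illustrates the general idea of turning a density volume sampled on a regular grid into a triangle mesh with marching cubes; it is a conceptual example rather than the repository's implementation, and the file name, density threshold, voxel size, and volume origin are assumptions.

    import numpy as np
    import trimesh
    from skimage import measure

    # Hypothetical density volume (X x Y x Z) sampled from the trained
    # implicit field inside the SMPL bounding box.
    density = np.load('density_volume.npy')

    # Extract the isosurface; the threshold is a tunable hyperparameter.
    verts, faces, normals, _ = measure.marching_cubes(density, level=10.0)

    # Map vertices from voxel indices back to world coordinates, assuming a
    # known voxel size and volume origin (hypothetical values).
    voxel_size, origin = 0.005, np.array([-1.0, -1.0, -1.0])
    verts = verts * voxel_size + origin

    trimesh.Trimesh(vertices=verts, faces=faces).export('mesh.ply')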

Training on People-Snapshot

Take the training on female-3-casual as an example. The command lines for training are recorded in train.sh.

  1. Train:
    # training
    python train_net.py --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c resume False
    # distributed training
    python -m torch.distributed.launch --nproc_per_node=4 train_net.py --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c resume False gpus "0, 1, 2, 3" distributed True
    
  2. Train with white background:
    # training
    python train_net.py --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c resume False white_bkgd True
    
  3. Tensorboard:
    tensorboard --logdir data/record/if_nerf
    

Run the code on ZJU-MoCap

Please see INSTALL.md to download the dataset.

We provide the pretrained models here.

Potential issues with the provided SMPL parameters

  1. The newly fitted parameters are stored in new_params. Currently, the released pretrained models are trained on the previously fitted parameters, which are stored in params.
  2. The SMPL parameters of ZJU-MoCap use a different definition from that of MPI's smplx.
    • If you want to extract vertices from the provided SMPL parameters, please use zju_smpl/extract_vertices.py.
    • The reason we use the current definition is described here.

It is also fine to train Neural Body with SMPL parameters fitted by smplx.
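
As a rough illustration of the convention difference, the sketch below recovers world-space vertices from parameters that store the global rotation Rh and translation Th outside the SMPL pose, using the smplx package. This is a simplified sketch based on our reading of the provided parameters; the file path, key names, and model path are assumptions, and zju_smpl/extract_vertices.py remains the reference implementation.

    import cv2
    import numpy as np
    import torch
    import smplx

    # Hypothetical parameter file; the keys poses (72,), shapes (10,), Rh (3,)
    # and Th (3,) are assumptions about the stored layout.
    params = np.load('new_params/0.npy', allow_pickle=True).item()
    poses = params['poses'].reshape(-1)
    shapes = params['shapes'].reshape(-1)
    Rh = params['Rh'].reshape(3)
    Th = params['Th'].reshape(1, 3)

    # Pose the body with zero global orientation and translation; the global
    # rigid transform is applied to the vertices afterwards, which is where
    # this convention differs from feeding everything into smplx directly.
    body = smplx.create('data/smplx_models', model_type='smpl', gender='neutral')
    out = body(betas=torch.from_numpy(shapes[None]).float(),
               body_pose=torch.from_numpy(poses[None, 3:]).float(),
               global_orient=torch.zeros(1, 3),
               transl=torch.zeros(1, 3))
    verts = out.vertices[0].detach().numpy()

    # Apply the global rotation (axis-angle Rh) and translation Th.
    R = cv2.Rodrigues(Rh)[0]
    world_verts = verts @ R.T + Th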

Test on ZJU-MoCap

The command lines for testing are recorded in test.sh.

Take the test on sequence 313 as an example.

  1. Download the corresponding pretrained model and put it at $ROOT/data/trained_model/if_nerf/xyzc_313/latest.pth.
  2. Test on training human poses:
    python run.py --type evaluate --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313
    
  3. Test on unseen human poses:
    python run.py --type evaluate --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 test_novel_pose True
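
The evaluate commands above report image-quality metrics (PSNR and SSIM in the paper). As a reference point, here is a minimal sketch of computing such metrics for a rendered image against ground truth with scikit-image; it assumes both images are H x W x 3 float arrays in [0, 1] and is not the repository's exact evaluation code.

    import numpy as np
    from skimage.metrics import structural_similarity

    def psnr(pred, gt):
        """Peak signal-to-noise ratio for images in [0, 1]; higher is better."""
        mse = np.mean((pred - gt) ** 2)
        return -10.0 * np.log10(mse)

    # Hypothetical inputs; replace with a rendered image and its ground truth.
    pred = np.random.rand(512, 512, 3)
    gt = np.random.rand(512, 512, 3)
    print('PSNR:', psnr(pred, gt))
    # channel_axis requires scikit-image >= 0.19; older versions use multichannel=True.
    print('SSIM:', structural_similarity(pred, gt, channel_axis=2, data_range=1.0))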
    

Visualization on ZJU-MoCap

Take the visualization on sequence 313 as an example. The command lines for visualization are recorded in visualize.sh.

  1. Download the corresponding pretrained model and put it at $ROOT/data/trained_model/if_nerf/xyzc_313/latest.pth.

  2. Visualization:

    • Visualize novel views of a single frame
    python run.py --type visualize --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 vis_novel_view True
    


    • Visualize novel views of a single frame by rotating the SMPL model
    python run.py --type visualize --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 vis_novel_view True num_render_views 100
    


    • Visualize views of the dynamic human with a fixed camera
    python run.py --type visualize --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 vis_novel_pose True num_render_frame 1000 num_render_views 1
    


    • Visualize views of the dynamic human with a rotating camera
    python run.py --type visualize --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 vis_novel_pose True num_render_frame 1000
    


    • Visualize mesh
    # generate meshes
    python run.py --type visualize --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 vis_mesh True train.num_workers 0
    # visualize a specific mesh
    python tools/render_mesh.py --exp_name xyzc_313 --dataset zju_mocap --mesh_ind 0
    


  3. The results of visualization are located at $ROOT/data/render/xyzc_313 and $ROOT/data/perform/xyzc_313.

Training on ZJU-MoCap

Take the training on sequence 313 as an example. The command lines for training are recorded in train.sh.

  1. Train:
    # training
    python train_net.py --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 resume False
    # distributed training
    python -m torch.distributed.launch --nproc_per_node=4 train_net.py --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 resume False gpus "0, 1, 2, 3" distributed True
    
  2. Train with white background:
    # training
    python train_net.py --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 resume False white_bkgd True
    
  3. Tensorboard:
    tensorboard --logdir data/record/if_nerf
    

Citation

If you find this code useful for your research, please use the following BibTeX entry.

@inproceedings{peng2021neural,
  title={Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans},
  author={Peng, Sida and Zhang, Yuanqing and Xu, Yinghao and Wang, Qianqian and Shuai, Qing and Bao, Hujun and Zhou, Xiaowei},
  booktitle={CVPR},
  year={2021}
}