
Populating 3D Scenes by Learning Human-Scene Interaction

[Project Page] [Paper]

POSA Examples

License

Software Copyright License for non-commercial scientific research purposes. Please read carefully the following terms and conditions and any accompanying documentation before you download and/or use the POSA data, model and software (the "Data & Software"), including 3D meshes, images, videos, textures, software, scripts, and animations. By downloading and/or using the Data & Software (including downloading, cloning, installing, and any other use of the corresponding GitHub repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Data & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.

Description

This repository contains the training, random sampling, and scene population code used for the experiments in POSA.

Installation

To install the necessary dependencies, run the following command:

    pip install -r requirements.txt

The code has been tested with Python 3.7, CUDA 10.0, cuDNN 7.5, and PyTorch 1.7 on Ubuntu 20.04.
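If you want to verify your environment before running the heavier commands, a quick check like the following can help (a minimal sketch; it only assumes PyTorch is installed):

    import torch

    # Versions this repository was tested against: PyTorch 1.7, CUDA 10.0, cuDNN 7.5.
    print(f"PyTorch: {torch.__version__}")
    print(f"CUDA available: {torch.cuda.is_available()}")
    if torch.cuda.is_available():
        print(f"CUDA: {torch.version.cuda}")
        print(f"cuDNN: {torch.backends.cudnn.version()}")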

Dependencies

POSA_dir

To use the code, you need to download POSA_dir.zip. After unzipping it, you should have a directory with the following structure:

POSA_dir
├── cam2world
├── data
├── mesh_ds
├── scenes
├── sdf
└── trained_models

The content of each folder is explained below:

  • trained_models contains two trained models: one trained on contact only, and the other trained on both contact and semantics.
  • data contains the train and test data extracted from the PROX Dataset and the PROX-E Dataset.
  • scenes contains the 12 scenes from the PROX Dataset.
  • sdf contains the signed distance fields for the scenes in the previous folder.
  • mesh_ds contains mesh downsampling and upsampling files similar to those used in COMA; a sketch of how such matrices are applied follows this list.
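In COMA-style pipelines, downsampling and upsampling are linear maps stored as sparse matrices, so applying one is a single sparse matrix product. The snippet below only illustrates that idea; the matrix here is randomly generated and the shapes are illustrative, not the actual contents of mesh_ds:

    import numpy as np
    import scipy.sparse as sp

    # Hypothetical COMA-style downsampling matrix D of shape (n_down, n_full).
    # SMPL-X meshes have 10475 vertices; n_down is purely illustrative.
    n_full, n_down = 10475, 655
    D = sp.random(n_down, n_full, density=0.001, format="csr")

    vertices = np.random.rand(n_full, 3)   # stand-in for SMPL-X vertices
    vertices_ds = D @ vertices             # downsampling is one sparse matmul
    print(vertices_ds.shape)               # (655, 3)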

SMPL-X

You need to get the SMPL-X body model. Please extract the folder, rename it to smplx_models, and place it in the POSA_dir above.

AGORA

In addition, you need to get the POSA_rp_poses.zip file from the AGORA Dataset and extract it in POSA_dir. This file contains a number of test poses to be used in the next steps. Note that you don't need the whole AGORA dataset.

Finally, run the following command or add it to your ~/.bashrc:

export POSA_dir=<path to your POSA_dir>
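After these steps, a quick sanity check like the one below can confirm the expected layout (a minimal sketch based on the structure listed above):

    import os

    posa_dir = os.environ["POSA_dir"]  # set via the export above
    expected = ["cam2world", "data", "mesh_ds", "scenes", "sdf",
                "trained_models", "smplx_models", "POSA_rp_poses"]
    for name in expected:
        path = os.path.join(posa_dir, name)
        print(("OK      " if os.path.isdir(path) else "MISSING ") + path)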

Inference

You can test POSA using the trained models provided. Below we provide examples of how to generate POSA features and how to populate a 3D scene.

Random Sampling

To generate random features from a trained model, run the following command:

python src/gen_rand_samples.py --config cfg_files/contact.yaml --checkpoint_path $POSA_dir/trained_models/contact.pt --pkl_file_path $POSA_dir/POSA_rp_poses/rp_aaron_posed_001_0_0.pkl --render 1 --viz 1 --num_rand_samples 3 

Or

python src/gen_rand_samples.py --config cfg_files/contact_semantics.yaml --checkpoint_path $POSA_dir/trained_models/contact_semantics.pt --pkl_file_path $POSA_dir/POSA_rp_poses/rp_aaron_posed_001_0_0.pkl --render 1 --viz 1 --num_rand_samples 3 

This will open a window showing the generated features for the specified .pkl file. It also renders the features to the random_samples folder in POSA_dir.

The number of generated feature maps can be controlled with the --num_rand_samples flag.

If you don't have a screen, you can turn off the visualization with --viz 0.

If you don't have CUDA installed, you can add the flag --use_cuda 0. This applies to all commands in this repository.

You can also run the same command on a whole folder of test poses:

python src/gen_rand_samples.py --config cfg_files/contact_semantics.yaml --checkpoint_path $POSA_dir/trained_models/contact_semantics.pt --pkl_file_path $POSA_dir/POSA_rp_poses --render 1 --viz 1 --num_rand_samples 3 

Scene Population

Given a body mesh from the AGORA Dataset, POSA automatically places the body mesh in a 3D scene:

python src/affordance.py --config cfg_files/contact_semantics.yaml --checkpoint_path $POSA_dir/trained_models/contact_semantics.pt --pkl_file_path $POSA_dir/POSA_rp_poses/rp_aaron_posed_001_0_0.pkl --scene_name MPH16 --render 1 --viz 1 

This will open a window showing the placed body in the scene. It also renders the placements to the affordance folder in POSA_dir.

You can control the number of placements for the same body mesh in a scene with the --num_rendered_samples flag; the default value is 1.

The generated feature maps can be shown by adding --show_gen_sample 1.

You can also run the same script on a whole folder of test poses:

python src/affordance.py --config cfg_files/contact_semantics.yaml --checkpoint_path $POSA_dir/trained_models/contact_semantics.pt --pkl_file_path $POSA_dir/POSA_rp_poses --scene_name MPH16 --render 1 --viz 1 

To place clothed body meshes, you first need to buy the Renderpeople assets or get the free models. Create a folder rp_clothed_meshes in POSA_dir and place all the clothed body .obj meshes in it. Then run this command:

python src/affordance.py --config cfg_files/contact_semantics.yaml --checkpoint_path $POSA_dir/trained_models/contact_semantics.pt --pkl_file_path $POSA_dir/POSA_rp_poses/rp_aaron_posed_001_0_0.pkl --scene_name MPH16 --render 1 --viz 1 --use_clothed_mesh 1

Testing on Your Own Poses

POSA has been tested on the AGORA dataset only. Nonetheless, you can try POSA with any SMPL-X poses you have. You just need a .pkl file with the SMPL-X body parameters and the gender. Your SMPL-X vertices must be brought to a canonical form similar to the POSA training data: the vertices should be centered at the pelvis joint, with the x axis pointing to the left, the y axis pointing backward, and the z axis pointing upward, as shown in the figure below. The x, y, z axes are denoted by the red, green, and blue colors, respectively.

canonical_form

See the function pkl_to_canonical in data_utils.py for an example of how to do this transformation.
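As a rough illustration of that canonical form, the sketch below centers a body at its pelvis joint. It assumes the smplx package and an illustrative .pkl layout; it is not the repository's pkl_to_canonical implementation, which also handles the axis conventions described above:

    import os
    import pickle
    import torch
    import smplx

    # File path and dictionary keys are illustrative, not the actual PROX format.
    with open("my_pose.pkl", "rb") as f:
        params = pickle.load(f)

    model = smplx.create(os.path.join(os.environ["POSA_dir"], "smplx_models"),
                         model_type="smplx", gender=params["gender"])
    output = model(body_pose=torch.tensor(params["body_pose"], dtype=torch.float32))

    vertices = output.vertices.detach().numpy().squeeze()
    pelvis = output.joints.detach().numpy().squeeze()[0]  # joint 0 is the pelvis
    vertices_canonical = vertices - pelvis                # center at the pelvis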

Training

To retrain POSA from scratch, run the following command:

python src/train_posa.py --config cfg_files/contact_semantics.yaml

Visualize Ground Truth Data

You can also visualize the training data:

python src/show_gt.py --config cfg_files/contact_semantics.yaml --train_data 1

Or the test data:

python src/show_gt.py --config cfg_files/contact_semantics.yaml --train_data 0

Note that the ground truth data has been downsampled to speed up training, as explained in the paper; see the training details in the appendices.

Citation

If you find this Model & Software useful in your research, we kindly ask you to cite:

@inproceedings{Hassan:CVPR:2021,
    title = {Populating {3D} Scenes by Learning Human-Scene Interaction},
    author = {Hassan, Mohamed and Ghosh, Partha and Tesch, Joachim and Tzionas, Dimitrios and Black, Michael J.},
    booktitle = {Proceedings {IEEE/CVF} Conf.~on Computer Vision and Pattern Recognition ({CVPR})},
    month = jun,
    month_numeric = {6},
    year = {2021}
}

If you use the extracted training data, scenes, or sdf, please cite:

@inproceedings{PROX:2019,
  title = {Resolving {3D} Human Pose Ambiguities with {3D} Scene Constraints},
  author = {Hassan, Mohamed and Choutas, Vasileios and Tzionas, Dimitrios and Black, Michael J.},
  booktitle = {International Conference on Computer Vision},
  month = oct,
  year = {2019},
  url = {https://prox.is.tue.mpg.de},
  month_numeric = {10}
}
@inproceedings{PSI:2019,
  title = {Generating 3D People in Scenes without People},
  author = {Zhang, Yan and Hassan, Mohamed and Neumann, Heiko and Black, Michael J. and Tang, Siyu},
  booktitle = {Computer Vision and Pattern Recognition (CVPR)},
  month = jun,
  year = {2020},
  url = {https://arxiv.org/abs/1912.02923},
  month_numeric = {6}
}

If you use the AGORA test poses, please cite:

@inproceedings{Patel:CVPR:2021,
  title = {{AGORA}: Avatars in Geography Optimized for Regression Analysis},
  author = {Patel, Priyanka and Huang, Chun-Hao P. and Tesch, Joachim and Hoffmann, David T. and Tripathi, Shashank and Black, Michael J.},
  booktitle = {Proceedings IEEE/CVF Conf.~on Computer Vision and Pattern Recognition (CVPR)},
  month = jun,
  year = {2021},
  month_numeric = {6}
}

Contact

For commercial licensing (and all related questions for business applications), please contact [email protected].
