Populating 3D Scenes by Learning Human-Scene Interaction https://posa.is.tue.mpg.de/


[Project Page] [Paper]

POSA Examples

License

Software Copyright License for non-commercial scientific research purposes. Please read carefully the following terms and conditions and any accompanying documentation before you download and/or use the POSA data, model and software, (the "Data & Software"), including 3D meshes, images, videos, textures, software, scripts, and animations. By downloading and/or using the Data & Software (including downloading, cloning, installing, and any other use of the corresponding github repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Data & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.

Description

This repository contains the training, random sampling, and scene population code used for the experiments in POSA.

Installation

To install the necessary dependencies run the following command:

    pip install -r requirements.txt

The code has been tested with Python 3.7, CUDA 10.0, CuDNN 7.5 and PyTorch 1.7 on Ubuntu 20.04.

Dependencies

POSA_dir

To use the code, you first need to download POSA_dir.zip. After unzipping, you should have a directory with the following structure:

POSA_dir
├── cam2world
├── data
├── mesh_ds
├── scenes
├── sdf
└── trained_models

The content of each folder is explained below:

  • trained_models contains two trained models: one trained on contact only and one trained on both contact and semantics.
  • data contains the train and test data extracted from the PROX Dataset and the PROX-E Dataset.
  • scenes contains the 12 scenes from the PROX Dataset.
  • sdf contains the signed distance fields for the scenes in the previous folder.
  • mesh_ds contains mesh downsampling and upsampling files similar to the ones in COMA (see the sketch after this list).
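
COMA-style downsampling and upsampling are typically stored as sparse matrices that are applied directly to the vertex array. The sketch below only illustrates that operation; the file name and the scipy .npz storage format are assumptions, not necessarily how this repository stores the mesh_ds files.

    # Illustrative only: apply a COMA-style sparse downsampling matrix to SMPL-X vertices.
    # The file name 'D_example.npz' and the .npz storage format are assumptions.
    import numpy as np
    import scipy.sparse as sp

    vertices = np.zeros((10475, 3))                    # SMPL-X meshes have 10475 vertices
    D = sp.load_npz('POSA_dir/mesh_ds/D_example.npz')  # hypothetical (V_down x 10475) sparse matrix
    vertices_down = D @ vertices                       # downsampled vertices, shape (V_down, 3)
    print(vertices_down.shape)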

SMPL-X

You need to download the SMPL-X body model. Extract the folder, rename it to smplx_models, and place it in the POSA_dir above.

AGORA

In addition, you need to get the POSA_rp_poses.zip file from the AGORA Dataset and extract it into POSA_dir. This file contains a number of test poses used in the next steps. Note that you don't need the whole AGORA dataset.

Finally, run the following command or add it to your ~/.bashrc:

export POSA_dir=/path/to/your/POSA_dir
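
The commands below reference this variable (e.g. $POSA_dir/trained_models/contact.pt). A minimal, purely illustrative sanity check from Python that it is set and points at the unzipped directory:

    # Sanity check that POSA_dir is set and points to the unzipped directory.
    import os

    posa_dir = os.environ.get('POSA_dir')
    assert posa_dir and os.path.isdir(posa_dir), 'POSA_dir is not set or does not exist'
    print(sorted(os.listdir(posa_dir)))  # expect cam2world, data, mesh_ds, scenes, sdf, trained_models, ...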

Inference

You can test POSA using the trained models provided. Below we provide examples of how to generate POSA features and how to populate a 3D scene.

Random Sampling

To generate random features from a trained model, run the following command

python src/gen_rand_samples.py --config cfg_files/contact.yaml --checkpoint_path $POSA_dir/trained_models/contact.pt --pkl_file_path $POSA_dir/POSA_rp_poses/rp_aaron_posed_001_0_0.pkl --render 1 --viz 1 --num_rand_samples 3 

Or

python src/gen_rand_samples.py --config cfg_files/contact_semantics.yaml --checkpoint_path $POSA_dir/trained_models/contact_semantics.pt --pkl_file_path $POSA_dir/POSA_rp_poses/rp_aaron_posed_001_0_0.pkl --render 1 --viz 1 --num_rand_samples 3 

This will open a window showing the generated features for the specified pkl file. It also renders the features to the folder random_samples in POSA_dir.

The number of generated feature maps can be controlled by the flag num_rand_samples.

If you don't have a screen, you can turn off the visualization with --viz 0.

If you don't have CUDA installed, you can add the flag --use_cuda 0. This applies to all commands in this repository.
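
Internally, a flag like --use_cuda typically just selects the torch device; the snippet below is only a sketch of that common pattern, not the repository's actual code.

    # Common pattern for a --use_cuda style flag: fall back to CPU when CUDA is
    # unavailable or explicitly disabled. Variable names here are illustrative.
    import torch

    use_cuda = False  # e.g. the value parsed from --use_cuda 0
    device = torch.device('cuda' if use_cuda and torch.cuda.is_available() else 'cpu')
    print(device)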

You can also run the same command on the whole folder of test poses

python src/gen_rand_samples.py --config cfg_files/contact_semantics.yaml --checkpoint_path $POSA_dir/trained_models/contact_semantics.pt --pkl_file_path $POSA_dir/POSA_rp_poses --render 1 --viz 1 --num_rand_samples 3 

Scene Population

Given a body mesh from the AGORA Dataset, POSA automatically places the body mesh in a 3D scene.

python src/affordance.py --config cfg_files/contact_semantics.yaml --checkpoint_path $POSA_dir/trained_models/contact_semantics.pt --pkl_file_path $POSA_dir/POSA_rp_poses/rp_aaron_posed_001_0_0.pkl --scene_name MPH16 --render 1 --viz 1 

This will open a window showing the placed body in the scene. It also renders the placements to the folder affordance in POSA_dir.

You can control the number of placements for the same body mesh in a scene using the flag num_rendered_samples; the default value is 1.

The generated feature maps can be shown by adding --show_gen_sample 1.

You can also run the same script on the whole folder of test poses

python src/affordance.py --config cfg_files/contact_semantics.yaml --checkpoint_path $POSA_dir/trained_models/contact_semantics.pt --pkl_file_path $POSA_dir/POSA_rp_poses --scene_name MPH16 --render 1 --viz 1 

To place clothed body meshes, you first need to buy the Renderpeople assets or get the free models. Create a folder rp_clothed_meshes in POSA_dir and place all the clothed body .obj meshes in this folder. Then run this command:

python src/affordance.py --config cfg_files/contact_semantics.yaml --checkpoint_path $POSA_dir/trained_models/contact_semantics.pt --pkl_file_path $POSA_dir/POSA_rp_poses/rp_aaron_posed_001_0_0.pkl --scene_name MPH16 --render 1 --viz 1 --use_clothed_mesh 1

Testing on Your Own Poses

POSA has been tested on the AGORA dataset only. Nonetheless, you can try POSA with any SMPL-X poses you have. You just need a .pkl file with the SMPL-X body parameters and the gender. Your SMPL-X vertices must be brought to a canonical form similar to the POSA training data: the vertices should be centered at the pelvis joint, with the x axis pointing to the left, the y axis pointing backward, and the z axis pointing upward, as shown in the figure below, where the x, y, z axes are drawn in red, green, and blue respectively.

canonical_form

See the function pkl_to_canonical in data_utils.py for an example of how to do this transformation.
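
As a rough illustration of the idea (not the repository's implementation), the transformation amounts to subtracting the pelvis joint and applying a fixed rotation. The input frame assumed below (y up, body facing +z) is a common SMPL-X convention but is an assumption here; adapt the rotation to your own data.

    # Illustrative sketch: center SMPL-X vertices at the pelvis and rotate them so that
    # x points left, y points backward, and z points up. The assumed input frame is
    # y-up with the body facing +z; adjust the rotation if your data differs.
    import numpy as np

    def to_canonical(vertices, pelvis):
        """vertices: (N, 3) array of body vertices, pelvis: (3,) pelvis joint location."""
        centered = vertices - pelvis[None, :]
        R = np.array([[1.0, 0.0,  0.0],   # rotate +90 degrees about x: y -> z, z -> -y
                      [0.0, 0.0, -1.0],
                      [0.0, 1.0,  0.0]])
        return centered @ R.T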

Training

To retrain POSA from scratch run the following command

python src/train_posa.py --config cfg_files/contact_semantics.yaml

Visualize Ground Truth Data

You can also visualize the training data

python src/show_gt.py --config cfg_files/contact_semantics.yaml --train_data 1

Or test data

python src/show_gt.py --config cfg_files/contact_semantics.yaml --train_data 0

Note that the ground truth data has been downsampled to speed up training, as explained in the paper; see the training details in the appendices.

Citation

If you find this Model & Software useful in your research, we kindly ask you to cite:

@inproceedings{Hassan:CVPR:2021,
    title = {Populating {3D} Scenes by Learning Human-Scene Interaction},
    author = {Hassan, Mohamed and Ghosh, Partha and Tesch, Joachim and Tzionas, Dimitrios and Black, Michael J.},
    booktitle = {Proceedings {IEEE/CVF} Conf.~on Computer Vision and Pattern Recognition ({CVPR})},
    month = jun,
    month_numeric = {6},
    year = {2021}
}

If you use the extracted training data, scenes, or SDFs, then please cite:

@inproceedings{PROX:2019,
  title = {Resolving {3D} Human Pose Ambiguities with {3D} Scene Constraints},
  author = {Hassan, Mohamed and Choutas, Vasileios and Tzionas, Dimitrios and Black, Michael J.},
  booktitle = {International Conference on Computer Vision},
  month = oct,
  year = {2019},
  url = {https://prox.is.tue.mpg.de},
  month_numeric = {10}
}
@inproceedings{PSI:2019,
  title = {Generating 3D People in Scenes without People},
  author = {Zhang, Yan and Hassan, Mohamed and Neumann, Heiko and Black, Michael J. and Tang, Siyu},
  booktitle = {Computer Vision and Pattern Recognition (CVPR)},
  month = jun,
  year = {2020},
  url = {https://arxiv.org/abs/1912.02923},
  month_numeric = {6}
}

If you use the AGORA test poses, then please cite:

@inproceedings{Patel:CVPR:2021,
  title = {{AGORA}: Avatars in Geography Optimized for Regression Analysis},
  author = {Patel, Priyanka and Huang, Chun-Hao P. and Tesch, Joachim and Hoffmann, David T. and Tripathi, Shashank and Black, Michael J.},
  booktitle = {Proceedings IEEE/CVF Conf.~on Computer Vision and Pattern Recognition (CVPR)},
  month = jun,
  year = {2021},
  month_numeric = {6}
}

Contact

For commercial licensing (and all related questions for business applications), please contact [email protected].
