CVPR 2021 - Official code repository for the paper: On Self-Contact and Human Pose.

Overview

SMPLify-XMC

This repo is part of our project: On Self-Contact and Human Pose.
[Project Page] [Paper] [MPI Project Page]

Teaser SMPLify-XMC

License

Software Copyright License for non-commercial scientific research purposes. Please read carefully the following terms and conditions and any accompanying documentation before you download and/or use the TUCH data and software, (the "Data & Software"), including 3D meshes, images, videos, textures, software, scripts, and animations. By downloading and/or using the Data & Software (including downloading, cloning, installing, and any other use of the corresponding github repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Data & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.

Description and Demo

SMPLify-XMC adapts SMPLify-X to fit the SMPL-X model to Mimic The Pose (MTP) data. To run SMPLify-XMC you need:

  • an image of a person mimicking a presented pose
  • the presented pose parameters
  • the person's gender, height and weight
  • the OpenPose keypoints

The code has been tested with Python 3.6.9, CUDA 10.1, CuDNN 7.5 and PyTorch 1.8.1 on Ubuntu 18.04.

Installation

1) Clone this repo

git clone git@github.com:muelea/smplify-xmc.git
cd smplify-xmc

2) Download body model

Download the SMPL-X body model from https://smpl-x.is.tue.mpg.de and save it in MODEL_FOLDER. You can replace model_folder: MODEL_FOLDER in the config file configs/fit_smplx_singleview.yaml or use an environment variable.
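For example, you can export the path once and reuse it in the commands below (the path is a placeholder for wherever you saved the download):

export MODEL_FOLDER=/path/to/smplx/models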

3) Download essentials

Download the essentials from here and unpack them to ESSENTIALS_DIR. Then create a symlink between the essentials and this repo:

ln -s $ESSENTIALS_DIR/smplify-xmc-essentials data/essentials

4) Create python virtual environment

python3 -m venv $YOUR_VENV_DIR/smplify-xmc
source $YOUR_VENV_DIR/smplify-xmc/bin/activate

5) Install requirements

pip install -r requirements.txt

6) Get dependencies

Clone the selfcontact repo, e.g. to YOUR_PYTHON_PACKAGE_DIR, and install it with pip (make sure your venv is activated). Afterwards you can import the selfcontact functions from anywhere on your system.

cd $YOUR_PYTHON_PACKAGE_DIR
git clone git@github.com:muelea/selfcontact.git
cd selfcontact
rm -r .git
pip install .
cd ..
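To check that the package is importable from your virtual environment (assuming it installs under the module name selfcontact), you can run:

python -c "import selfcontact"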

Demo using our example data

You can find our example dataset in this repo under data/example_input. The following command will automatically save parameters, mesh, and image under output_dir:

python main_singleview.py --config configs/fit_smplx_singleview.yaml \
--dataset mtp_demo \
--input_base_dir data/example_input/singleview/subject1 \
--input_dir_poses data/example_input/presented_poses \
--output_dir data/example_output/singleview/subject1 \
--model_folder $MODEL_FOLDER

Process the MTP dataset

Download MTP data from the TUCH website: https://tuch.is.tue.mpg.de and save the data in DS_DIR. You should now see a folder named $DS_DIR/mtp.

Read the MTP data:

python lib/dataextra/preprocess_mtp_mturk_dataset.py --ds_dir=$DS_DIR/mtp

Process the first item:

python main_singleview.py --config configs/fit_smplx_singleview_mtp_dataset.yaml \
--db_file data/dbs/mtp_mturk.npz \
--output_dir data/example_output/mtp/ \
--model_folder=$MODEL_FOLDER \
--cluster_bs=1 \
--ds_start_idx=0

Process your own data

Follow the structure of the example data in data/example_input. Create a folder PP_FOLDER for the presented poses:

PP_FOLDER
  ----pose_name1.pkl
  ----pose_name2.pkl

Each pickle file should contain a dictionary with the pose parameters and, optionally, the vertices. If you include the vertices ('v'), the vertices in contact will be computed automatically.

data = {
  'body_pose': ..,
  'right_hand_pose': ..,
  'left_hand_pose': ..,
  'global_orient': ..,
  'v': ..  # vertices
}
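For illustration, here is a minimal Python sketch that writes such a pickle file. The parameter shapes and dtypes are assumptions based on the SMPL-X convention, so check them against the example poses shipped in data/example_input/presented_poses:

import pickle
import numpy as np

# Placeholder SMPL-X parameters in axis-angle format. The shapes follow the
# SMPL-X convention (21 body joints, 15 joints per hand, 10475 vertices),
# but the exact shapes and dtypes expected by the fitting code are
# assumptions: compare with the provided example poses.
data = {
    'body_pose': np.zeros(63, dtype=np.float32),        # 21 joints x 3
    'right_hand_pose': np.zeros(45, dtype=np.float32),  # 15 joints x 3
    'left_hand_pose': np.zeros(45, dtype=np.float32),   # 15 joints x 3
    'global_orient': np.zeros(3, dtype=np.float32),     # root orientation
    'v': np.zeros((10475, 3), dtype=np.float32),        # vertices, optional
}

# Save under the pose name into PP_FOLDER (the path is a placeholder).
with open('pose_name1.pkl', 'wb') as f:
    pickle.dump(data, f)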

Then create a folder MI_FOLDER for the mimicked images, following the structure below. Compute the keypoints for each image with OpenPose (see the example command after the structure), and provide a meta file with the gender, height, and weight of the subject mimicking the pose.

MI_FOLDER
  ----subject_name1
    ----images
      ----pose_name1.png
      ----pose_name2.png
    ----keypoints
      ----pose_name1.json
      ----pose_name2.json
    ----meta.yaml
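The keypoint JSON files can be produced with the OpenPose demo; a typical call looks like this (the paths are placeholders, and the hand and face flags are included because SMPL-X fitting uses hand and face keypoints):

./build/examples/openpose/openpose.bin --image_dir $MI_FOLDER/subject_name1/images \
--write_json $MI_FOLDER/subject_name1/keypoints --hand --face --display 0 --render_pose 0

A minimal meta.yaml could look like the following; the exact key names and units are assumptions, so compare with the meta files in data/example_input:

gender: male
height: 1.80  # assumed to be in meters
weight: 75.0  # assumed to be in kilograms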

Finally, run the fitting code:

python main_singleview.py --config configs/fit_smplx_singleview.yaml \
--input_base_dir $MI_FOLDER/subject_name1 \
--input_dir_poses $PP_FOLDER \
--output_dir data/example_output/subject_name1 \
--model_folder $MODEL_FOLDER

Citation

@inproceedings{Mueller:CVPR:2021,
  title = {On Self-Contact and Human Pose},
  author = {M{\"u}ller, Lea and Osman, Ahmed A. A. and Tang, Siyu and Huang, Chun-Hao P. and Black, Michael J.},
  booktitle = {Proceedings IEEE/CVF Conf.~on Computer Vision and Pattern Recognition (CVPR)},
  month = jun,
  year = {2021},
  doi = {},
  month_numeric = {6}
}

Acknowledgement

We thank Vassilis Choutas and Georgios Pavlakos for publishing the SMPLify-X code: https://github.com/vchoutas/smplify-x, which allowed us to build our code on top of it and to reuse important features such as the priors and the optimization. Special thanks to Vassilis Choutas for his implementation of the generalized winding numbers and the measurements code. We also thank our data capture and admin team for their help with the extensive data collection on Mechanical Turk and in the Capture Hall. Many thanks to all subjects who contributed to this dataset in the scanner and on the Internet. Thanks to all PS members who proofread the script, and to the reviewers, who helped us improve the paper during the rebuttal. Lea Mueller and Ahmed A. A. Osman thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting them. We thank the wonderful PS department for their questions and support.

Contact

For questions, please contact tuch@tue.mpg.de.

For commercial licensing (and all related questions for business applications), please contact ps-licensing@tue.mpg.de.
