3D Human Pose Machines with Self-supervised Learning

Overview

Keze Wang, Liang Lin, Chenhan Jiang, Chen Qian, and Pengxu Wei, “3D Human Pose Machines with Self-supervised Learning”. To appear in IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 2019.

This repository implements a 3D human pose machine that resolves 3D pose sequence generation from monocular frames, and includes a concise self-supervised correction mechanism that enhances the model by retaining 3D geometric consistency. The main part is written in C++ and powered by the Caffe deep learning toolbox; the rest is written in Python and powered by TensorFlow.

Results

We present results on the Human3.6M, KTH Football II, and MPII datasets.

License

This project is released for academic research use only.

Get Started

Clone the repo:

git clone https://github.com/chanyn/3Dpose_ssl.git

or directly download it from https://www.dropbox.com/s/qycpjinof2ishw9/3Dpose_ssl.tar.gz?dl=0 (includes the datasets and a pre-compiled Caffe built against CUDA 8.0).

Our code is organized as follows:

caffe-3dssl/: our customized Caffe
models/: pretrained models and results
prototxt/: network architecture definitions
tensorflow/: code for online refinement
test/: scripts that produce results split by action
tools/: Python and MATLAB utilities

Requirements

  1. An NVIDIA GPU with cuDNN is required for reasonable speed. CUDA 8.0 with cuDNN 5.1 has been tested; other versions should also work.
  2. The Caffe Python wrapper is required.
  3. TensorFlow 1.1.0
  4. Python 2.7.13
  5. MATLAB
  6. opencv-python

Installation

  1. Build 3Dssl Caffe

   ```shell
   cd $ROOT/caffe-3dssl
   # Follow the Caffe installation instructions here:
   #   http://caffe.berkeleyvision.org/installation.html

   # If you're experienced with Caffe and have all of the requirements installed
   # and your Makefile.config in place, then simply do:
   make all -j 8
   make pycaffe
   ```

  2. Install TensorFlow.

Datasets

  • Human3.6M

  We convert the Human3.6M annotations to 16 joints ('RFoot', 'RKnee', 'RHip', 'LHip', 'LKnee', 'LFoot', 'Hip', 'Spine', 'Thorax', 'Head', 'RWrist', 'RElbow', 'RShoulder', 'LShoulder', 'LElbow', 'LWrist'), consistent with MPII.

  We provide the joint mean file and the Protocol #I and Protocol #III split lists for Human3.6M. Follow the Human3.6M website to download the videos and API. We sample the videos every 5 frames; you can directly download the processed square data from this link. Each line of 16skel_train/test_* has the format [img_path] [P1_2dx, P1_2dy, P2_2dx, P2_2dy, ..., P1_3dx, P1_3dy, P1_3dz, P2_3dx, P2_3dy, P2_3dz, ...] clip, where clip = 0 marks a sequence boundary and resets the LSTM state (a minimal parsing sketch appears at the end of this subsection).

  ```shell
  # file layout
  h36m
  |_ gt                                    # 2d and 3d annotations, split by action
  |_ hg2dh36m                              # 2d estimates predicted by *Hourglass*; 'square' denotes predictions on square images
  |_ ours_2d                               # 2d predictions from our model
  |_ ours_3d                               # coarse 3d predictions of *Model Extension: mask3d*
  |_ 16skel_train_2d3d_clip.txt            # train list of *Protocol I*
  |_ 16skel_test_2d3d_clip.txt
  |_ 16skel_train_2d3d_p3_clip.txt         # train list of *Protocol III*
  |_ 16skel_test_2d3d_p3_clip.txt
  |_ 16point_mean_limb_scaled_max_min.csv  # 16 points normalized by (x - min) / (max - min)
  ```

  After setting up the Human3.6M dataset as described above and downloading the training/testing lists, update the "root_folder" paths in CAFFE_ROOT/examples/.../*.prototxt to point to your image and annotation directories.
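  As a concrete illustration, here is a minimal Python sketch (not code from this repo) of parsing one line of the 16skel_* lists, assuming exactly the column layout described above:

  ```python
  import numpy as np

  # 16-joint order used by this repo (MPII-compatible), as listed above.
  JOINTS = ['RFoot', 'RKnee', 'RHip', 'LHip', 'LKnee', 'LFoot', 'Hip', 'Spine',
            'Thorax', 'Head', 'RWrist', 'RElbow', 'RShoulder', 'LShoulder',
            'LElbow', 'LWrist']

  def parse_line(line):
      """Parse one list line: [img_path] [16x2 2d coords] [16x3 3d coords] clip."""
      fields = line.split()
      img_path = fields[0]
      values = np.asarray(fields[1:], dtype=np.float64)
      pose2d = values[:32].reshape(16, 2)    # (x, y) per joint
      pose3d = values[32:80].reshape(16, 3)  # (x, y, z) per joint
      clip = int(values[80])                 # clip == 0 resets the LSTM state
      return img_path, pose2d, pose3d, clip
  ```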

  • MPII

  We crop a square single-person region out of every image and update the 2D annotations in train_h36m.txt (the points are re-sorted to match the Human3.6M joint order); a cropping sketch follows the commands below.

  ```shell
  mkdir data/MPII
  cd data/MPII
  wget -v https://drive.google.com/open?id=16gQJvf4wHLEconStLOh5Y7EzcnBUhoM-
  tar -xzvf MPII_square.tar.gz
  rm -f MPII_square.tar.gz
  ```
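  For illustration only, a minimal Python sketch of square single-person cropping; the padding factor and border handling here are assumptions, not the repo's actual preprocessing:

  ```python
  import numpy as np

  def square_crop(image, keypoints, bbox, pad=0.1):
      """Crop a padded square around a person bbox and shift 2d keypoints.

      image:     HxWx3 numpy array
      keypoints: (16, 2) array of (x, y) joint coordinates
      bbox:      (x0, y0, x1, y1) person bounding box
      """
      x0, y0, x1, y1 = bbox
      cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
      side = (1.0 + pad) * max(x1 - x0, y1 - y0)
      # Clamp the square to the image borders.
      left = int(max(cx - side / 2.0, 0))
      top = int(max(cy - side / 2.0, 0))
      right = int(min(left + side, image.shape[1]))
      bottom = int(min(top + side, image.shape[0]))
      crop = image[top:bottom, left:right]
      shifted = keypoints - np.array([left, top], dtype=np.float64)
      return crop, shifted
  ```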

 

Training

Offline Phase

Our model consists of two cascaded modules, so the training phase divides into the following steps:

```shell
cd CAFFE_ROOT
```
  1. Pre-train the 2D pose sub-network on MPII. You can follow CPM, Hourglass, or any other 2D pose estimation method. We provide a pretrained CPM caffemodel; please put it into CAFFE_ROOT/models/.

  2. Train the 2D-to-3D pose transformer module on Human3.6M while keeping the parameters of the 2D pose sub-network fixed. The corresponding prototxt file is examples/2D_to_3D/bilstm.prototxt.

   ```sh
   sh examples/2D_to_3D/train.sh
   ```

  3. To train the 3D-to-2D pose projector module, we fix the weights of the modules above. Training also requires an in-the-wild 2D pose dataset (we choose MPII).

   ```sh
   sh examples/3D_to_2D/train.sh
   ```

  4. Fine-tune the whole model jointly. We provide the trained model and the coarse predictions for Protocol I and Protocol III.

   ```sh
   sh examples/finetune_whole/train.sh
   ```

  5. Model extension: add a random mask to alleviate model bias (see the sketch after the command below). We provide the corresponding model files in examples/mask3d.

   ```sh
   sh examples/mask3d/train.sh
   ```
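To illustrate the idea behind the random mask, here is a minimal numpy sketch; the mask probability and the zeroing scheme are assumptions, and the actual implementation lives in examples/mask3d:

```python
import numpy as np

def random_mask_3d(pose3d, mask_prob=0.2, rng=np.random):
    """Randomly zero out whole joints of a (16, 3) 3d pose.

    Hiding random joints during training discourages the model from
    over-relying on any single joint, which alleviates model bias.
    """
    keep = rng.rand(pose3d.shape[0]) >= mask_prob  # True = joint is kept
    return pose3d * keep[:, None]
```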

Model Inference

The 3D-to-2D projector module is initialized from the well-trained model, and its weights are then updated online by minimizing the difference between the predicted 2D pose and the projected 2D pose.
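To make this correction mechanism concrete, here is a minimal numpy sketch under strong simplifying assumptions (a fixed orthographic projection and plain gradient descent; the actual module learns the projection and is refined via the TensorFlow code in tensorflow/):

```python
import numpy as np

def refine_pose(pose3d, pose2d, lr=0.01, steps=100, reg=1e-3):
    """Refine a coarse (16, 3) 3d pose so its projection matches the 2d pose.

    Minimizes ||project(pose3d + delta) - pose2d||^2 + reg * ||delta||^2,
    with project() taken to be orthographic (keep x and y, drop z).
    """
    delta = np.zeros_like(pose3d)
    for _ in range(steps):
        residual = (pose3d + delta)[:, :2] - pose2d  # 2d reprojection error
        grad = np.zeros_like(delta)
        grad[:, :2] = 2.0 * residual                 # d(loss)/d(delta_xy)
        grad += 2.0 * reg * delta                    # regularizer gradient
        delta -= lr * grad
    return pose3d + delta
```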

  ```shell
  # Step 1: download the trained model
  cd PROJECT_ROOT
  mkdir models
  cd models
  wget -v https://drive.google.com/open?id=1dMuPuD_JdHuMIMapwE2DwgJ2IGK04xhQ
  unzip model_extension_mask3d.zip
  rm -r model_extension_mask3d.zip
  cd ../

  # Step 2: save the coarse 3D predictions
  cd test
  # change 'data_root' in test_human16.sh
  # change 'root_folder' in template_16_merge.prototxt
  # test_human16.sh [$1 deploy.prototxt] [$2 trained model] [$3 save dir] [$4 batchsize]
  sh test_human16.sh . ../models/model_extension_mask3d/mask3d_iter_400000.caffemodel mask3d 5

  # Step 3: online refinement of the 3D pose predictions
  # protocol: 1/3, default is 1
  # pose2d: ours/hourglass/gt, default is ours
  # coarse_3d: results saved in Step 2
  python pred_v2.py --trained_model ../models/model_extension_mask3d/mask3d-400000.pkl --protocol 1 --data_dir /data/h36m/ --coarse_3d ../test/mask3d --save srr_results --pose2d hourglass
  ```

 

  ```shell
  # Maybe you want to predict the 2d pose yourself.
  # The model we use to predict the 2d pose is similar to our 3d prediction model without the ssl module.
  # Alternatively, you can use Hourglass (https://github.com/princeton-vl/pose-hg-demo) to predict the 2d pose.

  # Step 1.1: download the trained merge model
  cd PROJECT_ROOT
  mkdir models && cd models
  wget -v https://drive.google.com/open?id=19kTyttzUnm_1_7HEwoNKCXPP2QVo_zcK
  unzip our2d.zip
  rm -r our2d.zip
  # move the 2d prototxt to PROJECT_ROOT/test/
  mv our2d/2d ../test/
  cd ../

  # Step 1.2: save the 2D predictions
  cd test
  # change 'data_root' in test_human16.sh
  # change 'root_folder' in 2d/template_16_merge.prototxt
  # test_human16.sh [$1 deploy.prototxt] [$2 trained model] [$3 save dir] [$4 batchsize]
  sh test_human16.sh 2d/ ../models/our2d/2d_iter_800000.caffemodel our2d 5
  # replace the predicted 2d pose in the data dir, or change data_dir in tensorflow/pred_v2.py
  mv our2d /data/h36m/ours_2d/bilstm2d-p1-800000

  # Step 2 is the same as above.

  # Step 3: online refinement of the 3D pose predictions
  # protocol: 1/3, default is 1
  # pose2d: ours/hourglass/gt, default is ours
  # coarse_3d: results saved in Step 2
  python pred_v2.py --trained_model ../models/model_extension_mask3d/mask3d-400000.pkl --protocol 1 --data_dir /data/h36m/ --coarse_3d ../test/mask3d --save srr_results --pose2d ours
  ```

 

  • Inference on your own data

  The only difference is that you must convert the caffemodel of the 3D-to-2D projector module to a .pkl file. We provide gen_refinepkl.py in tools/ for this (a sketch of the conversion follows the commands below).

  ```sh
  # Follow Steps 1-2 above to produce the coarse 3d predictions and the 2d pose.
  # Convert the caffemodel of the SRR module to a python .pkl file.
  python tools/gen_refinepkl.py CAFFE_ROOT CAFFEMODEL_DIR --pkl_dir model.pkl

  # Online refinement of the 3D pose predictions.
  python pred_v2.py --trained_model model.pkl
  ```
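  For reference, such a conversion could look like the following sketch using the Caffe Python wrapper and pickle; the layer selection and dictionary layout of the real gen_refinepkl.py may differ:

  ```python
  import pickle
  import caffe

  def caffemodel_to_pkl(prototxt, caffemodel, pkl_path):
      """Dump every layer's weight/bias blobs from a caffemodel into a .pkl."""
      net = caffe.Net(prototxt, caffemodel, caffe.TEST)
      params = {name: [blob.data.copy() for blob in blobs]
                for name, blobs in net.params.items()}
      with open(pkl_path, 'wb') as f:
          pickle.dump(params, f)
  ```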

 

  • Evaluation

  ```shell
  # Print MPJPE.
  run tools/eval_h36m.m

  # Visualization of the 2d pose / 3d gt pose / 3d coarse pose / 3d refined pose.
  # Please change data_root in visualization.m before running.
  run visualization.m
  ```
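  The reported metric is MPJPE (Mean Per Joint Position Error): the Euclidean distance between predicted and ground-truth joints, averaged over joints and frames. A minimal numpy equivalent of the metric (alignment details of eval_h36m.m aside):

  ```python
  import numpy as np

  def mpjpe(pred, gt):
      """MPJPE over (N, 16, 3) pose arrays, in the dataset's units
      (millimetres for Human3.6M)."""
      return np.mean(np.linalg.norm(pred - gt, axis=-1))
  ```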

Citation

@article{wang20193d,
  title={3D Human Pose Machines with Self-supervised Learning},
  author={Wang, Keze and Lin, Liang and Jiang, Chenhan and Qian, Chen and Wei, Pengxu},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2019},
  publisher={IEEE}
}