3D Human Pose Machines with Self-supervised Learning

Overview

Keze Wang, Liang Lin, Chenhan Jiang, Chen Qian, and Pengxu Wei, “3D Human Pose Machines with Self-supervised Learning”. To appear in IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 2019.

This repository implements a 3D human pose machine that resolves 3D pose sequence generation from monocular frames, together with a concise self-supervised correction mechanism that enhances the model by retaining 3D geometric consistency. The main part is written in C++ and powered by the Caffe deep learning toolbox; the online refinement part is written in Python and powered by TensorFlow.

Results

We present results on the Human3.6M, KTH Football II, and MPII datasets.


License

This project is released for academic research use only.

Get Started

Clone the repo:

git clone https://github.com/chanyn/3Dpose_ssl.git

or download it directly from https://www.dropbox.com/s/qycpjinof2ishw9/3Dpose_ssl.tar.gz?dl=0 (includes the datasets and a pre-compiled Caffe built against CUDA 8.0).

Our code is organized as follows:

caffe-3dssl/: the customized Caffe that supports this project
models/: pretrained models and results
prototxt/: network architecture definitions
tensorflow/: code for online refinement
test/: scripts that run and save results split by action
tools/: Python and MATLAB utilities

Requirements

  1. An NVIDIA GPU with cuDNN is required for fast training and inference. CUDA 8.0 with cuDNN 5.1 has been tested; other versions should also work.
  2. The Caffe Python wrapper (pycaffe) is required.
  3. TensorFlow 1.1.0
  4. Python 2.7.13
  5. MATLAB
  6. OpenCV-Python

Installation

  1. Build 3Dssl Caffe

```shell
cd $ROOT/caffe-3dssl
# Follow the Caffe installation instructions here:
#   http://caffe.berkeleyvision.org/installation.html

# If you're experienced with Caffe and have all of the requirements installed
# and your Makefile.config in place, then simply do:
make all -j 8
make pycaffe
```

  2. Install Tensorflow

Datasets

  • Human3.6m

  We convert the Human3.6M annotations to 16 keypoints ('RFoot', 'RKnee', 'RHip', 'LHip', 'LKnee', 'LFoot', 'Hip', 'Spine', 'Thorax', 'Head', 'RWrist', 'RElbow', 'RShoulder', 'LShoulder', 'LElbow', 'LWrist') to keep them consistent with MPII.

  We provide the mean (normalization) file and the Protocol #I and Protocol #III split lists of Human3.6M. Follow the Human3.6M website to download the videos and the official API. We split each video every 5 frames; you can directly download the processed square data from this link. Each line of 16skel_train/test_* has the format [img_path] [P1_2d_x, P1_2d_y, P2_2d_x, P2_2d_y, ..., P1_3d_x, P1_3d_y, P1_3d_z, P2_3d_x, P2_3d_y, P2_3d_z, ...] clip, where clip = 0 marks the start of a new sequence and resets the LSTM state (a parsing sketch is given at the end of this section).

```shell
# files construction
h36m
|_gt                                      # 2d and 3d annotations split by action
|_hg2dh36m                                # 2d estimations predicted by *Hourglass*; 'square' denotes prediction on the square image
|_ours_2d                                 # 2d predictions from our model
|_ours_3d                                 # 3d coarse predictions of *Model Extension: mask3d*
|_16skel_train_2d3d_clip.txt              # train list of *Protocol I*
|_16skel_test_2d3d_clip.txt
|_16skel_train_2d3d_p3_clip.txt           # train list of *Protocol III*
|_16skel_test_2d3d_p3_clip.txt
|_16point_mean_limb_scaled_max_min.csv    # 16 points normalized by (x - min) / (max - min)
```

  After setting up the Human3.6M dataset as described above and downloading the training/testing lists, update the "root_folder" paths in CAFFE_ROOT/examples/.../*.prototxt to point to your image and annotation directories.

  • MPII

  We crop and square the single person in all images and update the 2D annotations in train_h36m.txt (the points are re-sorted to match the Human3.6M joint order).

```shell
mkdir data/MPII
cd data/MPII
wget -v https://drive.google.com/open?id=16gQJvf4wHLEconStLOh5Y7EzcnBUhoM-
tar -xzvf MPII_square.tar.gz
rm -f MPII_square.tar.gz
```
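For clarity, here is a minimal sketch of how one line of the 16skel_train/test_* lists described above could be parsed. It assumes whitespace-separated values and the 16-joint layout (32 2D values, then 48 3D values, then the clip flag); this helper is our own illustration, not code shipped with the repository.

```python
# Hypothetical parser for one annotation line (illustrative, not part of the repo).
import numpy as np

JOINTS = ['RFoot', 'RKnee', 'RHip', 'LHip', 'LKnee', 'LFoot', 'Hip', 'Spine',
          'Thorax', 'Head', 'RWrist', 'RElbow', 'RShoulder', 'LShoulder',
          'LElbow', 'LWrist']  # 16 joints, ordered to be consistent with MPII

def parse_line(line):
    fields = line.split()
    img_path = fields[0]
    values = np.asarray(fields[1:], dtype=np.float32)
    pose2d = values[:32].reshape(16, 2)          # x, y per joint
    pose3d = values[32:32 + 48].reshape(16, 3)   # x, y, z per joint
    clip = int(values[-1])                       # clip == 0 resets the LSTM state
    return img_path, pose2d, pose3d, clip
```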

 

Training

Offline Phase

Our model consists of two cascaded modules, so the training phase can be divided into the following steps:

```shell
cd CAFFE_ROOT
```
  1. Pre-train the 2D pose sub-network with MPII. You can follow CPM, Hourglass, or another 2D pose estimation method. We provide a pretrained CPM caffemodel; please put it into CAFFE_ROOT/models/.

  2. Train the 2D-to-3D pose transformer module with Human3.6M while keeping the parameters of the 2D pose sub-network fixed. The corresponding prototxt file is examples/2D_to_3D/bilstm.prototxt.

```sh
sh examples/2D_to_3D/train.sh
```

  3. Train the 3D-to-2D pose projector module with the weights of the modules above fixed. An in-the-wild 2D pose dataset is needed to help training (we choose MPII).

```sh
sh examples/3D_to_2D/train.sh
```

  4. Fine-tune the whole model jointly. We provide the trained model and the coarse predictions for Protocol I and Protocol III.

```sh
sh examples/finetune_whole/train.sh
```

  5. Model extension: add a random mask to relieve model bias. We provide the corresponding model files in examples/mask3d.

```sh
sh examples/mask3d/train.sh
```

Model Inference

The 3D-to-2D projector module is initialized from the well-trained model and is then updated at inference time by minimizing the difference between the predicted 2D pose and the projected 2D pose.
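As a rough illustration of this objective (our own sketch, not the repository's implementation), the online refinement can be viewed as minimizing an L2 reprojection loss; `project` below stands for a hypothetical 3D-to-2D projection function.

```python
# Self-supervised correction objective, sketched with NumPy (illustrative only).
import numpy as np

def reprojection_loss(pose3d, pose2d_pred, project):
    """L2 distance between the projected 3D pose and the predicted 2D pose.

    pose3d:      (16, 3) coarse 3D joints
    pose2d_pred: (16, 2) joints from the 2D pose sub-network
    project:     hypothetical callable mapping a (16, 3) pose to (16, 2)
    """
    return float(np.mean(np.sum((project(pose3d) - pose2d_pred) ** 2, axis=1)))
```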

```shell
# Step1: Download the trained model
cd PROJECT_ROOT
mkdir models
cd models
wget -v https://drive.google.com/open?id=1dMuPuD_JdHuMIMapwE2DwgJ2IGK04xhQ
unzip model_extension_mask3d.zip
rm -r model_extension_mask3d.zip
cd ../

# Step2: save the coarse 3D prediction
cd test
# change 'data_root' in test_human16.sh
# change 'root_folder' in template_16_merge.prototxt
# test_human16.sh [$1 deploy.prototxt] [$2 trained model] [$3 save dir] [$4 batchsize]
sh test_human16.sh . ../models/model_extension_mask3d/mask3d_iter_400000.caffemodel mask3d 5

# Step3: online refinement of the 3D pose prediction
# protocol: 1/3, default is 1
# pose2d: ours/hourglass/gt, default is ours
# coarse_3d: results saved in Step2
python pred_v2.py --trained_model ../models/model_extension_mask3d/mask3d-400000.pkl --protocol 1 --data_dir /data/h36m/ --coarse_3d ../test/mask3d --save srr_results --pose2d hourglass
```

 

```shell
# Maybe you want to predict the 2d pose yourself.
# The model we use to predict the 2d pose is similar to our 3d prediction model without the ssl module.
# Alternatively, you can use Hourglass (https://github.com/princeton-vl/pose-hg-demo) to predict the 2d pose.

# Step1.1: Download the trained merge model
cd PROJECT_ROOT
mkdir models && cd models
wget -v https://drive.google.com/open?id=19kTyttzUnm_1_7HEwoNKCXPP2QVo_zcK
unzip our2d.zip
rm -r our2d.zip
# move the 2d prototxt to PROJECT_ROOT/test/
mv our2d/2d ../test/
cd ../

# Step1.2: save the 2D prediction
cd test
# change 'data_root' in test_human16.sh
# change 'root_folder' in 2d/template_16_merge.prototxt
# test_human16.sh [$1 deploy.prototxt] [$2 trained model] [$3 save dir] [$4 batchsize]
sh test_human16.sh 2d/ ../models/our2d/2d_iter_800000.caffemodel our2d 5
# move the predicted 2d pose into the data dir, or change data_dir in tensorflow/pred_v2.py
mv our2d /data/h36m/ours_2d/bilstm2d-p1-800000

# Step2 is the same as above

# Step3: online refinement of the 3D pose prediction
# protocol: 1/3, default is 1
# pose2d: ours/hourglass/gt, default is ours
# coarse_3d: results saved in Step2
python pred_v2.py --trained_model ../models/model_extension_mask3d/mask3d-400000.pkl --protocol 1 --data_dir /data/h36m/ --coarse_3d ../test/mask3d --save srr_results --pose2d ours
```

 

  • Inference on your own data

  The only difference is that you should convert the caffemodel of the 3D-to-2D projector module into a .pkl file. We provide gen_refinepkl.py in tools/ for this.

```sh
# Follow the above Step1~2 to produce the coarse 3d prediction and the 2d pose.
# convert the caffemodel of the SRR module to a python .pkl file
python tools/gen_refinepkl.py CAFFE_ROOT CAFFEMODEL_DIR --pkl_dir model.pkl

# online refinement of the 3D pose prediction
python pred_v2.py --trained_model model.pkl
```
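The conversion itself roughly amounts to dumping the layer weights of the caffemodel into a pickle file. The snippet below is a hedged sketch of that idea using pycaffe; the actual tools/gen_refinepkl.py may select layers and name its keys differently.

```python
# Illustrative caffemodel -> .pkl dump (not the repository's gen_refinepkl.py).
import pickle
import caffe

def dump_weights(prototxt, caffemodel, pkl_path):
    # load the trained network and copy every layer's parameter blobs
    net = caffe.Net(prototxt, caffemodel, caffe.TEST)
    weights = {name: [blob.data.copy() for blob in blobs]
               for name, blobs in net.params.items()}
    with open(pkl_path, 'wb') as f:
        pickle.dump(weights, f)

# Example (paths are placeholders):
# dump_weights('deploy.prototxt', 'mask3d_iter_400000.caffemodel', 'model.pkl')
```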

 

  • Evaluation

```shell
# Print MPJPE (mean per-joint position error)
run tools/eval_h36m.m

# Visualization of the 2d pose / 3d gt pose / 3d coarse pose / 3d refined pose
# Please change data_root in visualization.m before running
run visualization.m
```
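For reference, MPJPE is the mean Euclidean distance between predicted and ground-truth 3D joints. A minimal NumPy version is sketched below; the MATLAB script above may additionally apply protocol-specific alignment, so the numbers can differ.

```python
# Minimal MPJPE computation (illustrative; eval_h36m.m is the reference script).
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error.

    pred, gt: arrays of shape (N, 16, 3) -- N poses, 16 joints, xyz coordinates.
    """
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))
```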

Citation

```
@article{wang20193d,
  title={3D Human Pose Machines with Self-supervised Learning},
  author={Wang, Keze and Lin, Liang and Jiang, Chenhan and Qian, Chen and Wei, Pengxu},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2019},
  publisher={IEEE}
}
```