
Gait3D-Benchmark

This is the code for the paper "Jinkai Zheng, Xinchen Liu, Wu Liu, Lingxiao He, Chenggang Yan, Tao Mei: Gait Recognition in the Wild with Dense 3D Representations and A Benchmark. (CVPR 2022)". The official project page is here.

What's New

  • [Mar 2022] Another in-the-wild gait dataset, GREW, is supported.
  • [Mar 2022] Our Gait3D dataset and SMPLGait method are released.

Model Zoo

Gait3D

Input size: 128x88 (results in parentheses are for 64x44)

| Method | Rank@1 | Rank@5 | mAP | mINP | download |
| --- | --- | --- | --- | --- | --- |
| GaitSet (AAAI 2019) | 42.60 (36.70) | 63.10 (58.30) | 33.69 (30.01) | 19.69 (17.30) | model-128 (model-64) |
| GaitPart (CVPR 2020) | 29.90 (28.20) | 50.60 (47.60) | 23.34 (21.58) | 13.15 (12.36) | model-128 (model-64) |
| GLN (ECCV 2020) | 42.20 (31.40) | 64.50 (52.90) | 33.14 (24.74) | 19.56 (13.58) | model-128 (model-64) |
| GaitGL (ICCV 2021) | 23.50 (29.70) | 38.50 (48.50) | 16.40 (22.29) | 9.20 (13.26) | model-128 (model-64) |
| OpenGait Baseline* | 47.70 (42.90) | 67.20 (63.90) | 37.62 (35.19) | 22.24 (20.83) | model-128 (model-64) |
| SMPLGait (CVPR 2022) | 53.20 (46.30) | 71.00 (64.50) | 42.43 (37.16) | 25.97 (22.23) | model-128 (model-64) |

*Note that the OpenGait Baseline is equivalent to SMPLGait w/o 3D in our paper.

Cross Domain

Datasets in the Wild (GaitSet, 64x44)

| Source | Target | Rank@1 | Rank@5 | mAP |
| --- | --- | --- | --- | --- |
| GREW (official split) | Gait3D | 15.80 | 30.20 | 11.83 |
| GREW (our split) | Gait3D | 16.50 | 31.10 | 11.71 |
| Gait3D | GREW (official split) | 18.81 | 32.25 | ~ |
| Gait3D | GREW (our split) | 43.86 | 60.89 | 28.06 |

Requirements

  • pytorch >= 1.6
  • torchvision
  • pyyaml
  • tensorboard
  • opencv-python
  • tqdm
  • py7zr
  • tabulate
  • termcolor

Installation

You can modify the second-to-last command below to install PyTorch according to your CUDA version.

git clone https://github.com/Gait3D/Gait3D-Benchmark.git
cd Gait3D-Benchmark
conda create --name py37torch160 python=3.7
conda activate py37torch160
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.2 -c pytorch
pip install tqdm pyyaml tensorboard opencv-python py7zr tabulate termcolor
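
Optionally, you can sanity-check the environment from inside the activated conda env; the snippet below only uses standard PyTorch calls and is not part of the official setup:

import torch
import torchvision

print(torch.__version__)           # expected: 1.6.0
print(torchvision.__version__)     # expected: 0.7.0
print(torch.cuda.is_available())   # should be True if the CUDA toolkit matches your driver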

Data Preparation

Please download the Gait3D dataset by signing an agreement. We ask for your information only to make sure the dataset is used for non-commercial purposes. We will not give it to any third party or publish it publicly anywhere.

Data Pretreatment

Run the following commands to preprocess the Gait3D dataset.

python misc/pretreatment.py --input_path 'Gait3D/2D_Silhouettes' --output_path 'Gait3D-sils-64-44-pkl' --img_h 64 --img_w 44
python misc/pretreatment.py --input_path 'Gait3D/2D_Silhouettes' --output_path 'Gait3D-sils-128-88-pkl' --img_h 128 --img_w 88
python misc/pretreatment_smpl.py --input_path 'Gait3D/3D_SMPLs' --output_path 'Gait3D-smpls-pkl'
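
For intuition, each pretreatment script essentially resizes the frames of a sequence and packs them into a single pickle file. The snippet below is only a rough, hypothetical sketch of that packing step (pack_sequence is not part of the repo; the real misc/pretreatment.py also handles cropping/alignment and walks the whole dataset):

import os
import pickle

import cv2
import numpy as np

def pack_sequence(seq_dir, out_path, img_h=64, img_w=44):
    # Read every silhouette frame in seq_dir, resize it, and save the whole sequence as one array.
    frames = []
    for name in sorted(os.listdir(seq_dir)):
        img = cv2.imread(os.path.join(seq_dir, name), cv2.IMREAD_GRAYSCALE)
        frames.append(cv2.resize(img, (img_w, img_h)))  # cv2.resize takes (width, height)
    with open(out_path, 'wb') as f:
        pickle.dump(np.stack(frames), f)  # resulting shape: [num_frames, img_h, img_w]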

Data Structure

After the pretreatment, the data structure under the directory should look like this:

├── Gait3D-sils-64-44-pkl
│  ├── 0000
│     ├── camid0_videoid2
│        ├── seq0
│           └── seq0.pkl
├── Gait3D-sils-128-88-pkl
│  ├── 0000
│     ├── camid0_videoid2
│        ├── seq0
│           └── seq0.pkl
├── Gait3D-smpls-pkl
│  ├── 0000
│     ├── camid0_videoid2
│        ├── seq0
│           └── seq0.pkl
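
To check that the pretreatment succeeded, you can load one of the generated pickles and inspect its shape (the paths follow the example tree above; frame counts vary per sequence, and the 85-dimensional SMPL vector is assumed to be the ROMP output of 3 camera + 72 pose + 10 shape parameters):

import pickle

with open('Gait3D-sils-64-44-pkl/0000/camid0_videoid2/seq0/seq0.pkl', 'rb') as f:
    sils = pickle.load(f)
print(sils.shape)   # expected: (num_frames, 64, 44) silhouette frames

with open('Gait3D-smpls-pkl/0000/camid0_videoid2/seq0/seq0.pkl', 'rb') as f:
    smpls = pickle.load(f)
print(smpls.shape)  # expected: (num_frames, 85) SMPL parameters per frame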

Train

Run the following command:

sh train.sh

Test

Run the following command:

sh test.sh

Citation

Please cite this paper in your publications if it helps your research:

@inproceedings{zheng2022gait3d,
  title={Gait Recognition in the Wild with Dense 3D Representations and A Benchmark},
  author={Zheng, Jinkai and Liu, Xinchen and Liu, Wu and He, Lingxiao and Yan, Chenggang and Mei, Tao},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}

Acknowledgement

Here are some great resources we benefit from:

  • The codebase is based on OpenGait.
  • The 3D SMPL data is obtained by ROMP.
  • The 2D Silhouette data is obtained by HRNet-segmentation.
  • The 2D pose data is obtained by HRNet.
  • The ReID feature used to make Gait3D is obtained by FastReID.
Comments
  • lib/modeling/models/smplgait.py throwing error when training a new dataset

    Hi Jinkai,

    When I try to apply SMPLGait to another dataset, smplgait.py throws an error during training at `smpls = ipts[1][0]  # [n, s, d]`: `IndexError: list index out of range`. It is also interesting that I trained with 4 GPUs: 3 of them could detect the ipts[1][0] tensor with size 1, but the fourth one failed to do so. Could you tell me how I can solve this?

    opened by zhiyuann 7
  • I have a few questions about Gait3D-Benchmark Datasets

    Hi, I'm jjun. I read your paper with great interest.

    We don't currently live in China, so it is difficult to use the dataset on Baidu disk.

    If you don't mind, is there a way to download the dataset to another disk (e.g., Google Drive)?

    opened by jjunnii 6
  • Question about 3D SMPL skeleton topology diagram

    Your work promotes the application of gait recognition in real scenes. Could you provide a topology diagram of the SMPL 3D skeleton in Gait3D? The specific meaning of the 24 joint points is not stated in your data description document.

    opened by HL-HYX 4
  • ROMP SMPL transfer

    When I try to use ROMP to generate the 3D meshes, I found a version conflict with the ROMP version used by SMPLGait. Could you tell me which version of ROMP SMPLGait used? That way I could run SMPLGait on other ReID datasets.

    opened by zhiyuann 3
  • question about iteration and epoch

    Hi! The total number of iterations in your code is set to 180,000, and you report the total number of epochs as 1,200 in your paper. What is the relationship between iterations and epochs?

    opened by yan811 2
  • About data generation

    Hi! I'd like to know some details about the data generation in the NPZ files.

    In the npz file: 1. What is the order of "pose"? The SMPL pose parameter should be [24, 3]; how did you convert it to [72,]? Is the order [keypoint1_angle1, keypoint1_angle2, keypoint1_angle3, keypoint2_angle1, keypoint2_angle2, keypoint2_angle3, ...] or [keypoint1_angle1, keypoint2_angle1, ..., keypoint1_angle2, keypoint2_angle2, ..., keypoint1_angle3, keypoint2_angle3, ...]? (See the reshape sketch after this comments section.)

    2. How did you generate the pose in SMPL format, SPIN format, and OpenPose format? What is the order of the second dimension? Is the keypoint order the same as in the SMPL model?

    3. In the pkl file: for example, the data in './0000/camid0_videoid2/seq0/seq0.pkl' has dimension [48, 85]. What is the ordering along the first dimension? Is it in time order or shuffled?

    opened by yan811 2
  • GREW pretreatment `to_pickle` has size 0

    I'm trying to run the GREW pretreatment code, but it generates no GREW-pkl folder at the end of the process. I debugged it myself, checking whether the --dataset flag is set properly and the size of the to_pickle list before the pickle file is saved. The flag is set correctly, but the size of the list is always 0.

    I downloaded the GREW dataset from the link you sent me and made the GREW-rearranged folder using the code provided. I'll keep investigating what is causing this error, and if I find it I'll open a PR with a fix.

    opened by gosiqueira 1
  • About the pose data

    Could you give a detailed description of the pose data? This is the path of one frame's pose and the corresponding content of the txt file: Gait3D/2D_Poses/0000/camid9_videoid2/seq0/human_crop_f17279.txt '311,438,89.201164,62.87694,0.57074964,89.201164,54.322254,0.47146344,84.92382,62.87694,0.63443935,42.150383....' I have 3 questions. Q1: What does 'f17279' mean? Q2: What does the first number (e.g. 311) in the txt file mean? Q3: Which number ('f17279' or '311') should I use as the key when ordering the sequence? Thank you very much!

    opened by HiAleeYang 0
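
Regarding the pose-order question in "About data generation" above: the standard SMPL convention flattens the [24, 3] axis-angle pose joint by joint, as sketched below. Whether the Gait3D npz files follow exactly this layout is an assumption and should be confirmed against the dataset documentation:

import numpy as np

pose_flat = np.zeros(72, dtype=np.float32)  # placeholder for a 'pose' vector read from an npz file
pose = pose_flat.reshape(24, 3)             # row i holds the 3 axis-angle parameters of joint i
# i.e. the flat order would be [joint1_angle1, joint1_angle2, joint1_angle3, joint2_angle1, ...]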