A flexible and extensible framework for gait recognition.

Overview

Note: This code is for academic purposes only; it may not be used for anything that might be considered commercial use.

OpenGait

OpenGait is a flexible and extensible gait recognition project provided by the Shiqi Yu Group and supported in part by WATRIX.AI. Only the pre-beta version has been released so far; more documentation and reproduced methods will be offered as soon as possible.

Highlighted features:

  • Multiple Models Support: We reproduce several SOTA methods and reach the same or even better performance.
  • DDP Support: The officially recommended Distributed Data Parallel (DDP) mode is used during both the training and testing phases.
  • AMP Support: Automatic Mixed Precision (AMP) is available as an option (see the toy example after this list).
  • Nice log: We use TensorBoard and logging to record everything in a clean, readable format.
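
For context, AMP in PyTorch follows the autocast/GradScaler pattern sketched below. This is a generic toy training step, not OpenGait's actual trainer; the model, data, and sizes are made up for illustration (the 64x44 input only mirrors the Model Zoo configs).

    import torch
    import torch.nn as nn
    from torch.cuda.amp import autocast, GradScaler

    # Toy AMP training step (plain PyTorch, not OpenGait code). In OpenGait the
    # model would additionally be wrapped in DistributedDataParallel after
    # torch.distributed.init_process_group(), which is what DDP support refers to.
    model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                          nn.Flatten(), nn.Linear(8, 10)).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    scaler = GradScaler()                        # scales the loss to avoid fp16 underflow

    silhouettes = torch.randn(4, 1, 64, 44, device="cuda")   # dummy 64x44 silhouettes
    labels = torch.randint(0, 10, (4,), device="cuda")

    optimizer.zero_grad()
    with autocast():                             # mixed-precision forward pass
        loss = nn.functional.cross_entropy(model(silhouettes), labels)
    scaler.scale(loss).backward()
    scaler.step(optimizer)                       # unscales gradients, then steps
    scaler.update()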

Model Zoo

Model | NM | BG | CL | Configuration | Input Size | Inference Time | Model Size
Baseline | 96.3 | 92.2 | 77.6 | baseline.yaml | 64x44 | 12s | 3.78M
GaitSet (AAAI 2019) | 95.8 (95.0) | 90.0 (87.2) | 75.4 (70.4) | gaitset.yaml | 64x44 | 11s | 2.59M
GaitPart (CVPR 2020) | 96.1 (96.2) | 90.7 (91.5) | 78.7 (78.7) | gaitpart.yaml | 64x44 | 22s | 1.20M
GLN* (ECCV 2020) | 96.1 (95.6) | 92.5 (92.0) | 80.4 (77.2) | gln_phase1.yaml, gln_phase2.yaml | 128x88 | 14s | 9.46M / 15.6214M
GaitGL (ICCV 2021) | 97.5 (97.4) | 95.1 (94.5) | 83.5 (83.6) | gaitgl.yaml | 64x44 | 31s | 3.10M

NM, BG, and CL denote the normal, bag-carrying, and coat-wearing walking conditions of CASIA-B; the results in parentheses are those reported in the original papers.

Note:

  • All models were tested on CASIA-B (Rank@1 accuracy, excluding identical-view cases).
  • The reported result of GLN is obtained without the compact block.
  • Only 2 RTX 6000 GPUs are used during the inference phase.
  • The results on OUMVLP will be released soon; its inference takes only about 90 seconds (Baseline, 8 RTX 6000 GPUs).

Get Started

Installation

  1. Clone this repo:

    git clone https://github.com/ShiqiYu/OpenGait.git
    
  2. Install dependencies:

    • pytorch >= 1.6
    • torchvision
    • pyyaml
    • tensorboard
    • opencv-python
    • tqdm

    Install dependencies with Anaconda:

    conda install tqdm pyyaml tensorboard opencv
    conda install pytorch==1.6.0 torchvision -c pytorch
    

    Or, install dependencies with pip:

    pip install tqdm pyyaml tensorboard opencv-python
    pip install torch==1.6.0 torchvision==0.7.0
    
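    After installing the dependencies, you can optionally run a quick sanity check (a generic snippet, not part of OpenGait) to confirm that the packages import and that CUDA GPUs are visible, since DDP training expects at least one:

    import torch, torchvision, cv2, yaml, tqdm   # imports succeed only if the dependencies are installed

    print(torch.__version__, torchvision.__version__)   # expect >= 1.6 and >= 0.7
    print(torch.cuda.is_available())                     # True if a CUDA GPU is usable
    print(torch.cuda.device_count())                     # number of GPUs visible to this process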

Prepare dataset

See prepare dataset.

Train

Train a model by running:

CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 lib/main.py --cfgs ./config/baseline.yaml --phase train
  • python -m torch.distributed.launch Our implementation is based on DistributedDataParallel.
  • --nproc_per_node The number of GPUs to use; it must equal the number of devices listed in CUDA_VISIBLE_DEVICES.
  • --cfgs The path to the config file.
  • --phase Specified as train.
  • --iter You can specify an iteration number, or use restore_hint in the configuration file and resume training from there.
  • --log_to_file If specified, the log is also written to disk.

You can run the commands in train.sh to train different models.

Test

Evaluate a trained model by running:

CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 lib/main.py --cfgs ./config/baseline.yaml --phase test
  • --phase Specified as test.
  • --iter You can specify an iteration number, or use restore_hint in the configuration file and restore the model from there.

Tip: The other arguments are the same as in the train phase.

You can run the commands in test.sh to test different models.

Customize

If you want to customize your own model, see here.

Warning

  • Some models may not be compatible with AMP; you can disable it by setting enable_float16 to False.
  • In DDP mode, zombie processes may remain when the program terminates abnormally. You can clear them with kill $(ps aux | grep main.py | grep -v grep | awk '{print $2}').
  • We implemented testing during training, but it slightly affects the results. None of our published models use this functionality; you can disable it by setting with_test to False.

Authors:

Open Gait Team (OGT)

Acknowledgement

Comments
  • Implementation details of the GLN network

    Detailed description

    In the paper, the lateral connections use 1x1 convolutions, but the code implements them with a 3x3 kernel. Does this implementation give better performance? Just asking out of curiosity.

    Then at each stage, a 1 × 1 convolutional layer is taken to rearrange the features and adjust the channel dimension

    lateral_layer code

    https://github.com/ShiqiYu/OpenGait/blob/6a469fcd8d0fbe23262ec3d751cf74ec341f17b6/lib/modeling/models/gln.py#L54-L59

    opened by luwanglin 12
  • Unable to find a valid cuDNN algorithm to run convolution

    I tested on a laptop with only one GPU, so I used CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 lib/main.py --cfgs ./config/gaitset.yaml --phase train. It fails with: Traceback (most recent call last): File "lib/main.py", line 66, in run_model(cfgs, training) File "lib/main.py", line 49, in run_model Model.run_train(model) File "/home/yq/OpenGait/lib/modeling/base_model.py", line 371, in run_train model.train_step(loss_sum) File "/home/yq/OpenGait/lib/modeling/base_model.py", line 307, in train_step self.Scaler.scale(loss_sum).backward() File "/home/yq/anaconda3/envs/pt14/lib/python3.7/site-packages/torch/tensor.py", line 185, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/home/yq/anaconda3/envs/pt14/lib/python3.7/site-packages/torch/autograd/init.py", line 127, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: Unable to find a valid cuDNN algorithm to run convolution Exception raised from try_all at /pytorch/aten/src/ATen/native/cudnn/Conv.cpp:692 (most recent call first): What could be causing this?

    documentation 
    opened by scyzero 10
  • Some confusion about GaitEdge forward

    In GaitEdge inference, why does it still need the silhouette as input? Can't it get the silhouette from the output of the segmentation U-Net? Also, I don't understand why it needs the ratios as input. Isn't the end-to-end network supposed to take just an RGB image as input?

    def forward(self, inputs):
            ipts, labs, _, _, seqL = inputs
    
            ratios = ipts[0]
            rgbs = ipts[1]
            sils = ipts[2]
    
            n, s, c, h, w = rgbs.size()
            rgbs = rgbs.view(n*s, c, h, w)
            sils = sils.view(n*s, 1, h, w)
            logis = self.Backbone(rgbs)  # [n, s, c, h, w]
            logits = torch.sigmoid(logis)
            mask = torch.round(logits).float()
            if self.is_edge:
                edge_mask, eroded_mask = self.preprocess(sils)
    
                # Gait Synthesis
                new_logits = edge_mask*logits+eroded_mask*sils
    
                if self.align:
                    cropped_logits = self.gait_align(
                        new_logits, sils, ratios)
                else:
                    cropped_logits = self.resize(new_logits)
            else:
                if self.align:
                    cropped_logits = self.gait_align(
                        logits, mask, ratios)
                else:
                    cropped_logits = self.resize(logits)
            _, c, H, W = cropped_logits.size()
            cropped_logits = cropped_logits.view(n, s, H, W)
            retval = super(GaitEdge, self).forward(
                [[cropped_logits], labs, None, None, seqL])
            retval['training_feat']['bce'] = {'logits': logits, 'labels': sils}
            retval['visual_summary']['image/roi'] = cropped_logits.view(
                n*s, 1, H, W)
    
            return retval
    
    opened by enemy1205 9
  • Can the code run in a Windows 11 Anaconda environment?

    System information (version)

    • Pytorch = 1.6.0
    • Operating System / Platform = Windows 11
    • Cuda = 10.1.243

    Detailed description

    The problem appears when I run the code on my computer with the above system information. I created a new conda env and cloned this code into it, but I cannot run the code; the errors are below: (opengait) C:\Users\Lenovo\OpenGait>set CUDA_VISIBLE_DEVICES=1 & python -m torch.distributed.launch --nproc_per_node=2 lib/main.py --cfgs ./config/baseline.yaml --phase train


    Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.


    Traceback (most recent call last): Traceback (most recent call last): File "lib/main.py", line 58, in File "lib/main.py", line 58, in torch.distributed.init_process_group('nccl', init_method='env://') torch.distributed.init_process_group('nccl', init_method='env://') AttributeError: module 'torch.distributed' has no attribute 'init_process_group' AttributeError: module 'torch.distributed' has no attribute 'init_process_group' Traceback (most recent call last): File "C:\Users\Lenovo\anaconda3\envs\opengait\lib\runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "C:\Users\Lenovo\anaconda3\envs\opengait\lib\runpy.py", line 87, in _run_code exec(code, run_globals) File "C:\Users\Lenovo\anaconda3\envs\opengait\lib\site-packages\torch\distributed\launch.py", line 261, in main() File "C:\Users\Lenovo\anaconda3\envs\opengait\lib\site-packages\torch\distributed\launch.py", line 256, in main raise subprocess.CalledProcessError(returncode=process.returncode, subprocess.CalledProcessError: Command '['C:\Users\Lenovo\anaconda3\envs\opengait\python.exe', '-u', 'lib/main.py', '--local_rank=1', '--cfgs', './config/baseline.yaml', '--phase', 'train']' returned non-zero exit status 1.

    opened by robot-droid 9
  • Error when testing with the baseline model

    System information (version)

    Detailed description

    [2022-02-17 12:42:57] [INFO]: {'dataset_name': 'CASIA-B', 'dataset_root': './dataset_output', 'num_workers': 1, 'dataset_partition': './misc/partitions/CASIA-B_include_005.json', 'remove_no_gallery': False, 'cache': False, 'test_dataset_name': 'CASIA-B'} [2022-02-17 12:42:57] [INFO]: -------- Test Pid List -------- [2022-02-17 12:42:57] [INFO]: ['075'] [2022-02-17 12:43:00] [INFO]: Restore Parameters from output/CASIA-B/Baseline/Baseline/checkpoints/Baseline-60000.pt !!! [2022-02-17 12:43:00] [INFO]: Parameters Count: 3.77914M [2022-02-17 12:43:00] [INFO]: Model Initialization Finished! Transforming: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 110/110 [00:01<00:00, 76.67it/s] Traceback (most recent call last): File "lib/main.py", line 70, in run_model(cfgs, training) File "lib/main.py", line 55, in run_model Model.run_test(model) File "/home/lalaland/PycharmProjects/OpenGait_/lib/modeling/base_model.py", line 470, in run_test return eval_func(info_dict, dataset_name, **valid_args) File "/home/lalaland/PycharmProjects/OpenGait_/lib/utils/evaluation.py", line 79, in identification 0) * 100 / dist.shape[0], 2) ValueError: could not broadcast input array from shape (4,) into shape (5,) ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 10590) of binary: /home/lalaland/anaconda3/envs/torch/bin/python Traceback (most recent call last): File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/runpy.py", line 193, in _run_module_as_main "main", mod_spec) File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/site-packages/torch/distributed/launch.py", line 193, in main() File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/site-packages/torch/distributed/launch.py", line 189, in main launch(args) File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/site-packages/torch/distributed/launch.py", line 174, in launch run(args) File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/site-packages/torch/distributed/run.py", line 713, in run )(*cmd_args) File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in call return launch_agent(self._config, self._entrypoint, list(args)) File "/home/lalaland/anaconda3/envs/torch/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent failures=result.failures, torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

    lib/main.py FAILED

    Failures: <NO_OTHER_FAILURES>

    Root Cause (first observed failure): [0]: time : 2022-02-17_12:43:06 host : lalaland-Predator-PH317-55 rank : 0 (local_rank: 0) exitcode : 1 (pid: 10590) error_file: <N/A> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

    question 
    opened by SerJamie 8
  • What does this function do?

    https://github.com/ShiqiYu/OpenGait/blob/49cbc44069878210e11117697be507921db1e3fa/lib/modeling/modules.py#L34 At the end of this function, isn't return x.view(*_) equivalent to return x.view(n, s, c, h, w)? If so, the function doesn't seem to change anything, so what is it actually doing? I don't understand it; please advise.
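
    For reference, a minimal sketch of the pack-and-unpack pattern such a wrapper typically implements (the names below are illustrative, not the repository's code): the closing view is only a no-op if nothing runs in between, but here a per-frame 2D module runs on the folded batch.

    import torch
    import torch.nn as nn

    class FrameWiseWrapper(nn.Module):
        """Illustrative only: applies a 2D module to every frame of a
        [n, s, c, h, w] sequence by folding the sequence axis into the batch."""

        def __init__(self, module_2d: nn.Module):
            super().__init__()
            self.module_2d = module_2d

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            n, s, c, h, w = x.size()
            x = x.view(n * s, c, h, w)       # pack: frames become batch entries
            x = self.module_2d(x)            # the per-frame computation happens here
            _, c2, h2, w2 = x.size()         # channels/spatial dims may have changed
            return x.view(n, s, c2, h2, w2)  # unpack: restore the sequence axis

    conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
    y = FrameWiseWrapper(conv)(torch.randn(2, 5, 1, 64, 44))
    print(y.shape)  # torch.Size([2, 5, 8, 64, 44])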

    question 
    opened by HiAleeYang 8
  • Training GaitGL on OUMVLP with the default config is too slow to tolerate

    I tried GaitGL on the OUMVLP dataset, with 6000 IDs for training and the rest for testing, using the default config on 8 Tesla V100s; it takes 165 seconds per 100 iterations. If I want 10 epochs, time = 10 (epochs) x 130000 (seqs) / 8 (batch size) / 100 x 165 s ≈ 3.1 days.

    I guess computing the triplet loss might cost a lot of time, so I'm trying to modify the training procedure so that the triplet loss only takes effect in later epochs, while the early epochs only compute the ID loss, i.e. the softmax loss (see the sketch below).

    Are there any other ways to speed it up?

    thxs
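
    A minimal sketch of the idea described above (purely illustrative; triplet_fn, the argument layout, and the warm-up length are assumptions, not OpenGait's code):

    import torch
    import torch.nn as nn

    def combined_loss(embeddings, logits, labels, iteration,
                      triplet_fn, ce_fn=nn.functional.cross_entropy,
                      warmup_iters=20000):
        """Compute only the softmax (ID) loss during warm-up iterations; add the
        triplet term afterwards, so its computation is skipped entirely early on."""
        loss = ce_fn(logits, labels)
        if iteration >= warmup_iters:
            loss = loss + triplet_fn(embeddings, labels)
        return loss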

    opened by silverlining21 8
  • GREW pretreatment `to_pickle` has size 0

    System information (version)

    • Pytorch => 1.11
    • Operating System / Platform => Ubuntu 20.04
    • Cuda => 11.3

    Detailed description

    I'm trying to run the GREW pretreatment code, but it generates no GREW-pkl folder at the end of the process. I debugged it myself, checking whether the --dataset flag is set properly and the size of the to_pickle list before the pickle file is saved. The flag is set correctly, but the size of the list is always 0.

    I downloaded the GREW dataset from the link you sent me and made the GREW-rearranged folder using the code provided. I'll keep investigating what is causing this error, and if I find it I'll open a PR with a fix.

    Issue submission checklist

    • [x] I checked the problem with documentation, FAQ, issues, and have not found solution.
    no-issue-activity 
    opened by gosiqueira 7
  • What does 'SeqL' mean?

    Hi, https://github.com/ShiqiYu/OpenGait/blob/49cbc44069878210e11117697be507921db1e3fa/lib/modeling/modules.py#L61 I guess it means the length of the sequence. What does seqL[0] mean? By the way, what is the shape of seqL, and what does each dimension of seqL correspond to?

    opened by aleeyang 7
  • How to test with Baseline-60000.pt

    My command produces no response: CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 lib/main.py --cfgs ./config/baseline.yaml --phase train. How should baseline.yaml be configured?

    question 
    opened by w229850 7
  • How to fix this problem when I want to test

    System information (version)

    • Pytorch => :grey_question:
    • Operating System / Platform => :grey_question:
    • Cuda => :grey_question:

    Detailed description

    image

    Steps to reproduce

    Issue submission checklist

    • [ ] I checked the problem with documentation, FAQ, issues, and have not found solution.
    opened by Flyooofly 6
  • Reconstruct LossAggregator and fix some typos in config files

    1. The new config files brought by Gait3D have some typos.
    2. The old LossAggregator is a plain Python class that cannot register parameters with the base_model, which makes it inconvenient to train models with losses that carry parameters of their own, such as center loss.
      Therefore, it seems better to implement LossAggregator as an nn.Module and use nn.ModuleDict to organize all losses (see the sketch below). In this new version, LossAggregator can still be indexed like a regular Python dictionary (the same as the current version), but the modules it contains are properly registered and visible to all Module methods. All parameters registered in the losses can be accessed via self.parameters(), so they can be trained properly.
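
    A minimal sketch of the nn.ModuleDict approach described above (illustrative only, not the actual PR code; the forward signature is an assumption):

    import torch
    import torch.nn as nn

    class LossAggregator(nn.Module):
        """Losses live in an nn.ModuleDict, so any parameters they hold
        (e.g. center-loss centers) are registered and appear in self.parameters()."""

        def __init__(self, losses: dict):
            super().__init__()
            self.losses = nn.ModuleDict(losses)

        def __getitem__(self, name):
            return self.losses[name]   # still indexable like a plain dict

        def forward(self, training_feats: dict) -> torch.Tensor:
            # training_feats maps each loss name to that loss's keyword arguments.
            return sum(loss(**training_feats[name]) for name, loss in self.losses.items())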
    opened by wj1tr0y 1
  • ValueError: The input size of each GPU must be 1 in testing mode

    After training GaitGL for 80000 epochs, I test GaitGL by running "CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --master_port 12345 --nproc_per_node=1 opengait/main.py --cfgs ./config/gaitgl/gaitgl.yaml --phase test", but get "ValueError: The input size of each GPU must be 1 in testing mode, but got 4!" image

    opened by zhang123-sys 5
  • What is the amount of computation and parameters of the GaitEdge?

    System information (version)

    • Pytorch => :grey_question:
    • Operating System / Platform => :grey_question:
    • Cuda => :grey_question:

    Detailed description

    Steps to reproduce

    Issue submission checklist

    • [ ] I checked the problem with documentation, FAQ, issues, and have not found solution.
    opened by puyiwen 0
Releases: v1.1

Owner: Shiqi Yu, Associate Professor, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China.