This is the official implementation of "Pose-guided Feature Disentangling for Occluded Person Re-Identification Based on Transformer" (AAAI 2022).

Overview

PFD: Pose-guided Feature Disentangling for Occluded Person Re-identification Based on Transformer

Python >= 3.6, PyTorch >= 1.6

This repo is the official PyTorch implementation of "Pose-guided Feature Disentangling for Occluded Person Re-identification Based on Transformer" (PFD) by Tao Wang, Hong Liu, Pinhao Song, Tianyu Guo and Wei Shi.

Pipeline

[Framework overview figure]

Dependencies

  • timm==0.3.2

  • torch==1.6.0

  • numpy==1.20.2

  • yacs==0.1.8

  • opencv_python==4.5.2.54

  • torchvision==0.7.0

  • Pillow==8.4.0

Installation

pip install -r requirements.txt

If any packages are missing, please install them manually.
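
For reference, a minimal environment setup is sketched below, assuming a conda-based workflow; the pinned versions simply mirror the dependency list above (Python 3.7 is shown, any version >= 3.6 should work).

# create and activate an isolated environment
conda create -n pfd python=3.7 -y
conda activate pfd
# install the pinned dependencies
pip install torch==1.6.0 torchvision==0.7.0
pip install timm==0.3.2 yacs==0.1.8 numpy==1.20.2 opencv_python==4.5.2.54 Pillow==8.4.0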

Prepare Datasets

mkdir data

Please download the datasets, then unzip them and rename the folders under the data directory as follows:

data
|--market1501
|
|--Occluded_Duke
|
|--Occluded_REID
|
|--MSMT17
|
|--dukemtmcreid
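
As an illustration, assuming the dataset archives have been downloaded into the repository root (the archive names below are only examples and depend on where each dataset was obtained), the layout above can be created with:

mkdir data
# unzip each archive into data/ and rename the extracted folder to match the layout above
unzip Occluded_Duke.zip -d data/
unzip Market-1501-v15.09.15.zip -d data/
mv data/Market-1501-v15.09.15 data/market1501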

Prepare ViT Pre-trained and HRNet Pre-trained Models

mkdir weights

The ViT pre-trained model can be found at ViT_Base, and the HRNet pre-trained model can be found at HRNet. Please download them and put them in the './weights' directory.
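
After this step the weights directory should look roughly as follows; the checkpoint file names are the ones used in the example config quoted in the Comments section at the end of this page.

weights
|--jx_vit_base_p16_224-80ecf9dd.pth      # ViT-Base, set as PRETRAIN_PATH
|--pose_hrnet_w48_384x288.pth            # HRNet-W48, set as POSE_WEIGHT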

Training

We use one GeForce GTX 1080 Ti GPU for training. Before training the model, please modify the parameters in the config file; for a description of the arguments, please refer to the Arguments section of TransReID.

python occ_train.py --config_file {config_file path}
#example
python occ_train.py --config_file 'configs/OCC_Duke/skeleton_pfd.yml'
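
Training on the other datasets follows the same pattern and only the config file changes. As a sketch (the Market-1501 config path below is the one referenced in the Comments section at the end of this page; check the configs/ folder for the exact file names):

# Market-1501, for example
python occ_train.py --config_file 'configs/Market1501/skeleton_pfd.yml'
# checkpoints (e.g. skeleton_transformer_300.pth) and logs are written to the OUTPUT_DIR set in the yml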

Test the model

First, download the Occluded-Duke model: Occluded-Duke.

To test the pretrained model on Occ-Duke, modify the pre-trained model paths in the yml file (PRETRAIN_PATH: ViT_Base, POSE_WEIGHT: HRNet, WEIGHT: Occluded-Duke) and then run:

## OccDuke for example
python test.py --config_file 'configs/OCC_Duke/skeleton_pfd.yml'
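
To evaluate a checkpoint you trained yourself, point TEST.WEIGHT in the corresponding yml at your own .pth file and run the same script. A sketch for Market-1501, with the config path and log format taken from the run reported in the Comments section at the end of this page:

# Market-1501, assuming TEST.WEIGHT points at your trained checkpoint
python test.py --config_file 'configs/Market1501/skeleton_pfd.yml'
# the script logs validation results of the form:
#   PFDreid.test INFO: mAP: ...
#   PFDreid.test INFO: CMC curve, Rank-1 :...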

Occluded-Duke Results

Model        Image Size   Rank-1   mAP
HOReID       256×128      55.1     43.8
PAT          256×128      64.5     53.6
TransReID    256×128      64.2     55.7
PFD          256×128      67.7     60.1
TransReID*   256×128      66.4     59.2
PFD*         256×128      69.5     61.8

* means the encoder uses a small-stride sliding-window setting.

Occluded-REID Results

Model    Image Size   Rank-1   mAP
HOReID   256×128      80.3     70.2
PAT      256×128      81.6     72.1
PFD      256×128      79.8     81.3

Market-1501 Results

Model       Image Size   Rank-1   mAP
HOReID      256×128      80.3     70.2
PAT         256×128      95.4     88.0
TransReID   256×128      95.4     88.0
PFD         256×128      95.5     89.6

Citation

If you find our work useful in your research, please consider citing this paper! (preprint version will be available soon)

@inproceedings{wang2022pfd,
  title     = {Pose-guided Feature Disentangling for Occluded Person Re-identification based on Transformer},
  author    = {Wang, Tao and Liu, Hong and Song, Pinhao and Guo, Tianyu and Shi, Wei},
  booktitle = {AAAI},
  year      = {2022}
}

Acknowledgement

Our code is extended from existing open-source repositories. We thank the authors for releasing their code.

License

This project is licensed under the terms of the MIT license.

Comments
  • The accuracy does not reach the numbers reported in the paper

    Hello author, I ran a test on Market-1501. I only modified /home/zqx_3090/PersonReID/PersonReID2/PFD_Net-master/configs/Market1501/skeleton_pfd.yml: the parameters inside were left unchanged, and I only changed the weight paths and the dataset folder path. After training for 300 epochs I took the final checkpoint /home/zqx_3090/PersonReID/PersonReID2/PFD_Net-master/logs/Market/pfd_net/skeleton_transformer_300.pth for testing, and the result is:

    2021-12-28 18:23:39,417 PFDreid.test INFO: Validation Results
    2021-12-28 18:23:39,417 PFDreid.test INFO: mAP: 88.2%
    2021-12-28 18:23:39,418 PFDreid.test INFO: CMC curve, Rank-1 :94.8%
    2021-12-28 18:23:39,418 PFDreid.test INFO: CMC curve, Rank-5 :98.3%
    2021-12-28 18:23:39,418 PFDreid.test INFO: CMC curve, Rank-10 :99.0%

    This does not reach the 95.5 reported in the paper and is even below TransReID's accuracy. Could you take a look at why this happens? My config file is:

    MODEL:
      PRETRAIN_CHOICE: 'imagenet'
      PRETRAIN_PATH: '/home/zqx_3090/PersonReID/PersonReID2/PFD_Net-master/weights/jx_vit_base_p16_224-80ecf9dd.pth'
      METRIC_LOSS_TYPE: 'triplet'
      IF_LABELSMOOTH: 'on'
      IF_WITH_CENTER: 'no'
      NAME: 'skeleton_transformer'
      NO_MARGIN: True
      DEVICE_ID: ('2')
      TRANSFORMER_TYPE: 'vit_base_patch16_224_TransReID'
      STRIDE_SIZE: [16, 16]
      SIE_CAMERA: True
      SIE_COE: 3.0
      JPM: True
      RE_ARRANGE: True
      NUM_HEAD: 8
      DECODER_DROP_RATE: 0.1
      DROP_FIRST: False
      NUM_DECODER_LAYER: 6
      QUERY_NUM: 17
      POSE_WEIGHT: '/home/zqx_3090/PersonReID/PersonReID2/PFD_Net-master/weights/pose_hrnet_w48_384x288.pth'
      SKT_THRES: 0.2

    INPUT:
      SIZE_TRAIN: [256, 128]
      SIZE_TEST: [256, 128]
      PROB: 0.5 # random horizontal flip
      RE_PROB: 0.5 # random erasing
      PADDING: 10
      PIXEL_MEAN: [0.5, 0.5, 0.5]
      PIXEL_STD: [0.5, 0.5, 0.5]

    DATASETS:
      NAMES: ('market1501')
      ROOT_DIR: ('/home/zqx_3090/PersonReID/PersonReID2/PFD_Net-master/data/')

    DATALOADER:
      SAMPLER: 'softmax_triplet'
      NUM_INSTANCE: 4
      NUM_WORKERS: 8

    SOLVER:
      OPTIMIZER_NAME: 'SGD'
      MAX_EPOCHS: 300
      BASE_LR: 0.008
      IMS_PER_BATCH: 64
      WARMUP_METHOD: 'linear'
      LARGE_FC_LR: False
      CHECKPOINT_PERIOD: 60
      LOG_PERIOD: 50
      EVAL_PERIOD: 30
      WEIGHT_DECAY: 1e-4
      WEIGHT_DECAY_BIAS: 1e-4
      BIAS_LR_FACTOR: 2

    TEST:
      EVAL: True
      IMS_PER_BATCH: 256
      RE_RANKING: False
      WEIGHT: "/home/zqx_3090/PersonReID/PersonReID2/PFD_Net-master/logs/Market/pfd_net/skeleton_transformer_300.pth" # put your own pth
      NECK_FEAT: 'before'
      FEAT_NORM: 'yes'

    OUTPUT_DIR: 'logs/Market/pfd_net'

    opened by zqx951102 3
  • The provided Occluded-Duke pretrained model does not reach the results in the paper

    Hello author: thank you for this excellent work. Following the README, when I evaluate your Occluded-Duke pretrained model I cannot reach the results reported in the paper; my test results (screenshot omitted) are roughly 2% below those in the paper. I am using PyTorch 1.7.1, CUDA 10.2 and Python 3.7.13. Do you know what could be causing this? Looking forward to your reply.

    opened by changshuowang 2
  • There is no Occluded-REID data loader

    Good work! I respect your contributions!

    I want to test the Occluded-REID dataset with your code, but there is no loader. In your code, dataset.make_dataloader.py, line 14 has "from .occ_reid import Occluded_REID".

    Would you share this code?

    Thank you.

    opened by intlabSeJun 4