Rethinking of Pedestrian Attribute Recognition: A Reliable Evaluation under Zero-Shot Pedestrian Identity Setting

Overview

Rethinking of Pedestrian Attribute Recognition: A Reliable Evaluation under Zero-Shot Pedestrian Identity Setting (official PyTorch implementation)

This paper, submitted to TIP, is an extension of the previous arXiv paper.

This project aims to

  1. provide a baseline for pedestrian attribute recognition.
  2. provide two new datasets, RAPzs and PETAzs, constructed under the zero-shot pedestrian identity setting.
  3. provide a general training pipeline for pedestrian attribute recognition and multi-label classification tasks.

This project provides

  1. DDP training, which is mainly used for multi-label classification.
  2. Training on all attributes, but testing only on "selected" attributes, because the proportion of positive samples for the remaining attributes is below a threshold such as 0.01 (a rough sketch of this selection follows this list).
    1. For PETA and PETAzs, 35 of the 105 attributes are selected for performance evaluation.
    2. For RAPv1, 51 of the 92 attributes are selected for performance evaluation.
    3. For RAPv2 and RAPzs, 54 and 53 of the 152 attributes are selected for performance evaluation.
    4. For PA100k, all attributes are selected for performance evaluation.
    • However, training on all attributes does not bring a consistent performance improvement across datasets.
  3. EMA (exponential moving average) model.
  4. Transformer-based models, such as Swin Transformer (with a large performance improvement) and ViT.
  5. Convenient dataset info files, such as dataset_all.pkl.
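
As a rough illustration of the attribute-selection rule in item 2, the sketch below keeps only attributes whose positive-sample proportion in the training annotations exceeds a threshold. The function name and the random example data are assumptions for illustration, not the repository's actual code:

    import numpy as np

    def select_eval_attributes(train_labels, threshold=0.01):
        """Keep attributes whose positive-sample proportion exceeds the threshold.

        train_labels: (num_samples, num_attributes) binary annotation matrix.
        Returns the indices of the attributes used for evaluation.
        """
        positive_rate = train_labels.mean(axis=0)   # fraction of positive samples per attribute
        return np.where(positive_rate > threshold)[0]

    # Example with random annotations: rare attributes are filtered out.
    rng = np.random.default_rng(0)
    labels = (rng.random((1000, 105)) < rng.random(105) * 0.05).astype(int)
    print(select_eval_attributes(labels))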

Dataset Info

  • PETA: Pedestrian Attribute Recognition At Far Distance [Paper][Project]

  • PA100K [Paper][Github]

  • RAP: A Richly Annotated Dataset for Pedestrian Attribute Recognition

  • PETAzs & RAPzs: Rethinking of Pedestrian Attribute Recognition: A Reliable Evaluation under Zero-Shot Pedestrian Identity Setting [Paper][Project]

Performance

Pedestrian Attribute Recognition

Datasets | Models         | mA    | Acc   | Prec  | Rec   | F1
PA100k   | resnet50       | 80.21 | 79.15 | 87.79 | 87.01 | 87.40
PA100k   | resnet50*      | 79.85 | 79.13 | 89.45 | 85.40 | 87.38
PA100k   | resnet50 + EMA | 81.97 | 80.20 | 88.06 | 88.17 | 88.11
PA100k   | bninception    | 79.13 | 78.19 | 87.42 | 86.21 | 86.81
PA100k   | TresnetM       | 74.46 | 68.72 | 79.82 | 80.71 | 80.26
PA100k   | swin_s         | 82.19 | 80.35 | 87.85 | 88.51 | 88.18
PA100k   | vit_s          | 79.40 | 77.61 | 86.41 | 86.22 | 86.32
PA100k   | vit_b          | 81.01 | 79.38 | 87.60 | 87.49 | 87.55
PETA     | resnet50       | 83.96 | 78.65 | 87.08 | 85.62 | 86.35
PETAzs   | resnet50       | 71.43 | 58.69 | 74.41 | 69.82 | 72.04
RAPv1    | resnet50       | 79.27 | 67.98 | 80.19 | 79.71 | 79.95
RAPv2    | resnet50       | 78.52 | 66.09 | 77.20 | 80.23 | 78.68
RAPzs    | resnet50       | 71.76 | 64.83 | 78.75 | 76.60 | 77.66
  • The resnet50* model is trained with the weighted loss function proposed by Tan et al. (AAAI 2020).
  • Performance on PETAzs and RAPzs is based on the first version of PETAzs and RAPzs as described in the paper.
  • Experiments are conducted with an input size of (256, 192), so there may be minor differences from the results in the paper.
  • The reported performance is reached at the first drop of the learning rate; we take that checkpoint as the best model.
  • Pretrained models are now provided on Google Drive.
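
The metrics above are the standard pedestrian attribute recognition measures: label-based mean accuracy (mA) and instance-based accuracy, precision, recall, and F1. A minimal sketch of the usual definitions, written with NumPy; this follows the common formulation and is not necessarily the exact evaluation code of this repository:

    import numpy as np

    def par_metrics(gt, pred, eps=1e-20):
        """gt, pred: (num_samples, num_attributes) binary arrays."""
        # Label-based mA: mean of (TPR + TNR) / 2 over attributes.
        tp = ((gt == 1) & (pred == 1)).sum(0)
        tn = ((gt == 0) & (pred == 0)).sum(0)
        pos = (gt == 1).sum(0)
        neg = (gt == 0).sum(0)
        mA = ((tp / (pos + eps) + tn / (neg + eps)) / 2).mean()

        # Instance-based metrics: computed per sample, then averaged.
        inter = ((gt == 1) & (pred == 1)).sum(1)
        union = ((gt == 1) | (pred == 1)).sum(1)
        acc = (inter / (union + eps)).mean()
        prec = (inter / ((pred == 1).sum(1) + eps)).mean()
        rec = (inter / ((gt == 1).sum(1) + eps)).mean()
        f1 = 2 * prec * rec / (prec + rec + eps)
        return mA, acc, prec, rec, f1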

Multi-label Classification

Datasets | Models    | mAP   | CP    | CR    | CF1   | OP    | OR    | OF1
COCO     | resnet101 | 82.75 | 84.17 | 72.07 | 77.65 | 85.16 | 75.47 | 80.02
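
Here mAP is the mean average precision over classes, CP/CR/CF1 are the per-class (macro-averaged) precision, recall, and F1, and OP/OR/OF1 are the overall (micro-averaged) counterparts. A short sketch of the C/O metrics under the assumption that predictions have already been binarized (e.g., by thresholding the scores or taking top-3); the function name is illustrative:

    import numpy as np

    def coco_multilabel_metrics(gt, pred, eps=1e-20):
        """gt, pred: (num_samples, num_classes) binary arrays."""
        tp = ((gt == 1) & (pred == 1)).sum(0).astype(float)
        n_pred = (pred == 1).sum(0).astype(float)   # predicted positives per class
        n_gt = (gt == 1).sum(0).astype(float)       # ground-truth positives per class

        # Per-class (macro) metrics.
        cp = (tp / (n_pred + eps)).mean()
        cr = (tp / (n_gt + eps)).mean()
        cf1 = 2 * cp * cr / (cp + cr + eps)

        # Overall (micro) metrics.
        op = tp.sum() / (n_pred.sum() + eps)
        orec = tp.sum() / (n_gt.sum() + eps)
        of1 = 2 * op * orec / (op + orec + eps)
        return cp, cr, cf1, op, orec, of1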

Pretrained Models

Dependencies

  • Python 3.7
  • PyTorch 1.7.0
  • torchvision 0.8.2
  • CUDA 10.1
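
A quick sanity check that the installed versions match the ones listed above (purely illustrative, not part of the repository):

    import torch
    import torchvision

    print("torch:", torch.__version__)              # expected: 1.7.0
    print("torchvision:", torchvision.__version__)  # expected: 0.8.2
    print("CUDA (build):", torch.version.cuda)      # expected: 10.1
    print("CUDA available:", torch.cuda.is_available())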

Get Started

  1. Run git clone https://github.com/valencebond/Rethinking_of_PAR.git
  2. Create a directory to download the above datasets.
    cd Rethinking_of_PAR
    mkdir data
    
  3. Prepare the datasets so that they have the following structure (a sketch for inspecting the .pkl info files appears at the end of this section):
    ${project_dir}/data
        PETA
            images/
            PETA.mat
            dataset_all.pkl
            dataset_zs_run0.pkl
        PA100k
            data/
            dataset_all.pkl
        RAP
            RAP_dataset/
            RAP_annotation/
            dataset_all.pkl
        RAP2
            RAP_dataset/
            RAP_annotation/
            dataset_zs_run0.pkl
        COCO14
            train2014/
            val2014/
            ml_anno/
                category.json
                coco14_train_anno.pkl
                coco14_val_anno.pkl
    
  4. Train the baseline based on resnet50:
    sh train.sh
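
The directory layout in step 3 relies on pickled info files such as dataset_all.pkl and dataset_zs_run0.pkl. A minimal way to peek inside one of them; the exact fields depend on the preprocessing scripts, so this only prints whatever top-level structure is there, and the path is just an example:

    import pickle

    with open('data/PETA/dataset_all.pkl', 'rb') as f:
        info = pickle.load(f)

    print(type(info))
    # Most of these info files are dict-like; fall back to a truncated repr otherwise.
    if hasattr(info, 'keys'):
        for k in info.keys():
            print(k, type(info[k]))
    else:
        print(repr(info)[:500])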
    

Acknowledgements

The code is based on the repositories of Dangwei Li and Houjing Huang. Thanks for their released code.

Citation

If you use this method or this code in your research, please cite as:

@article{jia2021rethinking,
  title={Rethinking of Pedestrian Attribute Recognition: A Reliable Evaluation under Zero-Shot Pedestrian Identity Setting},
  author={Jia, Jian and Huang, Houjing and Chen, Xiaotang and Huang, Kaiqi},
  journal={arXiv preprint arXiv:2107.03576},
  year={2021}
}