Experiments on Deep Person Re-identification with EfficientNet-v2

Overview

deep-efficient-person-reid

Experiments for a university project with a strong baseline for the person re-identification task.

We evaluated the baseline with Resnet50 and EfficientNet-v2 trained from scratch, as well as Resnet50-IBN-A and EfficientNet-v2 initialized from ImageNet-pretrained weights. We used two datasets: Market-1501 and CUHK03.


Pipeline

pipeline


Implementation Details

  • Random Erasing is applied to augment input images.
  • EfficientNet-v2 / Resnet50 / Resnet50-IBN-A as backbone.
  • Stride = 1 for the last convolution layer. The embedding size is 2048 for Resnet50 / Resnet50-IBN-A and 1280 for EfficientNet-v2. During inference, embedding features are passed through a batch-norm layer, also known as a bottleneck (BNNeck), for better normalization.
  • The loss function combines 3 losses:
    1. Triplet Loss with Hard Example Mining.
    2. Classification Loss (Cross Entropy) with Label Smoothing.
    3. Centroid (Center) Loss, which reduces the distance between each embedding and its class center. Combined with the classification loss, it helps prevent embeddings from collapsing.
  • The default optimizer is AMSGrad with a base learning rate of 3.5e-4 and a multistep learning-rate scheduler that decays at epochs 30 and 55. We also apply mixed-precision training (see the sketches after this list).
  • On both datasets, pretrained models were trained for 60 epochs and non-pretrained models for 100 epochs.
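
The following is a minimal PyTorch sketch of how the pieces above (BNNeck, the three-part loss, AMSGrad with multistep decay, and mixed precision) typically fit together. It is illustrative only and does not reproduce this repo's modules; the names BNNeck, batch_hard_triplet_loss, CenterLoss, and total_loss, as well as the margin, label-smoothing value, and loss weights, are assumptions.

# Illustrative sketch only -- not this repository's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BNNeck(nn.Module):
    """Batch-norm 'bottleneck': triplet/center losses see the raw embedding,
    while the classifier (and inference-time retrieval) use the normalized one."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.bn = nn.BatchNorm1d(feat_dim)
        self.classifier = nn.Linear(feat_dim, num_classes, bias=False)

    def forward(self, feat):
        bn_feat = self.bn(feat)
        return self.classifier(bn_feat), bn_feat

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    # Pairwise Euclidean distances within the batch.
    dist = torch.cdist(embeddings, embeddings, p=2)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    hardest_pos = (dist * same.float()).max(dim=1).values          # farthest same-ID sample
    hardest_neg = (dist + same.float() * 1e9).min(dim=1).values    # closest different-ID sample
    return F.relu(hardest_pos - hardest_neg + margin).mean()

class CenterLoss(nn.Module):
    """Pull each embedding toward a learnable center of its identity class."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, embeddings, labels):
        return ((embeddings - self.centers[labels]) ** 2).sum(dim=1).mean()

# Hypothetical setup: 751 identities (Market-1501 train set), 1280-dim EfficientNet-v2 embeddings.
num_classes, feat_dim = 751, 1280
neck = BNNeck(feat_dim, num_classes)
center_loss = CenterLoss(num_classes, feat_dim)
ce_loss = nn.CrossEntropyLoss(label_smoothing=0.1)   # classification loss with label smoothing

def total_loss(embeddings, labels, center_weight=5e-4):
    logits, _ = neck(embeddings)
    return (ce_loss(logits, labels)
            + batch_hard_triplet_loss(embeddings, labels)
            + center_weight * center_loss(embeddings, labels))

A matching sketch of the optimizer, scheduler, and mixed-precision loop follows; model and train_loader are assumed to exist, and the 0.1 decay factor is also an assumption:

# Illustrative training-loop setup; `model` and `train_loader` are assumed to exist.
params = list(model.parameters()) + list(neck.parameters()) + list(center_loss.parameters())
optimizer = torch.optim.Adam(params, lr=3.5e-4, amsgrad=True)                # AMSGrad variant
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 55], gamma=0.1)
scaler = torch.cuda.amp.GradScaler()                                         # mixed precision

for epoch in range(60):
    for images, labels in train_loader:
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            embeddings = model(images)              # backbone features (e.g. 1280-dim)
            loss = total_loss(embeddings, labels)
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
    scheduler.step()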

Source Structure

.
├── config                  # hyperparameter settings
│   └── ...                 # yaml files
├── datasets                # data loaders
│   └── ...
├── market1501              # Market-1501 dataset
├── cuhk03_release          # CUHK03 dataset
├── samplers                # random samplers
│   └── ...
├── loggers                 # test weights and visualization results
│   └── runs
├── losses                  # loss functions
│   └── ...
├── nets                    # models
│   └── backbones
│       └── ...
├── engine                  # training and testing procedures
│   └── ...
├── metrics                 # mAP and re-ranking
│   └── ...
├── utils                   # wrapper and util functions
│   └── ...
├── train.py                # training code
├── test.py                 # testing code
└── visualize.py            # visualization code

Pretrained Models (on ImageNet)

  • EfficientNet-v2: link
  • Resnet50-IBN-A: link

Notebook

  • Notebook for training, inference, and visualization: Notebook

Setup


  • Install dependencies, then change directory to dertorch/:
pip install -r requirements.txt
cd dertorch/

  • Modify the config files in /configs/. You can tune the parameters for better training and testing; a hypothetical example follows.
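
You can also inspect or override a config programmatically before launching a run. This is a hypothetical sketch: the file names and the keys SOLVER/BASE_LR, SOLVER/MAX_EPOCHS, and INPUT/SIZE_TRAIN are assumptions, not the repo's actual schema.

# Hypothetical example: load a config YAML, tweak a few fields, save a variant.
import yaml

with open("configs/efficientnetv2_market.yml") as f:
    cfg = yaml.safe_load(f)

cfg["SOLVER"]["BASE_LR"] = 3.5e-4        # base learning rate (see Implementation Details)
cfg["SOLVER"]["MAX_EPOCHS"] = 60         # pretrained models are trained for 60 epochs
cfg["INPUT"]["SIZE_TRAIN"] = [256, 128]  # image size used in the results below

with open("configs/my_experiment.yml", "w") as f:
    yaml.safe_dump(cfg, f)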

  • Training:
python train.py --config_file=name_of_config_file
Ex: python train.py --config_file=efficientnetv2_market

  • Testing: results are saved in /loggers/runs; for example, the result from EfficientNet-v2 (Market-1501): link
python test.py --config_file=name_of_config_file
Ex: python test.py --config_file=efficientnetv2_market

  • Visualization: results are saved in /loggers/runs/results/; for example, the result from EfficientNet-v2 (Market-1501): link
python visualize.py --config_file=name_of_config_file
Ex: python visualize.py --config_file=efficientnetv2_market

Examples


Query image 1 query1


Result image 1 result1


Query image 2 query2


Result image 2 result2


Results

  • Market-1501
Models                             Image Size   mAP    Rank-1   Rank-5   Rank-10   Weights
Resnet50 (non-pretrained)          256x128      51.8   74.0     88.2     93.0      link
EfficientNet-v2 (non-pretrained)   256x128      56.5   78.5     91.1     94.4      link
Resnet50-IBN-A                     256x128      77.1   90.7     97.0     98.4      link
EfficientNet-v2                    256x128      69.7   87.1     95.3     97.2      link
Resnet50-IBN-A + Re-ranking        256x128      89.8   92.1     96.5     97.7      link
EfficientNet-v2 + Re-ranking       256x128      85.6   89.9     94.7     96.2      link

  • CUHK03
Models                             Image Size   mAP    Rank-1   Rank-5   Rank-10   Weights
Resnet50 (non-pretrained)          ...          ...    ...      ...      ...       ...
EfficientNet-v2 (non-pretrained)   256x128      10.1   10.1     21.1     29.5      link
Resnet50-IBN-A                     256x128      41.2   41.8     63.1     71.2      link
EfficientNet-v2                    256x128      40.6   42.9     63.1     72.5      link
Resnet50-IBN-A + Re-ranking        256x128      55.6   51.2     64.0     72.0      link
EfficientNet-v2 + Re-ranking       256x128      56.0   51.4     64.7     73.4      link

The results from the EfficientNet-v2 models might improve with proper fine-tuning and longer training; here we reuse the best hyperparameters found for the ResNet models on Market-1501 from the paper cited below and train for only 60-100 epochs.
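
For reference, the mAP and Rank-k (CMC) numbers above are computed from a query-to-gallery distance matrix. Below is a minimal, simplified sketch of such an evaluation; it is independent of this repo's metrics code and omits the usual filtering of gallery images that share both identity and camera with the query.

# Simplified CMC (Rank-k) and mAP from a distance matrix; not the repo's metrics code.
import numpy as np

def evaluate(dist, q_pids, g_pids, topk=(1, 5, 10)):
    """dist: (num_query, num_gallery) distances; q_pids/g_pids: identity label arrays."""
    indices = np.argsort(dist, axis=1)                     # gallery ranked per query
    matches = (g_pids[indices] == q_pids[:, None]).astype(np.float32)

    cmc = np.zeros(dist.shape[1])
    aps, valid = [], 0
    for row in matches:
        if not row.any():                                  # query with no gallery match
            continue
        valid += 1
        cmc[int(np.argmax(row)):] += 1                     # 1 at and after the first hit
        hits = np.cumsum(row)
        precision = hits / (np.arange(len(row)) + 1)
        aps.append(float((precision * row).sum() / row.sum()))

    cmc /= valid
    results = {"Rank-%d" % k: 100 * cmc[k - 1] for k in topk}
    results["mAP"] = 100 * float(np.mean(aps))
    return results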


Citation

@article{DBLP:journals/corr/abs-2104-13643,
  author    = {Mikolaj Wieczorek and
               Barbara Rychalska and
               Jacek Dabrowski},
  title     = {On the Unreasonable Effectiveness of Centroids in Image Retrieval},
  journal   = {CoRR},
  volume    = {abs/2104.13643},
  year      = {2021},
  url       = {https://arxiv.org/abs/2104.13643},
  archivePrefix = {arXiv},
  eprint    = {2104.13643},
  timestamp = {Tue, 04 May 2021 15:12:43 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2104-13643.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
@InProceedings{Luo_2019_CVPR_Workshops,
  author    = {Luo, Hao and Gu, Youzhi and Liao, Xingyu and Lai, Shenqi and Jiang, Wei},
  title     = {Bag of Tricks and a Strong Baseline for Deep Person Re-Identification},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2019}
}

Adapted from: michuanhaohao

Owner
lan.nguyen2k (Tensor Boy)