Corruption Invariant Learning for Re-identification

Overview

The official repository for Benchmarks for Corruption Invariant Person Re-identification (NeurIPS 2021 Track on Datasets and Benchmarks), with an exhaustive study of corruption invariant learning on single- and cross-modality ReID datasets, including Market-1501-C, CUHK03-C, MSMT17-C, SYSU-MM01-C, and RegDB-C.

Maintenance Plan

The benchmark will be maintained by the authors. We will keep track of newly proposed ReID models and evaluate them under the CIL benchmark settings in a timely manner. We also welcome feedback on the CIL benchmark and contributions of new ReID models with corresponding evaluations. Please feel free to contact us at [email protected] .

TODO:

  • configurations for other datasets
  • get-started tutorial
  • more detailed statistical evaluations
  • checkpoints of the baseline models
  • cross-modality person Re-ID dataset, CUHK-PEDES
  • other ReID datasets, like VehicleID, VeRi-776, etc.

(Note: codebase from TransReID)

Quick Start

1. Install dependencies

  • python=3.7.0
  • pytorch=1.6.0
  • torchvision=0.7.0
  • timm=0.4.9
  • albumentations=0.5.2
  • imagecorruptions=1.1.2
  • h5py=2.10.0
  • cython=0.29.24
  • yacs=0.1.6

2. Prepare dataset

Download the datasets Market-1501, CUHK03, and MSMT17. Set the dataset root path in configs/Market/resnet_base.yml, DATASETS: ROOT_DIR: ('root'), or set it in scripts/train_market.sh, DATASETS.ROOT_DIR "('root')".
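
The DATASETS.ROOT_DIR override follows the yacs config style (yacs is listed in the dependencies above). Below is a minimal sketch of what such an override does; the default value and the override path are placeholders, not the repo's actual settings.

from yacs.config import CfgNode as CN

# Mirror only the key used above; the full config is defined inside the repo.
cfg = CN()
cfg.DATASETS = CN()
cfg.DATASETS.ROOT_DIR = '../data'  # placeholder default, not the repo's value

# Equivalent of appending DATASETS.ROOT_DIR "('root')" on the command line:
cfg.merge_from_list(['DATASETS.ROOT_DIR', '/path/to/datasets'])
print(cfg.DATASETS.ROOT_DIR)       # -> /path/to/datasets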

3. Train

Train a CIL model on Market-1501:

sh ./scripts/train_market.sh

4. Test

Test the CIL model on Market-1501:

sh ./scripts/eval_market.sh

Evaluating Corruption Robustness On-the-fly

Corruption Transform

The main code of the corruption transform (see the surrounding code in ./datasets/make_dataloader.py, line 59):

import random

import numpy as np
from PIL import Image
from imagecorruptions.corruptions import *

# 20 corruption types: 19 functions from the imagecorruptions package plus the
# custom rain corruption defined in the "Rain details" section below.
corruption_function = [gaussian_noise, shot_noise, impulse_noise, defocus_blur,
    glass_blur, motion_blur, zoom_blur, snow, frost, fog, brightness, contrast,
    elastic_transform, pixelate, jpeg_compression, speckle_noise,
    gaussian_blur, spatter, saturate, rain]


class corruption_transform(object):
    """Corrupt a PIL image with a chosen (or random) corruption type and severity."""

    def __init__(self, level=0, type='all'):
        self.level = level  # 1-5 fixes the severity; any other value samples it per call
        self.type = type    # a corruption function name, or 'all' for a random type per call

    def __call__(self, img):
        # Pick the severity level.
        if self.level > 0 and self.level < 6:
            level_idx = self.level
        else:
            level_idx = random.choice(range(1, 6))
        # Pick the corruption function.
        if self.type == 'all':
            corrupt_func = random.choice(corruption_function)
        else:
            func_name_list = [f.__name__ for f in corruption_function]
            corrupt_idx = func_name_list.index(self.type)
            corrupt_func = corruption_function[corrupt_idx]
        c_img = corrupt_func(img.copy(), severity=level_idx)
        img = Image.fromarray(np.uint8(c_img))
        return img
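
For example, the transform can be applied directly to a single PIL image (the image path below is a placeholder):

from PIL import Image

img = Image.open('query.jpg').convert('RGB')  # placeholder path to any person crop

# random corruption type at a random severity (how the corrupted test loader below uses it)
rand_corrupt = corruption_transform(level=0, type='all')
img_rand = rand_corrupt(img)

# a single fixed corruption at a fixed severity
blur_corrupt = corruption_transform(level=3, type='motion_blur')
img_blur = blur_corrupt(img)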

Evaluating corruption robustness can be realized on-the-fly by modifying the transform function used in the test dataloader. (See details in ./datasets/make_dataloader.py, line 266.)

# T is torchvision.transforms; cfg.INPUT.SIZE_TEST comes from the yacs config.
val_with_corruption_transforms = T.Compose([
    corruption_transform(0),   # random corruption type, random severity per image
    T.Resize(cfg.INPUT.SIZE_TEST),
    T.ToTensor(),])
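
As a rough sketch (the repo builds its own ReID datasets in ./datasets/make_dataloader.py; ImageFolder and the gallery path below are only stand-ins), the corrupted transform can be plugged into any torchvision-style test dataset:

import torch
from torchvision import datasets

gallery_set = datasets.ImageFolder('/path/to/gallery',
                                   transform=val_with_corruption_transforms)
gallery_loader = torch.utils.data.DataLoader(gallery_set, batch_size=128,
                                             shuffle=False, num_workers=4)

for imgs, _ in gallery_loader:
    print(imgs.shape)  # corrupted test batch of shape (batch, 3, H, W)
    break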

Rain details

We introduce a rain corruption type, a common weather condition that is missing from the original corruption benchmark. (See details in ./datasets/make_dataloader.py, line 27.)

import albumentations as abm
import numpy as np


def rain(image, severity=1):
    """Add synthetic rain to a PIL image and return a uint8 numpy array."""
    if severity == 1:
        type = 'drizzle'
    elif severity == 2 or severity == 3:
        type = 'heavy'
    elif severity == 4 or severity == 5:
        type = 'torrential'
    blur_value = 2 + severity
    bright_value = -(0.05 + 0.05 * severity)
    rain = abm.Compose([
        abm.augmentations.transforms.RandomRain(rain_type=type,
            blur_value=blur_value, brightness_coefficient=1, always_apply=True),
        abm.augmentations.transforms.RandomBrightness(limit=[bright_value,
            bright_value], always_apply=True)])
    width, height = image.size
    # Very small crops are upscaled before applying the rain augmentation.
    if height <= 60:
        scale_factor = 65.0 / height
        new_size = (int(width * scale_factor), 65)
        image = image.resize(new_size)
    return rain(image=np.array(image))['image']
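
For reference, a short sketch that previews the rain corruption at every severity level (the input path is a placeholder):

from PIL import Image

img = Image.open('person.jpg').convert('RGB')  # placeholder path to any RGB person crop

for severity in range(1, 6):
    rainy = rain(img, severity=severity)       # uint8 numpy array
    Image.fromarray(rainy).save(f'rain_severity_{severity}.jpg')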

Baselines

  • Single-modality datasets
                                                                                   
| Dataset | Method | Clean mINP | Clean mAP | Clean Rank-1 | Corrupted mINP | Corrupted mAP | Corrupted Rank-1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Market-1501 | BoT | 59.30 | 85.06 | 93.38 | 0.20 | 8.42 | 27.05 |
| Market-1501 | AGW | 64.03 | 86.51 | 94.00 | 0.35 | 12.13 | 31.90 |
| Market-1501 | SBS | 60.03 | 88.33 | 95.90 | 0.29 | 11.54 | 34.13 |
| Market-1501 | CIL (ours) | 57.90 | 84.04 | 93.38 | 1.76 (0.13) | 28.03 (0.45) | 55.57 (0.63) |
| MSMT17 | BoT | 9.91 | 48.34 | 73.53 | 0.07 | 5.28 | 20.20 |
| MSMT17 | AGW | 12.38 | 51.84 | 75.21 | 0.08 | 6.53 | 22.77 |
| MSMT17 | SBS | 10.26 | 56.62 | 82.02 | 0.05 | 7.89 | 28.77 |
| MSMT17 | CIL (ours) | 12.45 | 52.40 | 76.10 | 0.32 (0.03) | 15.33 (0.20) | 39.79 (0.45) |
| CUHK03 | AGW | 49.97 | 62.25 | 64.64 | 0.46 | 3.45 | 5.90 |
| CUHK03 | CIL (ours) | 53.87 | 65.16 | 67.29 | 4.25 (0.39) | 16.33 (0.76) | 22.96 (1.04) |
  • Cross-modality datasets

Note: For the RegDB dataset, Mode A and Mode B denote the visible-to-thermal and thermal-to-visible settings, respectively; for the SYSU-MM01 dataset, Mode A and Mode B denote the all-search and indoor-search settings, respectively. Note that only RGB (visible) images are corrupted in the corruption evaluation.

                                                                                                                                                                                                                                                                     
Mode A

| Dataset | Method | Clean mINP | Clean mAP | Clean R-1 | Corrupted mINP | Corrupted mAP | Corrupted R-1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SYSU-MM01 | AGW | 36.17 | 47.65 | 47.50 | 14.73 | 29.99 | 34.42 |
| SYSU-MM01 | CIL (ours) | 38.15 | 47.64 | 45.51 | 22.48 (1.65) | 35.92 (1.22) | 36.95 (0.67) |
| RegDB | AGW | 54.10 | 68.82 | 75.78 | 32.88 | 43.09 | 45.44 |
| RegDB | CIL (ours) | 55.68 | 69.75 | 74.96 | 38.66 (0.01) | 49.76 (0.03) | 52.25 (0.03) |

Mode B

| Dataset | Method | Clean mINP | Clean mAP | Clean R-1 | Corrupted mINP | Corrupted mAP | Corrupted R-1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SYSU-MM01 | AGW | 59.74 | 62.97 | 54.17 | 35.39 | 40.98 | 33.80 |
| SYSU-MM01 | CIL (ours) | 57.41 | 60.45 | 50.98 | 43.11 (4.19) | 48.65 (4.57) | 40.73 (5.55) |
| RegDB | AGW | 52.40 | 68.15 | 75.29 | 6.00 | 41.37 | 67.54 |
| RegDB | CIL (ours) | 55.50 | 69.21 | 74.95 | 11.94 (0.12) | 47.90 (0.01) | 67.17 (0.06) |

Recent Advances in Person Re-ID

Leaderboard

Market1501-C

(Note: ranked by mAP on corrupted test set)

| Method | Reference | Clean mINP | Clean mAP | Clean Rank-1 | Corrupted mINP | Corrupted mAP | Corrupted Rank-1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| TransReID | Shuting He et al. (2021) | 69.29 | 88.93 | 95.07 | 1.98 | 27.38 | 53.19 |
| CaceNet | Fufu Yu et al. (2020) | 70.47 | 89.82 | 95.40 | 0.67 | 18.24 | 42.92 |
| LightMBN | Fabian Herzog et al. (2021) | 73.29 | 91.54 | 96.53 | 0.50 | 14.84 | 38.68 |
| PLR-OS | Ben Xie et al. (2020) | 66.42 | 88.93 | 95.19 | 0.48 | 14.23 | 37.56 |
| RRID | Hyunjong Park et al. (2019) | 67.14 | 88.43 | 95.19 | 0.46 | 13.45 | 36.57 |
| Pyramid | Feng Zheng et al. (2018) | 61.61 | 87.50 | 94.86 | 0.36 | 12.75 | 35.72 |
| PCB | Yifan Sun et al. (2017) | 41.97 | 82.19 | 94.15 | 0.41 | 12.72 | 34.93 |
| BDB | Zuozhuo Dai et al. (2018) | 61.78 | 85.47 | 94.63 | 0.32 | 10.95 | 33.79 |
| Aligned++ | Hao Luo et al. (2019) | 47.31 | 79.10 | 91.83 | 0.32 | 10.95 | 31.00 |
| AGW | Mang Ye et al. (2020) | 65.40 | 88.10 | 95.00 | 0.30 | 10.80 | 33.40 |
| MHN | Binghui Chen et al. (2019) | 55.27 | 85.33 | 94.50 | 0.38 | 10.69 | 33.29 |
| LUPerson | Dengpan Fu et al. (2020) | 68.71 | 90.32 | 96.32 | 0.29 | 10.37 | 32.22 |
| OS-Net | Kaiyang Zhou et al. (2019) | 56.78 | 85.67 | 94.69 | 0.23 | 10.37 | 30.94 |
| VPM | Yifan Sun et al. (2019) | 50.09 | 81.43 | 93.79 | 0.31 | 10.15 | 31.17 |
| DG-Net | Zhedong Zheng et al. (2019) | 61.60 | 86.09 | 94.77 | 0.35 | 9.96 | 31.75 |
| ABD-Net | Tianlong Chen et al. (2019) | 64.72 | 87.94 | 94.98 | 0.26 | 9.81 | 29.65 |
| MGN | Guanshuo Wang et al. (2018) | 60.86 | 86.51 | 93.88 | 0.29 | 9.72 | 29.56 |
| F-LGPR | Yunpeng Gong et al. (2021) | 65.48 | 88.22 | 95.37 | 0.23 | 9.08 | 29.35 |
| TDB | Rodolfo Quispe et al. (2020) | 56.41 | 85.77 | 94.30 | 0.20 | 8.90 | 28.56 |
| LGPR | Yunpeng Gong et al. (2021) | 58.71 | 86.09 | 94.51 | 0.24 | 8.26 | 27.72 |
| BoT | Hao Luo et al. (2019) | 51.00 | 83.90 | 94.30 | 0.10 | 6.60 | 26.20 |

CUHK03-C (detected)

(Note: ranked by mAP on corrupted test set)

| Method | Reference | Clean mINP | Clean mAP | Clean Rank-1 | Corrupted mINP | Corrupted mAP | Corrupted Rank-1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CaceNet | Fufu Yu et al. (2020) | 65.22 | 75.13 | 77.64 | 2.09 | 10.62 | 17.04 |
| Pyramid | Feng Zheng et al. (2018) | 61.41 | 73.14 | 79.54 | 1.10 | 8.03 | 10.42 |
| RRID | Hyunjong Park et al. (2019) | 55.81 | 67.63 | 74.99 | 1.00 | 7.30 | 9.66 |
| PLR-OS | Ben Xie et al. (2020) | 62.72 | 74.67 | 78.14 | 0.89 | 6.49 | 10.99 |
| Aligned++ | Hao Luo et al. (2019) | 47.32 | 59.76 | 62.07 | 0.56 | 4.87 | 7.99 |
| MGN | Guanshuo Wang et al. (2018) | 51.18 | 62.73 | 69.14 | 0.46 | 4.20 | 5.44 |
| MHN | Binghui Chen et al. (2019) | 56.52 | 66.77 | 72.21 | 0.46 | 3.97 | 8.27 |

MSMT17-C (Version 2)

(Note: ranked by mAP on corrupted test set)

| Method | Reference | Clean mINP | Clean mAP | Clean Rank-1 | Corrupted mINP | Corrupted mAP | Corrupted Rank-1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| OS-Net | Kaiyang Zhou et al. (2019) | 4.05 | 40.05 | 71.86 | 0.08 | 7.86 | 28.51 |
| AGW | Mang Ye et al. (2020) | 12.38 | 51.84 | 75.21 | 0.08 | 6.53 | 22.77 |
| BoT | Hao Luo et al. (2019) | 9.91 | 48.34 | 73.53 | 0.07 | 5.28 | 20.20 |

Citation

Please cite the following paper in your publications if it helps your research:

@misc{chen2021benchmarks,
    title={Benchmarks for Corruption Invariant Person Re-identification},
    author={Minghui Chen and Zhiqiang Wang and Feng Zheng},
    year={2021},
    eprint={2111.00880},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}