Interpretable and Generalizable Person Re-Identification with Query-Adaptive Convolution and Temporal Lifting

Overview

QAConv: Interpretable and Generalizable Person Re-Identification with Query-Adaptive Convolution and Temporal Lifting

This PyTorch code implements the method proposed in our paper [1]. A Chinese blog post is also available: 再见,迁移学习?可解释和泛化的行人再辨识 ("Goodbye, Transfer Learning? Interpretable and Generalizable Person Re-Identification").

Updates

  • 9/19/2021: Include TransMatcher, a Transformer-based deep image matching method built on QAConv 2.0.
  • 9/16/2021: QAConv 2.1: simplified graph sampling, an Einstein-summation implementation of QAConv matching, the batch-hard triplet loss, an adaptive epoch and learning-rate scheduling method, and automatic mixed-precision training (a sketch of the einsum-style matching is given after this list).
  • 4/1/2021: QAConv 2.0 [2]: introduces a new sampler called the Graph Sampler (GS) and removes the class memory. This version is much more efficient to train. See the updated results.
  • 3/31/2021: QAConv 1.2: adds some popular data augmentation methods and reverts the ranking.py implementation to the original open-reid version, so that it is more consistent with most other implementations (e.g. open-reid, torch-reid, fast-reid).
  • 2/7/2021: QAConv 1.1: an important update that includes a pre-training function for better initialization, so that the results are now more stable.
  • 11/26/2020: Adds IBN-Net as a backbone option, and support for the RandPerson dataset.
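
A minimal sketch of the einsum-style local matching idea behind QAConv is given below, for readers who want a feel for what "Einstein summation for QAConv" means. This is an illustrative re-statement under assumptions (the function name and the normalization/aggregation choices are ours), not the repository's exact implementation:

    import torch

    def local_match_score(query_feat, gallery_feat):
        # query_feat, gallery_feat: (C, H, W) feature maps, assumed L2-normalized per location
        c, h, w = query_feat.shape
        q = query_feat.reshape(c, h * w)        # (C, HW)
        g = gallery_feat.reshape(c, h * w)      # (C, HW)
        # All-pairs local similarities via an Einstein summation (a plain matrix product here)
        sim = torch.einsum('ci,cj->ij', q, g)   # (HW, HW)
        # Keep the best gallery match for each query location and vice versa, then average
        score = 0.5 * (sim.max(dim=1).values.mean() + sim.max(dim=0).values.mean())
        return score

With a single-scale kernel (s=1, as also discussed in the comments below), this matching reduces to the matrix product above, which is what makes it fast on GPUs.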

Requirements

  • PyTorch (>1.0)
  • sklearn
  • scipy

Usage

Download some public datasets (e.g. Market-1501, CUHK03-NP, MSMT17) on your own, extract them into a folder, and then run the following commands.

Training and test

python main.py --dataset market --testset cuhk03_np_detected[,msmt] [--data-dir ./data] [--exp-dir ./Exp]

For more options, run "python main.py --help". For example, to use ResNet-152 as the backbone, specify "-a resnet152". To train on the whole dataset (as done in our paper for MSMT17), specify "--combine_all".
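
For example, an illustrative combination of these options (the exact flag combination is only an example, following the spellings above) could be:

python main.py --dataset msmt --combine_all -a resnet152 --testset market,cuhk03_np_detected [--data-dir ./data] [--exp-dir ./Exp]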

With the GS sampler and pairwise matching loss, run the following:

python main_gs.py --dataset market --testset cuhk03_np_detected[,msmt] [--data-dir ./data] [--exp-dir ./Exp]

Test only

python main.py --dataset market --testset duke[,market,msmt] [--data-dir ./data] [--exp-dir ./Exp] --evaluate

Performance

Performance (%) of QAConv under direct cross-dataset evaluation, without transfer learning or domain adaptation:

| Training Data | Version    | Training Hours | CUHK03-NP Rank-1 / mAP | Market-1501 Rank-1 / mAP | MSMT17 Rank-1 / mAP |
|---------------|------------|----------------|------------------------|--------------------------|---------------------|
| Market        | QAConv 1.0 | 1.33           | 9.9 / 8.6              | - / -                    | 22.6 / 7.0          |
| Market        | QAConv 2.1 | 0.25           | 19.1 / 18.1            | - / -                    | 45.9 / 17.2         |
| MSMT          | QAConv 2.1 | 0.73           | 20.9 / 20.6            | 79.1 / 49.5              | - / -               |
| MSMT (all)    | QAConv 1.0 | 26.90          | 25.3 / 22.6            | 72.6 / 43.1              | - / -               |
| MSMT (all)    | QAConv 2.1 | 3.42           | 27.6 / 28.0            | 82.4 / 56.9              | - / -               |
| RandPerson    | QAConv 2.1 | 2.33           | 17.9 / 16.1            | 75.9 / 46.3              | 44.1 / 15.2         |

Contacts

Shengcai Liao
Inception Institute of Artificial Intelligence (IIAI)
[email protected]

Citation

[1] Shengcai Liao and Ling Shao, "Interpretable and Generalizable Person Re-Identification with Query-Adaptive Convolution and Temporal Lifting." In the 16th European Conference on Computer Vision (ECCV), 23-28 August, 2020.

[2] Shengcai Liao and Ling Shao, "Graph Sampling Based Deep Metric Learning for Generalizable Person Re-Identification." In arXiv preprint, arXiv:2104.01546, 2021.

@inproceedings{Liao-ECCV2020-QAConv,  
  title={{Interpretable and Generalizable Person Re-Identification with Query-Adaptive Convolution and Temporal Lifting}},  
  author={Shengcai Liao and Ling Shao},  
  booktitle={European Conference on Computer Vision (ECCV)},  
  year={2020}  
}

@article{Liao-arXiv2021-GS,
  author    = {Shengcai Liao and Ling Shao},
  title     = {{Graph Sampling Based Deep Metric Learning for Generalizable Person Re-Identification}},
  journal   = {CoRR},
  volume    = {abs/2104.01546},
  year      = {2021},
  url       = {http://arxiv.org/abs/2104.01546},
  archivePrefix = {arXiv},
  eprint    = {2104.01546}
}
Comments
  • Out of memory, with --test_fea_batch, --test_gal_batch, and --test_prob_batch all set to 128


    main.py --dataset market --testset msmt --data-dir ./reid/datasets/ --exp-dir ./Exp

    fpaths: ./reid/datasets/market/bounding_box_train/1500_c6s3_086567_01.jpg
    fpaths: ./reid/datasets/market/bounding_box_test/1501_c6s4_001902_01.jpg
    fpaths: ./reid/datasets/market/query/1501_c6s4_001877_00.jpg
    Market dataset loaded
    subset  | # ids | # images
    train   | 751   | 12935
    query   | 750   | 3367
    gallery | 751   | 15912

    • Finished epoch 1 at lr=[0.0005, 0.005, 0.005]. Loss: 14.812. Acc: 54.97%. Training time: 174 seconds.

    • Finished epoch 2 at lr=[0.0005, 0.005, 0.005]. Loss: 13.333. Acc: 61.35%. Training time: 344 seconds.

    • Finished epoch 3 at lr=[0.0005, 0.005, 0.005]. Loss: 11.447. Acc: 68.55%. Training time: 514 seconds.

    • Finished epoch 4 at lr=[0.0005, 0.005, 0.005]. Loss: 10.338. Acc: 72.09%. Training time: 684 seconds.

    • Finished epoch 5 at lr=[0.0005, 0.005, 0.005]. Loss: 9.319. Acc: 75.31%. Training time: 855 seconds.

    Decay the learning rate by a factor of 0.1. Final epochs: 7.

    • Finished epoch 6 at lr=[5e-05, 0.0005, 0.0005]. Loss: 8.566. Acc: 77.75%. Training time: 1025 seconds.

    • Finished epoch 7 at lr=[5e-05, 0.0005, 0.0005]. Loss: 7.732. Acc: 80.22%. Training time: 1195 seconds.

    The learning converges at epoch 7.

    Evaluate the learned model: test_names: ['msmt']
    MSMT dataset loaded
    subset  | # ids | # images
    train   | 1041  | 32621
    query   | 3060  | 11659
    gallery | 3060  | 82161
    /home/luotao/anaconda3/envs/QAConv/lib/python3.6/site-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
    Time: 2690.337 seconds. / 1284. similarity 1 / 1284.
    Killed (已杀死)

    run: python main.py --dataset market --testset msmt --data-dir ./reid/datasets/ --exp-dir ./Exp
    With --test_fea_batch, --test_gal_batch, and --test_prob_batch all set to 128: Time: xx seconds, /xx similarity xx/xx. Killed (已杀死).
    With those three parameters set to 64, the error is: Time: 2690.337 seconds. / 1284. similarity 1 / 1284. Killed (已杀死).
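
    A lower-memory test run would pass smaller values to these flags, for example (the flags are the ones shown in the logs above; the values below are illustrative only):

    python main.py --dataset market --testset msmt --data-dir ./reid/datasets/ --exp-dir ./Exp --evaluate --test_fea_batch 32 --test_gal_batch 32 --test_prob_batch 32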

    opened by huangpan2507 16
  • Unstable results


    Hi, thanks for sharing your code. However, I ran your code twice and got quite different results, maybe due to the random seed? Did you set a fixed random seed when training the model?
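
    One common source of such variation is the random seed; a minimal sketch of fixing the seeds in PyTorch is shown below (this is not part of the repository's code, and it does not guarantee full determinism of every CUDA kernel):

        import random

        import numpy as np
        import torch

        def set_seed(seed=42):
            # Fix the Python, NumPy, and PyTorch RNGs for more repeatable runs
            random.seed(seed)
            np.random.seed(seed)
            torch.manual_seed(seed)
            torch.cuda.manual_seed_all(seed)
            # Trade some speed for determinism in cuDNN kernel selection
            torch.backends.cudnn.deterministic = True
            torch.backends.cudnn.benchmark = False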

    good first issue 
    opened by HeliosZhao 10
  • Training is very slow!


    Hi, I am training on 300K images with two 2080 Ti GPUs, a batch size of 64, and fp16 to speed up training, but one epoch took more than half an hour to reach iteration 511. Is this normal? Epoch: [1][511/4714] Time 2.620 (2.646) Data 0.001 (0.002) Loss 456.984 (520.544) Prec 0.00% (0.00%)

    opened by zengwb-lx 6
  • Unable to use ClassMemoryLoss to train the model


    In the QAConv code, I tried to change the loss function to ClassMemoryLoss as the criterion, but the accuracy is nearly zero. Is ClassMemoryLoss still usable? Are ClassMemoryLoss and the focal loss in the paper the same? The code is shown below.

    criterion = ClassMemoryLoss(matcher, num_classes, num_features, hei, wid).cuda()

    opened by ArminLee 4
  • Question about backbone


    Hi Mr. Liao, I very much appreciate your novel idea and your code, and I notice that you chose ResNet as the backbone. ResNet-152 has shown great results in the paper and in my own experiments, but it takes quite some time to train, even when only layer 3 of the model is used. Have you tried a lightweight backbone such as MobileNet? Is there a specific reason for choosing ResNet as the feature extractor? Thanks in advance.

    opened by jingyut 4
  • A question about graph sampling


    Hello Prof. Liao, I would like to understand why graph sampling brings such a clear improvement for domain-generalized re-id. Previous domain-generalized re-id methods usually rely on domain-invariant learning, style normalization, and similar techniques, but graph sampling seems to follow a different idea, improving domain generalization by strengthening hard mining. I do not quite understand this point and look forward to your reply. Thank you!

    opened by Terminator8758 3
  • Graph Sampler


    Thank you for your work! I have two questions about Graph Sampling:

    1. Intuitively, it should also work on the normal ReID task.
    2. The whole process seems to be: before each training epoch, the proposed sampler randomly selects one image per class, then computes a distance matrix over these images, which represents the distances between classes, so the hardest samples can be mined over the entire dataset rather than within a batch. But I did not get how "Graph" is connected to this process (a rough sketch of this understanding is given below). Looking forward to your help.
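
    A rough sketch of the per-epoch procedure described in question 2, with hypothetical names (embed_fn, build_hard_batches); this is an illustration of the idea, not the repository's actual Graph Sampler implementation:

        import random

        import torch

        def build_hard_batches(class_images, embed_fn, classes_per_batch=16, imgs_per_class=4):
            # 1. Randomly pick one image per class and embed it; embed_fn is assumed to
            #    map a list of images to an (N, D) feature tensor.
            anchors = [random.choice(imgs) for imgs in class_images]
            feats = embed_fn(anchors)                      # (num_classes, D)
            dist = torch.cdist(feats, feats)               # class-to-class distances
            dist.fill_diagonal_(float('inf'))
            # 2. Treating classes as graph nodes, each class is linked to its nearest
            #    (hardest) neighbour classes, and one mini-batch is built per node.
            batches = []
            for c in range(len(class_images)):
                nearest = dist[c].topk(classes_per_batch - 1, largest=False).indices.tolist()
                chosen = [c] + nearest
                # Assumes every class has at least imgs_per_class images.
                batch = [img for cls in chosen for img in random.sample(class_images[cls], imgs_per_class)]
                batches.append(batch)
            return batches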
    opened by liyuke65535 3
  • About s=1


    Hello Prof. Liao, I would like to ask about the choice of s. Your paper mentions that s=1 is chosen for efficiency. My understanding is that, when pairwise matching is used instead of the class memory, one QAConv matching has a time complexity of O(B^2 * (HW)^2 * s). Judging from the time complexity alone, a slightly larger or smaller s should not make much difference. However, when s=1 the matching can be computed directly as a matrix multiplication, and since matrix multiplication is heavily optimized, the actual running time is greatly reduced, which is why s=1 is used in the end. I am not sure whether my understanding is correct and would appreciate your advice!

    opened by pSGAme 2
  • Questions about the Graph Sampling work


    Hello Prof. Liao, I have read your recent Graph Sampling paper, and there are two points I am not clear about. I would appreciate your guidance:

    1. When the graph is built at each epoch, would randomly sampling a single image per class introduce a relatively large bias?
    2. When K=2 is used to handle the problem of overly small gradients, could some hard cases never be sampled at all? (I have run into this before on industrial datasets much larger than academic ones, where hard cases could not be sampled.)
    opened by zhustrong 2
  • Issues about evaluators.py


    I use Market as the training dataset and Duke as the test dataset. When I use --do_tlift, it reports that the tensor sizes do not match.

    In evaluators.py, at line 212, the size of the original dist is 222817661 in the Market dataset, while the size of dist_rerank is 2228253, because num_gal differs: num_gal is defined at line 189 as the number of gallery images, but it is redefined at line 204 as the size of the gallery features.

    opened by ArminLee 2
  • self.model.eval()


    Recently I have been reading your QAConv code, and I have a question to consult you about. The train() method in trainer.py contains the following code:

    class BaseTrainer(object):
        ...
        for i, inputs in enumerate(data_loader):
            self.model.eval()
            self.criterion.train()

    Why don't you set the model to train mode with self.model.train(), instead of calling self.model.eval()? I also could not find any other place in the whole project where model.train() is used.

    opened by xiaopanchen 2
  • Can't find qaconv_loss


    Hello,

    First of all, thanks so much for your good work!

    Here is a question: in test_matching.py, you import "from reid.loss.qaconv_loss import QAConvLoss"; however, it seems that qaconv_loss is no longer in the repository, so I changed to other loss functions. Will this influence the performance?

    Thanks!

    opened by xyimaging 5
Releases(v2.1)
  • v2.1(Sep 16, 2021)

    • Simplified graph sampling
    • Einstein summation for QAConv
    • Hard triplet loss
    • Adaptive epoch and learning rate scheduling
    • Automatic mixed precision training
  • v2.0(Apr 1, 2021)

    • Include a new sampler called Graph Sampler (GS).
    • Remove the class memory based loss. Instead, a pairwise matching loss is implemented.
    • This version is much more efficient in learning.
  • v1.2(Mar 31, 2021)

    Includes some popular data augmentation methods, and changes the ranking.py implementation to the original open-reid version, so that it is more consistent with most other implementations (e.g. open-reid, torch-reid, fast-reid).

  • v1.1(Mar 30, 2021)

    • Includes IBN-Net as a backbone option, and the RandPerson dataset.
    • Includes a pre-training function for better initialization, so that the results are now more stable.
  • v1.0-eccv(Aug 12, 2020)

Owner
Shengcai Liao
Lead Scientist, Ph.D. Inception Institute of Artificial Intelligence