Retrieval.pytorch - The code we used in the 2020 DIGIX competition

Overview

retrieval.pytorch is the image-retrieval code we used in the 2020 DIGIX competition. It trains several CNN backbones (resnet101, dla102x, fishnet99, hrnet_w18, and hrnet_w30) with a CGD head and margin loss to extract image features, and then ranks query images against the gallery in a post-processing step.

Dependencies

  • python3
  • pytorch
  • numpy
  • scikit-learn
  • tqdm
  • yacs

You can install yacs with pip; the other dependencies can be installed with conda.
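
For example, one way to set this up (exact package names and conda channels may differ for your environment):

conda install pytorch -c pytorch
conda install numpy scikit-learn tqdm
pip install yacs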

Prepare the dataset

First, download the dataset from here. Then move the archives into the directory $DATASET and decompress them with

unzip train_data.zip
unzip test_data_A.zip
unzip test_data_B.zip

Then remove the empty directories in train_data:

cd train_data
rm -rf DIGIX_001453
rm -rf DIGIX_001639
rm -rf DIGIX_002284

Finally, edit the file src/dataset/datasets.py and set the correct values for traindir, test_A_dir, and test_B_dir (replace $DATASET with the path where you decompressed the archives):

traindir = '$DATASET/train_data'
test_A_dir = '$DATASET/test_data_A'
test_B_dir = '$DATASET/test_data_B'

Train the networks to extract features

You can train dla102x and resnet101 with the commands below.

python experiments/DIGIX/dla102x/cgd_margin_loss.py
python experiments/DIGIX/resnet101/cgd_margin_loss.py

To train fishnet99, hrnet_w18, and hrnet_w30, you need to download their ImageNet pretrained weights from here: fishnet99_ckpt.tar for fishnet99, hrnetv2_w18_imagenet_pretrained.pth for hrnet_w18, and hrnetv2_w30_imagenet_pretrained.pth for hrnet_w30. Then move these weight files to ~/.cache/torch/hub/checkpoints so that torch.hub.load_state_dict_from_url can find them.
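
For example, assuming the files were downloaded to the current directory (the cache location may differ if you have changed TORCH_HOME or the torch hub directory):

mkdir -p ~/.cache/torch/hub/checkpoints
mv fishnet99_ckpt.tar hrnetv2_w18_imagenet_pretrained.pth hrnetv2_w30_imagenet_pretrained.pth ~/.cache/torch/hub/checkpoints/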

Then you can train fishnet99, hrnet_w18, and hrnet_w30 with

python experiments/DIGIX/fishnet99/cgd_margin_loss.py
python experiments/DIGIX/hrnet_w18/cgd_margin_loss.py
python experiments/DIGIX/hrnet_w30/cgd_margin_loss.py

After training, the model weights can be found in results/DIGIX/{model}/cgd_margin_loss/{time}/transient/checkpoint.final.ckpt. We also provide these weight files.
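
For instance, you can list the checkpoints produced by your runs like this (the {model} and {time} parts depend on which scripts you ran and when):

ls results/DIGIX/*/cgd_margin_loss/*/transient/checkpoint.final.ckpt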

Extract features for retrieval

You can download the pretrained models from here and move them to the pretrained directory.

Then run the commands below.

python experiments/DIGIX_test_B/dla102x/cgd_margin_loss_test_B.py
python experiments/DIGIX_test_B/resnet101/cgd_margin_loss_test_B.py
python experiments/DIGIX_test_B/fishnet99/cgd_margin_loss_test_B.py
python experiments/DIGIX_test_B/hrnet_w18/cgd_margin_loss_test_B.py
python experiments/DIGIX_test_B/hrnet_w30/cgd_margin_loss_test_B.py

When finished, the query features for test_data_B can be found in results/DIGIX_test_B/{model}/cgd_margin_loss_test_B/{time}/query_feat, and the gallery features in results/DIGIX_test_B/{model}/cgd_margin_loss_test_B/{time}/gallery_feat.
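
If you want to run the post-processing below on your own extracted features instead of the ones we provide, one option is to copy each model's query_feat and gallery_feat into its own sub-directory under features, matching the layout shown in the next section (the directory name my_dla102x is just an illustrative placeholder, and {time} must be replaced with the timestamp of your run):

mkdir -p features/my_dla102x
cp -r results/DIGIX_test_B/dla102x/cgd_margin_loss_test_B/{time}/query_feat features/my_dla102x/
cp -r results/DIGIX_test_B/dla102x/cgd_margin_loss_test_B/{time}/gallery_feat features/my_dla102x/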

Post process

You can download our extracted features from here. Then put them into the features directory and decompress them with

tar -xvf DIGIX_test_B_dla102x_5088.tar
tar -xvf DIGIX_test_B_fishnet99_5153.tar
tar -xvf DIGIX_test_B_hrnet_w18_5253.tar
tar -xvf DIGIX_test_B_hrnet_w30_5308.tar
tar -xvf DIGIX_test_B_resnet101_5059.tar

Then the features directory will be organized like this:

|-- DIGIX_test_B_dla102x_5088.tar  
|-- DIGIX_test_B_fishnet99_5153.tar  
|-- DIGIX_test_B_hrnet_w18_5253.tar  
|-- DIGIX_test_B_hrnet_w30_5308.tar  
|-- DIGIX_test_B_resnet101_5059.tar 
|-- DIGIX_test_B_dla102x_5088  
| |-- gallery_feat  
| |-- query_feat  
|-- DIGIX_test_B_fishnet99_5153  
| |-- gallery_feat  
| |-- query_feat  
|-- DIGIX_test_B_hrnet_w18_5253  
| |-- gallery_feat  
| |-- query_feat  
|-- DIGIX_test_B_hrnet_w30_5308  
| |-- gallery_feat  
| |-- query_feat  
|-- DIGIX_test_B_resnet101_5059  
| |-- gallery_feat  
| |-- query_feat  

Now the post-processing can be executed with

python post_process/rank.py --gpu 0 features/DIGIX_test_B_fishnet99_5153 features/DIGIX_test_B_dla102x_5088 features/DIGIX_test_B_hrnet_w18_5253 features/DIGIX_test_B_hrnet_w30_5308 features/DIGIX_test_B_resnet101_5059