Google_Landmark_Retrieval_2021_2nd_Place_Solution

The 2nd place solution of 2021 google landmark retrieval on kaggle.

Environment

We use CUDA 11.1 / Python 3.7 / torch 1.9.1 / torchvision 0.8.1 for training and testing.

Download the ImageNet-pretrained models ResNeXt101ibn and SEResNet101ibn from IBN-Net. ResNeSt101 and ResNeSt269 can be found in ResNeSt.

Prepare data

  1. Download the full version of GLDv2 from the official site.

  2. Run python tools/generate_gld_list.py. This generates the clean, c2x, trainfull and all data lists used at different stages of training.

  3. Validation annotations come from all 1129 images in GLDv2. We expand the competition index set to index_expand, so each query can find all of its ground truths in the expanded index set, making validation more accurate (see the mAP sketch below).
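
For reference, here is a minimal sketch of the offline validation metric, mean average precision at 100 (mAP@100); the function names are ours for illustration, not code from this repo:

import numpy as np

def average_precision_at_k(ranked_ids, gt_ids, k=100):
    # AP@k for one query: ranked_ids lists retrieved index-image ids in rank
    # order; gt_ids is the set of ground-truth ids for this query.
    if not gt_ids:
        return 0.0
    hits, score = 0, 0.0
    for rank, idx in enumerate(ranked_ids[:k]):
        if idx in gt_ids:
            hits += 1
            score += hits / (rank + 1)  # precision at this rank
    return score / min(len(gt_ids), k)

def mean_average_precision(rankings, gts, k=100):
    # mAP@k averaged over all queries.
    return float(np.mean([average_precision_at_k(r, g, k)
                          for r, g in zip(rankings, gts)]))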

Train

We use 8 GPUs (32 GB or 16 GB) for training. The evaluation metric in landmark retrieval differs from that in person re-identification. Due to the validation scale, we skip the validation stage during training and simply use the model from the last epoch for evaluation.

Fast Train Script

For quick experiments, we provide a script for R50_256 trained on the clean subset. This setting trains very fast and is helpful for debugging.

python -m torch.distributed.run --standalone --nnodes=1 --nproc_per_node=8 --master_port 55555 --max_restarts 0 train.py --config_file configs/GLDv2/R50_256.yml

Whole Train Pipeline

The whole training pipeline for the SER101ibn backbone is listed below. Other backbones and input sizes can be configured accordingly.

python -m torch.distributed.run --standalone --nnodes=1 --nproc_per_node=8 --master_port 55555 --max_restarts 0 train.py --config_file configs/GLDv2/SER101ibn_384.yml
python -m torch.distributed.run --standalone --nnodes=1 --nproc_per_node=8 --master_port 55555 --max_restarts 0 train.py --config_file configs/GLDv2/SER101ibn_384_finetune.yml
python -m torch.distributed.run --standalone --nnodes=1 --nproc_per_node=8 --master_port 55555 --max_restarts 0 train.py --config_file configs/GLDv2/SER101ibn_512_finetune.yml
python -m torch.distributed.run --standalone --nnodes=1 --nproc_per_node=8 --master_port 55555 --max_restarts 0 train.py --config_file configs/GLDv2/SER101ibn_512_all.yml

Inference (notebooks)

  • With the four models trained, cd to submission/code/ and adjust the settings in landmark_retrieval.py accordingly.

  • Then run eval_retrieval.sh (as shown below) to generate the submission file and evaluate on the validation set offline.
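
For example (assuming a POSIX shell):

cd submission/code/
sh eval_retrieval.sh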

General Settings

REID_EXTRACT_FLAG: skip feature extraction when using the offline code.
FEAT_DIR: directory where cached features are saved.
IMAGE_DIR: competition image directory. We make a soft link to the competition data at submission/input/landmark-retrieval-2021/.
RAW_IMAGE_DIR: original GLDv2 directory.
MODEL_DIR: the latest models for submission.
META_DIR: directory of meta files used for reranking.
LOCAL_MATCHING and KR_FLAG: disabled for our submission.
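
A minimal sketch of how such a settings block might look in landmark_retrieval.py; the flag names follow this README, but the values and paths are illustrative assumptions rather than the repo's defaults:

# Flag names follow the README above; values and paths are placeholders.
REID_EXTRACT_FLAG = True     # set False to skip feature extraction offline
FEAT_DIR = "./feats/"                            # where cached features are saved
IMAGE_DIR = "../input/landmark-retrieval-2021/"  # competition images (soft link)
RAW_IMAGE_DIR = "/data/GLDv2/"                   # original GLDv2 images
MODEL_DIR = "./models/"                          # latest models for submission
META_DIR = "../input/meta-data-final/"           # meta files for reranking
LOCAL_MATCHING = False       # disabled for our submission
KR_FLAG = False              # disabled for our submission
CATEGORY_RERANK = False      # or "before_merge"; see the inference sections below
REF_SET_EXTRACT = False      # True extracts features for all GLDv2 images (see below)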

Fast Inference Script

Use the R50_256 model trained on the clean subset, corresponding to the fast train script above. Set CATEGORY_RERANK and REF_SET_EXTRACT to False. You should get about mAP = 32.84% on the validation set.

Whole Inference Pipeline

  • Copy cache_all_list.pkl, cache_index_train_list.pkl and cache_full_list.pkl from cache to submission/input/meta-data-final.

  • Set REF_SET_EXTRACT to True to extract features for all images of GLDv2. This saves about 4.9 million 512-dim features per model in submission/input/meta-data-final.

  • Set REF_SET_EXTRACT to False and CATEGORY_RERANK to before_merge. This loads the precomputed features and runs the proposed Landmark-Country aware rerank (a sketch of the plain similarity search, without reranking, follows this list).

  • The notebook on Kaggle contains exactly the same code as base_landmark.py and landmark_retrieval.py. We also upload the same notebook as kaggle.ipynb.
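
For reference, a minimal sketch of the plain similarity search once features are cached; the file names, shapes, and I/O are illustrative assumptions, not the repo's actual code:

import numpy as np

# Illustrative file names; query_feats: (num_queries, 512), index_feats: (num_index, 512).
query_feats = np.load("query_feats.npy")
index_feats = np.load("index_feats.npy")

def l2_normalize(x, eps=1e-12):
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

# Cosine similarity via dot products of L2-normalized features,
# then keep the top-100 index images for each query.
sims = l2_normalize(query_feats) @ l2_normalize(index_feats).T
top100 = np.argsort(-sims, axis=1)[:, :100]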

Kaggle and ICCV workshops

  • The challenge was held on Kaggle and the leaderboard can be found here. We ranked 2nd (2/263) in this challenge.

  • The Kaggle discussion post can be found here.

  • ICCV workshop slides coming soon.

Thanks

The code is inspired by AICITY2021_Track2_DMT, 2020_1st_recognition_solution, 2020_2nd_recognition_solution, and 2020_1st_retrieval_solution.

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{zhang2021landmark,
  title={2nd Place Solution to Google Landmark Retrieval 2021},
  author={Zhang, Yuqi and Xu, Xianzhe and Chen, Weihua and Wang, Yaohua and Zhang, Fangyi},
  year={2021}
}