PyTorch implementation for "Large-Scale Long-Tailed Recognition in an Open World" (CVPR 2019 ORAL)


Large-Scale Long-Tailed Recognition in an Open World

[Project] [Paper] [Blog]

Overview

Open Long-Tailed Recognition (OLTR) is the author's re-implementation of the long-tail recognizer described in:
"Large-Scale Long-Tailed Recognition in an Open World"
Ziwei Liu*, Zhongqi Miao*, Xiaohang Zhan, Jiayun Wang, Boqing Gong, Stella X. Yu (CUHK & UC Berkeley / ICSI), in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019, Oral Presentation

For further information, please contact Zhongqi Miao and Ziwei Liu.

Update notifications

  • 03/04/2020: We renamed all variables named selfatt to modulatedatt so that the attention module can be properly trained in the second stage for Places-LT. ImageNet-LT does not have this problem since its weights are not frozen. We have updated the results produced by the fixed code, which are still better than those reported in the paper, and the released weights have been updated as well (a small key-remapping sketch for older checkpoints follows this list). Thanks!
  • 02/11/2020: We updated the configuration files for the Places_LT dataset. The current results are slightly higher than reported, even with the updated F-measure calculation. One important change is that we have unfrozen the model weights for the first-stage training of Places-LT, which means it is not suitable for single-GPU training in most cases (we used four 1080Ti GPUs in our implementation). However, for the second stage, since the memory and center loss currently do not support multiple GPUs, please switch back to single-GPU training. Thank you very much!
  • 01/29/2020: We updated the false-positive calculation in utils.py so that the numbers are correct again. The F-measure numbers reported in the paper might be slightly higher than the actual numbers for all baselines; we will update them as soon as possible. The new F-measure numbers have been added to the table below. Thanks.
  • 12/19/2019: Updated modules with 'clone()' methods and set use_fc in the ImageNet-LT stage-1 config to False. The current results for ImageNet-LT are comparable to (slightly better than) the numbers reported in the paper, and the reproduced results are updated below. We also found a bug in Places-LT; we will update the code and reproduced results as soon as possible.
  • 08/05/2019: Fixed a bug in utils.py and updated the re-implemented ImageNet-LT weights at the end of this page.
  • 05/02/2019: Fixed a bug in run_network.py so the models train properly. Updated the configuration file for ImageNet-LT stage 1 training so that the results from the paper can be reproduced.
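
If you still have a checkpoint saved before the 03/04/2020 renaming, a minimal key-remapping sketch like the one below can convert its selfatt parameter names to modulatedatt. The helper name and the assumption that the file stores a flat state_dict are ours, not part of the repository; adjust it to your checkpoint layout.

import torch

def remap_attention_keys(old_path, new_path):
    # Assumes the checkpoint is a flat state_dict; nested layouts need unwrapping first.
    state_dict = torch.load(old_path, map_location='cpu')
    remapped = {key.replace('selfatt', 'modulatedatt'): value
                for key, value in state_dict.items()}
    torch.save(remapped, new_path)

# Example with hypothetical paths:
# remap_attention_keys('./logs/old_stage2_checkpoint.pth', './logs/old_stage2_remapped.pth')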

Requirements

Data Preparation

NOTE: The Places-LT dataset has been updated since the first version. Please download it again if you have the first version.

  • First, please download ImageNet_2014 and Places_365 (the 256x256 version), and change data_root in main.py accordingly (a sketch of this change follows the directory layout below).

  • Next, please download ImageNet-LT and Places-LT from here, and put the downloaded files into the data directory like this:

data
  |--ImageNet_LT
    |--ImageNet_LT_open
    |--ImageNet_LT_train.txt
    |--ImageNet_LT_test.txt
    |--ImageNet_LT_val.txt
    |--ImageNet_LT_open.txt
  |--Places_LT
    |--Places_LT_open
    |--Places_LT_train.txt
    |--Places_LT_test.txt
    |--Places_LT_val.txt
    |--Places_LT_open.txt
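
The data_root change mentioned in the first step might look roughly like the sketch below. The exact layout of main.py may differ from this, and the paths are placeholders for your own download locations.

# Placeholder paths; point these at your ImageNet_2014 and Places_365 (256x256) folders.
data_root = {
    'ImageNet': '/path/to/ImageNet_2014',
    'Places': '/path/to/Places_365_256x256',
}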

Download Caffe Pre-trained Models for Places_LT Stage_1 Training

  • Caffe-pretrained ResNet-152 weights can be downloaded from here; please save the file to ./logs/caffe_resnet152.pth
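
As an optional sanity check (not part of the repository's scripts), you can confirm that the downloaded file deserializes as a PyTorch state_dict before starting stage 1 training:

import torch

weights = torch.load('./logs/caffe_resnet152.pth', map_location='cpu')
print(type(weights))                         # expected: a dict-like state_dict
if hasattr(weights, 'keys'):
    print(len(weights), 'parameter entries')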

Getting Started (Training & Testing)

ImageNet-LT

  • Stage 1 training:
python main.py --config ./config/ImageNet_LT/stage_1.py
  • Stage 2 training:
python main.py --config ./config/ImageNet_LT/stage_2_meta_embedding.py
  • Closed-set testing:
python main.py --config ./config/ImageNet_LT/stage_2_meta_embedding.py --test
  • Open-set testing (thresholding; see the illustrative sketch after this list):
python main.py --config ./config/ImageNet_LT/stage_2_meta_embedding.py --test_open
  • Testing the stage 1 model:
python main.py --config ./config/ImageNet_LT/stage_1.py --test
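
For context on the --test_open option above, the sketch below illustrates the general idea of open-set thresholding: samples whose maximum class probability falls below a threshold are rejected as open-set. This is only an illustration under our own assumptions; the repository's evaluation code may use a different threshold and bookkeeping.

import torch
import torch.nn.functional as F

def threshold_open_set(logits, threshold=0.1, open_label=-1):
    # logits: (N, C) tensor of classifier outputs for N test samples.
    probs = F.softmax(logits, dim=1)
    max_probs, preds = probs.max(dim=1)
    preds[max_probs < threshold] = open_label  # low-confidence samples are treated as unknown
    return preds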

Places-LT

  • Stage 1 training (at this stage, multi-GPU training might be necessary since we are fine-tuning a ResNet-152; a minimal multi-GPU wrapping sketch follows this list):
python main.py --config ./config/Places_LT/stage_1.py
  • Stage 2 training (at this stage, only single-GPU training is supported, so please switch back to a single GPU):
python main.py --config ./config/Places_LT/stage_2_meta_embedding.py
  • Closed-set testing:
python main.py --config ./config/Places_LT/stage_2_meta_embedding.py --test
  • Open-set testing (thresholding):
python main.py --config ./config/Places_LT/stage_2_meta_embedding.py --test_open
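
A minimal sketch of the multi-GPU wrapping needed for stage 1, using a stand-in torchvision ResNet-152 in place of the repository's backbone. The repository's training script handles this through its config files, so this is only to illustrate the idea.

import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet152()     # stand-in for the repository's ResNet-152 backbone
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)         # replicate the model across all visible GPUs
if torch.cuda.is_available():
    model = model.cuda()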

Reproduced Benchmarks and Model Zoo (Updated on 03/05/2020)

ImageNet-LT Open-Set Setting

Backbone  | Many-Shot | Medium-Shot | Few-Shot | F-Measure | Download
ResNet-10 | 44.2      | 35.2        | 17.5     | 44.6      | model

Places-LT Open-Set Setting

Backbone   | Many-Shot | Medium-Shot | Few-Shot | F-Measure | Download
ResNet-152 | 43.7      | 40.2        | 28.0     | 50.0      | model
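
The F-Measure column refers to the open-set F-measure. As a simplified illustration (not necessarily the exact definition used in utils.py), a binary open-vs-closed detection F-measure is the harmonic mean of precision and recall for detecting open-set samples:

def open_set_f_measure(preds, labels, open_label=-1):
    # preds/labels: sequences of class indices, with open_label marking open-set samples.
    tp = sum(p == open_label and l == open_label for p, l in zip(preds, labels))
    fp = sum(p == open_label and l != open_label for p, l in zip(preds, labels))
    fn = sum(p != open_label and l == open_label for p, l in zip(preds, labels))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)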

CAUTION

The current code was prepared for single-GPU training. Using multiple GPUs can cause problems, except for the first stage of Places-LT training.

License and Citation

This software is released under the BSD 3-Clause license.

@inproceedings{openlongtailrecognition,
  title={Large-Scale Long-Tailed Recognition in an Open World},
  author={Liu, Ziwei and Miao, Zhongqi and Zhan, Xiaohang and Wang, Jiayun and Gong, Boqing and Yu, Stella X.},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019}
}