Classifier-Balancing

This repository contains code for the paper:

Decoupling Representation and Classifier for Long-Tailed Recognition
Bingyi Kang, Saining Xie, Marcus Rohrbach, Zhicheng Yan, Albert Gordo, Jiashi Feng, Yannis Kalantidis
[OpenReview] [Arxiv] [PDF] [Slides] [@ICLR]
Facebook AI Research, National University of Singapore
International Conference on Learning Representations (ICLR), 2020

Abstract

The long-tail distribution of the visual world poses great challenges for deep learning based classification models on how to handle the class imbalance problem. Existing solutions usually involve class-balancing strategies, e.g., by loss re-weighting, data re-sampling, or transfer learning from head- to tail-classes, but all of them adhere to the scheme of jointly learning representations and classifiers. In this work, we decouple the learning procedure into representation learning and classification, and systematically explore how different balancing strategies affect them for long-tailed recognition. The findings are surprising: (1) data imbalance might not be an issue in learning high-quality representations; (2) with representations learned with the simplest instance-balanced (natural) sampling, it is also possible to achieve strong long-tailed recognition ability with relative ease by adjusting only the classifier. We conduct extensive experiments and set new state-of-the-art performance on common long-tailed benchmarks like ImageNet-LT, Places-LT and iNaturalist, showing that it is possible to outperform carefully designed losses, sampling strategies, even complex modules with memory, by using a straightforward approach that decouples representation and classification.

If you find this code useful, consider citing our work:

@inproceedings{kang2019decoupling,
  title={Decoupling representation and classifier for long-tailed recognition},
  author={Kang, Bingyi and Xie, Saining and Rohrbach, Marcus and Yan, Zhicheng
          and Gordo, Albert and Feng, Jiashi and Kalantidis, Yannis},
  booktitle={Eighth International Conference on Learning Representations (ICLR)},
  year={2020}
}

Requirements

The code is based on https://github.com/zhmiao/OpenLongTailRecognition-OLTR.

Dataset

  • ImageNet_LT and Places_LT

    Download the ImageNet_2014 and Places_365.

  • iNaturalist 2018

    • Download the dataset following the instructions here.
    • cd data/iNaturalist18, then generate image name files with this script or use the existing ones [here].

Change the data_root in main.py accordingly.
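
For reference, here is a minimal sketch of the kind of data_root mapping to edit in main.py — the variable layout in the actual script may differ, and the paths below are placeholders for your local copies:

# Hypothetical illustration: point each dataset name at your local path
data_root = {'ImageNet': '/path/to/ImageNet_2014',
             'Places': '/path/to/Places_365',
             'iNaturalist18': '/path/to/iNaturalist18'}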

Representation Learning

  1. Instance-balanced Sampling
python main.py --cfg ./config/ImageNet_LT/feat_uniform.yaml
  2. Class-balanced Sampling
python main.py --cfg ./config/ImageNet_LT/feat_balance.yaml
  3. Square-root Sampling
python main.py --cfg ./config/ImageNet_LT/feat_squareroot.yaml
  4. Progressively-balanced Sampling
python main.py --cfg ./config/ImageNet_LT/feat_shift.yaml
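
All four strategies can be viewed as sampling class j with probability p_j ∝ n_j^q, where n_j is the number of training instances of class j: q=1 gives instance-balanced, q=0 class-balanced, and q=1/2 square-root sampling, while progressively-balanced sampling interpolates from the instance-balanced to the class-balanced distribution over the course of training. A minimal NumPy sketch of these per-class probabilities (illustrative only, not the repo's sampler code):

import numpy as np

def class_probs(counts, q):
    # p_j ∝ n_j^q: q=1 instance-balanced, q=0 class-balanced, q=0.5 square-root
    p = np.asarray(counts, dtype=float) ** q
    return p / p.sum()

def progressive_probs(counts, epoch, total_epochs):
    # Linearly shift from instance-balanced (q=1) to class-balanced (q=0)
    t = epoch / total_epochs
    return (1 - t) * class_probs(counts, 1.0) + t * class_probs(counts, 0.0)

counts = [5000, 500, 5]                   # toy long-tailed class sizes
print(class_probs(counts, 0.5))           # square-root sampling probabilities
print(progressive_probs(counts, 45, 90))  # halfway through a 90-epoch run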

To test the jointly learned classifier from the representation learning stage:

python main.py --cfg ./config/ImageNet_LT/feat_uniform.yaml --test 

Classifier Learning

  1. Nearest Class Mean classifier (NCM)
python main.py --cfg ./config/ImageNet_LT/feat_uniform.yaml --test --knn
  2. Classifier Re-training (cRT)
python main.py --cfg ./config/ImageNet_LT/cls_crt.yaml --model_dir ./logs/ImageNet_LT/models/resnext50_uniform_e90
python main.py --cfg ./config/ImageNet_LT/cls_crt.yaml --test
  3. Tau-normalization

Extract features

for split in train_split val test
do
  python main.py --cfg ./config/ImageNet_LT/feat_uniform.yaml --test --save_feat $split
done

Evaluation

for split in train val test
do
  python tau_norm.py --root ./logs/ImageNet_LT/models/resnext50_uniform_e90/ --type $split
done
  4. Learnable weight scaling (LWS)
python main.py --cfg ./config/ImageNet_LT/cls_lws.yaml --model_dir ./logs/ImageNet_LT/models/resnext50_uniform_e90
python main.py --cfg ./config/ImageNet_LT/cls_lws.yaml --test
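
Of these, NCM and tau-normalization require no extra training: NCM assigns a sample to the class whose mean training feature is most similar (e.g., by cosine similarity), and tau-normalization rescales each classifier weight vector w_i by ||w_i||^tau, with tau chosen on a validation split (tau=0 recovers the joint classifier, tau=1 fully normalizes the norms). cRT instead re-trains the linear classifier with class-balanced sampling on top of frozen representations, and LWS keeps the weight directions fixed while learning one scale factor per class. A minimal NumPy sketch of the two training-free variants (illustrative, not the repo's tau_norm.py):

import numpy as np

def ncm_predict(feats, class_means):
    # Nearest Class Mean: cosine similarity to each class's mean feature
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    m = class_means / np.linalg.norm(class_means, axis=1, keepdims=True)
    return (f @ m.T).argmax(axis=1)

def tau_norm_predict(feats, weights, tau):
    # Rescale each classifier weight w_i by its norm raised to tau
    norms = np.linalg.norm(weights, axis=1, keepdims=True) ** tau
    return (feats @ (weights / norms).T).argmax(axis=1)

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 16))        # toy extracted features
weights = rng.normal(size=(10, 16))     # toy joint-classifier weights, 10 classes
print(tau_norm_predict(feats, weights, tau=0.9))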

Results and Models

ImageNet_LT

  • Representation learning

    Sampling                Many  Medium  Few   All   Model
    Instance-Balanced       65.9  37.5    7.7   44.4  ResNeXt50
    Class-Balanced          61.8  40.1    15.5  45.1  ResNeXt50
    Square-Root             64.3  41.2    17.0  46.8  ResNeXt50
    Progressively-Balanced  61.9  43.2    19.4  47.2  ResNeXt50

    For other models trained with instance-balanced (natural) sampling:
    [ResNet50] [ResNet101] [ResNet152] [ResNeXt101] [ResNeXt152]

  • Classifier learning

    Classifier         Many  Medium  Few   All   Model
    Joint              65.9  37.5    7.7   44.4  ResNeXt50
    NCM                56.6  45.3    28.1  47.3  ResNeXt50
    cRT                61.8  46.2    27.4  49.6  ResNeXt50
    Tau-normalization  59.1  46.9    30.7  49.4  ResNeXt50
    LWS                60.2  47.2    30.3  49.9  ResNeXt50

iNaturalist 2018

Places_LT

  • Representation learning
    We provide a pretrained ResNet152 with instance-balanced (natural) sampling: [link]
  • Classifier learning
    We provide the cRT and LWS models based on the above pretrained ResNet152 model as follows:
    [ResNet152(cRT)] [ResNet152(LWS)]

To test a pretrained model:
python main.py --cfg /path/to/config/file --model_dir /path/to/model/file --test

License

This project is licensed under the license found in the LICENSE file in the root directory of this source tree (here). Portions of the source code are from the OLTR project.
