Official PyTorch implementation of Test-Agnostic Long-Tailed Recognition by Test-Time Aggregating Diverse Experts with Self-Supervision.

Overview

Test-Agnostic Long-Tailed Recognition

This repository is the official PyTorch implementation of Test-Agnostic Long-Tailed Recognition by Test-Time Aggregating Diverse Experts with Self-Supervision.

  • TADE (our method) innovates the expert training scheme by introducing diversity-promoting expertise-guided losses, which train different experts to handle distinct class distributions. The learned experts are therefore more diverse than those of existing multi-expert methods, which leads to better ensemble performance, and together they simulate a wide spectrum of possible class distributions.
  • TADE develops a new self-supervised method, namely prediction stability maximization, which adaptively aggregates these experts using unlabeled test data, so that unknown test class distributions are handled better (see the sketch after this list).
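
A minimal, self-contained sketch of the prediction-stability idea follows. It assumes three frozen experts and two augmented views per unlabeled test batch; the toy modules, data, and hyperparameters below are illustrative stand-ins, not this repository's actual code:

import torch
import torch.nn.functional as F

# Toy stand-ins (assumptions, not this repo's modules): three frozen linear
# "experts" over random features play the role of the trained expert branches.
num_experts, num_classes, feat_dim = 3, 10, 32
experts = [torch.nn.Linear(feat_dim, num_classes).eval() for _ in range(num_experts)]
for e in experts:
    for p in e.parameters():
        p.requires_grad_(False)

# Learnable aggregation logits: one scalar weight per expert.
w = torch.nn.Parameter(torch.zeros(num_experts))
optimizer = torch.optim.SGD([w], lr=0.1)

def aggregate(expert_logits):
    # expert_logits: (num_experts, batch, num_classes)
    weights = F.softmax(w, dim=0).view(-1, 1, 1)
    return (weights * expert_logits).sum(dim=0)

for _ in range(100):
    x = torch.randn(64, feat_dim)
    # Two "augmented views" of the same unlabeled test batch.
    view1 = x + 0.1 * torch.randn_like(x)
    view2 = x + 0.1 * torch.randn_like(x)
    logits1 = torch.stack([e(view1) for e in experts])
    logits2 = torch.stack([e(view2) for e in experts])
    p1 = F.softmax(aggregate(logits1), dim=1)
    p2 = F.softmax(aggregate(logits2), dim=1)
    # Maximizing prediction stability == minimizing negative cosine similarity.
    loss = -F.cosine_similarity(p1, p2, dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Only the aggregation weights w are updated at test time; the expert networks themselves stay frozen.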

Results

ImageNet-LT (ResNeXt-50)

Long-tailed recognition with uniform test class distribution:

Methods        MACs (G)   Top-1 acc.   Model
Softmax        4.26       48.0
RIDE           6.08       56.3
TADE (ours)    6.08       58.8         Download

Test-agnostic long-tailed recognition:

Methods        MACs (G)   Forward-50   Forward-10   Uniform   Backward-10   Backward-50
Softmax        4.26       66.1         60.3         48.0      34.9          27.6
RIDE           6.08       67.6         64.0         56.3      48.7          44.0
TADE (ours)    6.08       69.4         65.4         58.8      54.5          53.1
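
Here, "Forward-k" denotes a test class distribution that follows the same head-to-tail order as the long-tailed training set with imbalance ratio k, while "Backward-k" reverses that order. A small sketch of this construction, under the assumption (following the LADE-style protocol referenced below) of an exponential per-class frequency profile; the function name is hypothetical:

import numpy as np

def test_class_distribution(num_classes, ratio, backward=False):
    # Per-class probability decays exponentially from head to tail so that
    # p(head) / p(tail) == ratio; "backward" reverses the class order.
    classes = np.arange(num_classes)
    probs = ratio ** (-classes / (num_classes - 1))
    if backward:
        probs = probs[::-1]
    return probs / probs.sum()

p = test_class_distribution(1000, ratio=50)  # e.g. Forward-50 on ImageNet-LT
print(p[0] / p[-1])                          # ~50.0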

CIFAR100-Imbalance ratio 100 (ResNet-32)

Long-tailed recognition with uniform test class distribution:

Methods        MACs (G)   Top-1 acc.
Softmax        0.07       41.4
RIDE           0.11       48.0
TADE (ours)    0.11       49.8

Test-agnostic long-tailed recognition:

Methods        MACs (G)   Forward-50   Forward-10   Uniform   Backward-10   Backward-50
Softmax        0.07       62.3         56.2         41.4      25.8          17.5
RIDE           0.11       63.0         57.0         48.0      35.4          29.3
TADE (ours)    0.11       65.9         58.3         49.8      43.9          42.4

Places-LT (ResNet-152)

Long-tailed recognition with uniform test class distribution:

Methods        MACs (G)   Top-1 acc.
Softmax        11.56      31.4
RIDE           13.18      40.3
TADE (ours)    13.18      40.9

Test-agnostic long-tailed recognition:

Methods        MACs (G)   Forward-50   Forward-10   Uniform   Backward-10   Backward-50
Softmax        11.56      45.6         40.2         31.4      23.4          19.4
RIDE           13.18      43.1         41.6         40.3      38.2          36.9
TADE (ours)    13.18      46.4         43.3         40.9      41.4          41.6

iNaturalist 2018 (ResNet-50)

Long-tailed recognition with uniform test class distribution:

Methods        MACs (G)   Top-1 acc.
Softmax        4.14       64.7
RIDE           5.80       71.8
TADE (ours)    5.80       72.9

Test-agnostic long-tailed recognition:

Methods        MACs (G)   Forward-3   Forward-2   Uniform   Backward-2   Backward-3
Softmax        4.14       65.4        65.5        64.7      64.0         63.4
RIDE           5.80       71.5        71.9        71.8      71.9         71.8
TADE (ours)    5.80       72.3        72.5        72.9      73.5         73.3

Requirements

  • To install requirements:
pip install -r requirements.txt

Hardware requirements

8 GPUs, each with >= 11 GB of GPU memory, are recommended. With less memory, models with more experts may not fit, especially on datasets with many classes (the fully-connected layers become large). CPU training is not supported, but CPU inference can be enabled with a slight modification, e.g. as sketched below.
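
A minimal sketch of that kind of modification, assuming a standard PyTorch checkpoint (the "state_dict" key follows the underlying pytorch-template convention and is an assumption):

import torch

# Remap CUDA tensors in the checkpoint to CPU memory when loading.
checkpoint = torch.load("checkpoint_path", map_location=torch.device("cpu"))
# model = ...                                  # build the model as in the config
# model.load_state_dict(checkpoint["state_dict"])
# model.eval()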

Datasets

Four benchmark datasets

  • Please download these datasets and place them in the /data directory.
  • ImageNet-LT and Places-LT can be found here.
  • The iNaturalist data should be the 2018 version, available here.
  • CIFAR-100 will be downloaded automatically with the dataloader.
data
├── ImageNet_LT
│   ├── test
│   ├── train
│   └── val
├── CIFAR100
│   └── cifar-100-python
├── Place365
│   ├── data_256
│   ├── test_256
│   └── val_256
└── iNaturalist 
    ├── test2018
    └── train_val2018

Txt files

  • We provide txt files for test-agnostic long-tailed recognition on ImageNet-LT, Places-LT and iNaturalist 2018. The files for CIFAR-100 are generated automatically by the code.
  • For iNaturalist 2018, please unzip the iNaturalist_train.zip.
data_txt
├── ImageNet_LT
│   ├── ImageNet_LT_backward2.txt
│   ├── ImageNet_LT_backward5.txt
│   ├── ImageNet_LT_backward10.txt
│   ├── ImageNet_LT_backward25.txt
│   ├── ImageNet_LT_backward50.txt
│   ├── ImageNet_LT_forward2.txt
│   ├── ImageNet_LT_forward5.txt
│   ├── ImageNet_LT_forward10.txt
│   ├── ImageNet_LT_forward25.txt
│   ├── ImageNet_LT_forward50.txt
│   ├── ImageNet_LT_test.txt
│   ├── ImageNet_LT_train.txt
│   ├── ImageNet_LT_uniform.txt
│   └── ImageNet_LT_val.txt
├── Places_LT_v2
│   ├── Places_LT_backward2.txt
│   ├── Places_LT_backward5.txt
│   ├── Places_LT_backward10.txt
│   ├── Places_LT_backward25.txt
│   ├── Places_LT_backward50.txt
│   ├── Places_LT_forward2.txt
│   ├── Places_LT_forward5.txt
│   ├── Places_LT_forward10.txt
│   ├── Places_LT_forward25.txt
│   ├── Places_LT_forward50.txt
│   ├── Places_LT_test.txt
│   ├── Places_LT_train.txt
│   ├── Places_LT_uniform.txt
│   └── Places_LT_val.txt
└── iNaturalist18
    ├── iNaturalist18_backward2.txt
    ├── iNaturalist18_backward3.txt
    ├── iNaturalist18_forward2.txt
    ├── iNaturalist18_forward3.txt
    ├── iNaturalist18_train.txt
    ├── iNaturalist18_uniform.txt
    └── iNaturalist18_val.txt 

Pretrained models

  • For training on Places-LT, we follow previous methods and use a pre-trained model.
  • Please download the checkpoint. Unzip and move the checkpoint files to /model/pretrained_model_places/.

Script

ImageNet-LT

Training

  • To train the expertise-diverse model, run this command:
python train.py -c configs/config_imagenet_lt_resnext50_tade.json

Evaluate

  • To evaluate the expertise-diverse model on the uniform test class distribution, run:
python test.py -r checkpoint_path
  • To evaluate the expertise-diverse model on agnostic test class distributions, run:
python test_all_imagenet.py -r checkpoint_path

Test-time training

  • To test-time train the expertise-diverse model for agnostic test class distributions, run:
python test_train_imagenet.py -c configs/test_time_imagenet_lt_resnext50_tade.json -r checkpoint_path

CIFAR100-LT

Training

  • To train the expertise-diverse model, run this command:
python train.py -c configs/config_cifar100_ir100_tade.json
  • The imbalance ratio can be changed from 100 to 10 or 50 by editing the config file.

Evaluate

  • To evaluate the expertise-diverse model on the uniform test class distribution, run:
python test.py -r checkpoint_path
  • To evaluate the expertise-diverse model on agnostic test class distributions, run:
python test_all_cifar.py -r checkpoint_path

Test-time training

  • To test-time train the expertise-diverse model for agnostic test class distributions, run:
python test_train_cifar.py -c configs/test_time_cifar100_ir100_tade.json -r checkpoint_path
  • The imbalance ratio can be changed from 100 to 10 or 50 by editing the config file.

Places-LT

Training

  • To train the expertise-diverse model, run this command:
python train.py -c configs/config_places_lt_resnet152_tade.json

Evaluate

  • To evaluate the expertise-diverse model on the uniform test class distribution, run:
python test_places.py -r checkpoint_path
  • To evaluate the expertise-diverse model on agnostic test class distributions, run:
python test_all_places.py -r checkpoint_path

Test-time training

  • To test-time train the expertise-diverse model for agnostic test class distributions, run:
python test_train_places.py -c configs/test_time_places_lt_resnet152_tade.json -r checkpoint_path

iNaturalist 2018

Training

  • To train the expertise-diverse model, run this command:
python train.py -c configs/config_iNaturalist_resnet50_tade.json

Evaluate

  • To evaluate the expertise-diverse model on the uniform test class distribution, run:
python test.py -r checkpoint_path
  • To evaluate the expertise-diverse model on agnostic test class distributions, run:
python test_all_inat.py -r checkpoint_path

Test-time training

  • To test-time train the expertise-diverse model for agnostic test class distributions, run:
python test_train_inat.py -c configs/test_time_iNaturalist_resnet50_tade.json -r checkpoint_path

Citation

If you find our work inspiring or use our codebase in your research, please cite our work.

@article{zhang2021test,
  title={Test-Agnostic Long-Tailed Recognition by Test-Time Aggregating Diverse Experts with Self-Supervision},
  author={Zhang, Yifan and Hooi, Bryan and Hong, Lanqing and Feng, Jiashi},
  journal={arXiv},
  year={2021}
}

Acknowledgements

This project is based on this PyTorch template.

The multi-expert framework is based on RIDE. The generation of agnostic test class distributions is adapted from LADE.
