
Self-Promoted Prototype Refinement for Few-Shot Class-Incremental Learning

This is the implementation of the paper "Self-Promoted Prototype Refinement for Few-Shot Class-Incremental Learning" (accepted to CVPR2021).

For more information, check out the paper on [arXiv].

Requirements

  • Python 3.8
  • PyTorch 1.8.1 (any version > 1.1.0 should work)
  • CUDA 11.2
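
A quick, optional way to confirm your environment matches the versions above (this check is not part of the repository):

# Optional environment check; not part of the original code.
import torch

print(torch.__version__)           # expect 1.8.1 (or any version > 1.1.0)
print(torch.version.cuda)          # expect 11.x
print(torch.cuda.is_available())   # True if a compatible GPU is visible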

Preparing Few-Shot Class-Incremental Learning Datasets

Download the following datasets:

1. CIFAR-100

Automatically downloaded via torchvision.
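
For reference, a minimal sketch of triggering the torchvision download; the root path mirrors the directory tree below, but the exact path expected by train.py is an assumption:

# Sketch: let torchvision fetch CIFAR-100 into the datasets directory.
# The root path is an assumption based on the directory tree in this README.
from torchvision.datasets import CIFAR100

CIFAR100(root="../datasets/CIFAR100", train=True, download=True)
CIFAR100(root="../datasets/CIFAR100", train=False, download=True)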

2. MiniImageNet

(1) Download the MiniImageNet train/test images [github] and prepare the dataset according to [TOPIC].

(2) Or download the processed data from our Google Drive: [mini-imagenet.zip], and place the entire folder under the datasets/ directory.

3. CUB200

(1) Download the CUB200 train/test images and prepare the dataset according to [TOPIC] (see the extraction sketch after this list):

wget http://www.vision.caltech.edu/visipedia-data/CUB-200-2011/CUB_200_2011.tgz

(2) Or download the processed data from our Google Drive: [cub.zip], and place the entire folder under the datasets/ directory.
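
If you fetched the archive with the wget command in step (1), a minimal extraction sketch is shown below; the target path is an assumption based on the directory tree that follows, and the train/test split should still be prepared following [TOPIC].

# Sketch: extract the CUB_200_2011.tgz archive downloaded above.
# The extraction target is an assumed location; arrange train/ and test/ per [TOPIC].
import tarfile

with tarfile.open("CUB_200_2011.tgz", "r:gz") as tar:
    tar.extractall(path="../datasets/cub")  # assumed location, see the tree below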

Create a directory '../datasets' for the three datasets above and place each dataset so that the directory structure looks like this:

../                                                        # parent directory
├── ./                                           # current (project) directory
│   ├── log/                              # (dir.) running log
│   ├── pre/                              # (dir.) trained models for test.
│   ├── utils/                            # (dir.) implementation of paper 
│   ├── README.md                          # instruction for reproduction
│   ├── test.sh                          # bash for testing.
│   ├── train.py                        # code for training model
│   └── train.sh                        # bash for training model
└── datasets/
    ├── CIFAR100/                      # CIFAR100 devkit
    ├── mini-imagenet/           
    │   ├── train/                         # (dir.) training images (from Google Drive)
    │   ├── test/                           # (dir.) testing images (from Google Drive)
    │   └── ..some csv files..
    └── cub/                                   # (dir.) contains 200 object classes
        ├── train/                             # (dir.) training images (from Google Drive)
        └── test/                               # (dir.) testing images (from Google Drive)
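
Before training, a small sanity check like the one below (not part of the repository) can confirm that each dataset sits where the tree above expects it:

# Optional sanity check for the dataset layout described above; not part of the original code.
from pathlib import Path

expected = [
    "../datasets/CIFAR100",
    "../datasets/mini-imagenet/train",
    "../datasets/mini-imagenet/test",
    "../datasets/cub/train",
    "../datasets/cub/test",
]
for path in expected:
    print(path, "OK" if Path(path).is_dir() else "MISSING")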

Training

Choose the appropriate lines in the train.sh file.

sh train.sh

  • '--base_epochs' can be modified to control the initial accuracy ('Our' vs. 'Our*' in the paper).
  • Training takes approximately several hours to converge (trained on a single 2080 Ti or 3090 GPU).

Testing

1. Download pretrained models to the 'pre' folder.

Pretrained models are available on our [Google Drive].

2. Test

Choose the appropriate lines in the test.sh file.

sh test.sh 

Main Results

The results below were obtained with test.sh on the three datasets; each column reports accuracy (%) after the corresponding incremental session.

1. CIFAR-100

Model    1       2       3       4       5       6       7       8       9
iCaRL    64.10   53.28   41.69   34.13   27.93   25.06   20.41   15.48   13.73
TOPIC    64.10   56.03   47.89   42.99   38.02   34.60   31.67   28.35   25.86
Ours     63.97   65.86   61.31   57.60   53.39   50.93   48.27   45.36   43.32

2. MiniImageNet

Model    1       2       3       4       5       6       7       8       9
iCaRL    61.31   46.32   42.94   37.63   30.49   24.00   20.89   18.80   17.21
TOPIC    61.31   45.58   43.77   37.19   32.38   29.67   26.44   25.18   21.80
Ours     61.45   63.80   59.53   55.53   52.50   49.60   46.69   43.79   41.92

3. CUB200

Model    1       2       3       4       5       6       7       8       9       10      11
iCaRL    68.68   52.65   48.61   44.16   36.62   29.52   27.83   26.26   24.01   23.89   21.16
TOPIC    68.68   61.01   55.35   50.01   42.42   39.07   35.47   32.87   30.04   25.91   24.85
Ours     68.05   62.01   57.61   53.67   50.77   46.76   45.43   44.53   41.74   39.93   38.45

The presented results differ slightly from those in the paper, which report averages over multiple runs.

BibTeX

If you use this code for your research, please consider citing:

@inproceedings{zhu2021self,
  title={Self-Promoted Prototype Refinement for Few-Shot Class-Incremental Learning},
  author={Zhu, Kai and Cao, Yang and Zhai, Wei and Cheng, Jie and Zha, Zheng-Jun},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={6801--6810},
  year={2021}
}