LightningFSL: Pytorch-Lightning implementations of Few-Shot Learning models.

Overview

LightningFSL: Few-Shot Learning with Pytorch-Lightning


This repo provides a number of pytorch-lightning implementations of FSL algorithms, including the official implementations of two papers:

Boosting Few-Shot Classification with View-Learnable Contrastive Learning (ICME 2021)

Rectifying the Shortcut Learning of Background for Few-Shot Learning (NeurIPS 2021)

Contents

  1. Advantages
  2. Few-shot classification Results
  3. General Guide

Advantages:

This repository is built on top of LightningCLI, which is very convenient to use once you are familiar with the tool.

  1. Enabling multi-GPU training
    • Our FSL framework allows DistributedDataParallel (DDP) to be used when training few-shot learning models, which, to the best of our knowledge, was not available before. Previous implementations use DataParallel (DP) instead, which is less efficient and requires more GPU memory. We achieve this by modifying the DDP sampler of PyTorch so that few-shot tasks can be sampled across devices. See dataset_and_process/samplers.py for details, and the sketch after this list for the general idea.
  2. High reimplemented accuracy
    • Our reimplementations of some FSL algorithms achieve strong performance. For example, our ResNet12 implementations of ProtoNet and the Cosine Classifier reach 76%+ and 80%+ accuracy, respectively, on the 5-way 5-shot miniImageNet task. All results can be reproduced using the pre-defined configuration files in config/.
  3. Quick and convenient creation of new algorithms
    • Pytorch-lightning gives our codebase a clean and modular structure. Built on top of LightningCLI, the codebase unifies the necessary basic components of FSL, making it easy to implement a brand-new algorithm. An implementation of an algorithm usually only requires three short additional files: one specifying the LightningModule, one specifying the classifier head, and the last one specifying all configurations. For example, see the code of ProtoNet (modules/PN.py, architectures/classifier/proto_head.py) and the cosine classifier (modules/cosine_classifier.py, architectures/classifier/CC_head.py).
  4. Easy reproducibility
    • Every run saves a full copy of the yaml configuration file to the logging directory, enabling exact reproduction of that run (by reusing the saved yaml file directly instead of creating a new one).
  5. Enabling both episodic/non-episodic algorithms
    • Switch between the two with the single parameter is_meta in the configuration file.
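
The core idea behind the DDP-aware episodic sampler mentioned in item 1 can be pictured as follows. This is a minimal sketch under assumed class and argument names, not the actual code in dataset_and_process/samplers.py: every rank draws from the same seeded episode stream and keeps only every world_size-th episode, so each GPU trains on disjoint few-shot tasks.

```python
# Minimal sketch of a DDP-aware episodic sampler (illustrative, not the
# repository's samplers.py). All ranks generate the same seeded episode
# stream, and each rank keeps only every world_size-th episode.
import random
from collections import defaultdict
from torch.utils.data import Sampler

class DistributedEpisodicSampler(Sampler):
    def __init__(self, labels, n_way, n_shot, n_query,
                 episodes_per_epoch, rank, world_size, seed=0):
        self.class_to_indices = defaultdict(list)
        for idx, label in enumerate(labels):
            self.class_to_indices[label].append(idx)
        self.classes = list(self.class_to_indices)
        self.n_way, self.k = n_way, n_shot + n_query
        self.episodes_per_epoch = episodes_per_epoch
        self.rank, self.world_size, self.seed = rank, world_size, seed
        self.epoch = 0

    def set_epoch(self, epoch):
        # must be called once per epoch so every epoch uses fresh episodes
        self.epoch = epoch

    def __len__(self):
        return self.episodes_per_epoch // self.world_size

    def __iter__(self):
        rng = random.Random(self.seed + self.epoch)  # identical stream on all ranks
        for episode_idx in range(self.episodes_per_epoch):
            episode_classes = rng.sample(self.classes, self.n_way)
            batch = [i for c in episode_classes
                     for i in rng.sample(self.class_to_indices[c], self.k)]
            if episode_idx % self.world_size == self.rank:  # shard episodes by rank
                yield batch
```

Passed as batch_sampler to a DataLoader, each yielded index list forms one N-way (shot+query) episode on one GPU; set_epoch has to be called at the start of every epoch (as DDP does for its built-in sampler) so that different epochs draw different episodes.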

Implemented Few-shot classification Results

Results on few-shot classification datasets. We report the average over 2,000 randomly sampled episodes, repeated 5 times, for 1-shot/5-shot evaluation, with 95% confidence intervals.
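
For reference, the reported 95% confidence intervals of the mean episode accuracy follow the standard normal approximation, as in this generic sketch (not code from this repository):

```python
# Generic sketch: mean accuracy and 95% confidence interval over episodes.
import numpy as np

def mean_and_ci95(episode_accuracies):
    acc = np.asarray(episode_accuracies, dtype=np.float64)
    mean = acc.mean()
    # 1.96 * standard error of the mean (normal approximation)
    ci95 = 1.96 * acc.std(ddof=1) / np.sqrt(len(acc))
    return mean, ci95
```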

miniImageNet Dataset

| Models | Backbone | 5-way 1-shot | 5-way 5-shot | Pretrained models |
| --- | --- | --- | --- | --- |
| Prototypical Networks | ResNet12 | 61.19 ± 0.40 | 76.50 ± 0.45 | link |
| Cosine Classifier | ResNet12 | 63.89 ± 0.44 | 80.94 ± 0.05 | link |
| Meta-Baseline | ResNet12 | 62.65 ± 0.65 | 79.10 ± 0.29 | link |
| S2M2 | WRN-28-10 | 58.85 ± 0.20 | 81.83 ± 0.15 | link |
| S2M2+Logistic_Regression | WRN-28-10 | 62.36 ± 0.42 | 82.01 ± 0.24 | |
| MoCo-v2 (unsupervised) | ResNet12 | 52.03 ± 0.33 | 72.94 ± 0.29 | link |
| Exemplar-v2 | ResNet12 | 59.02 ± 0.24 | 77.23 ± 0.16 | link |
| PN+CL | ResNet12 | 63.44 ± 0.44 | 79.42 ± 0.06 | link |
| COSOC | ResNet12 | 69.28 ± 0.49 | 85.16 ± 0.42 | link |

General Guide

To understand the code correctly, it is highly recommended to first quickly go through the pytorch-lightning documentation, especially the LightningCLI part. It won't be a long journey, since pytorch-lightning is built on top of PyTorch.

Installation

Just run the command:

pip install -r requirements.txt

Running an implemented few-shot model

  1. Downloading Datasets:
  2. Training (Except for Meta-baseline and COSOC):
    • Choose the corresponding configuration file in config/ (e.g., set_config_PN.py for the PN model); inside it, set the parameter is_test to False, and set the GPU ids (single- or multi-GPU), dataset directory, logging directory, and any other parameters you would like to change.
    • Modify the first line in run.sh (e.g., python config/set_config_PN.py).
    • To begin training, run the command bash run.sh.
  3. Training Meta-baseline:
    • This is a two-stage algorithm: the first stage is cross-entropy pre-training, followed by ProtoNet fine-tuning, so two training runs are needed. The first run uses the configuration file config/set_config_meta_baseline_pretrain.py. The second uses config/set_config_meta_baseline_finetune.py, with the pre-trained model path from the first stage specified by the parameter pre_trained_path in the configuration file.
  4. Training COSOC:
    • For pre-training the Exemplar model, choose the configuration file config/set_config_MoCo.py and set the parameter is_exampler to True.
    • For running the COS algorithm, run the command python COS.py --save_dir [save_dir] --pretrained_Exemplar_path [model_path] --dataset_path [data_path]. [save_dir] specifies the directory where all foreground objects are saved, while [model_path] and [data_path] specify the paths of the pre-trained model and the dataset, respectively.
    • For running an FSL algorithm with COS, choose the configuration file config/set_config_COSOC.py, set the parameter data["train_dataset_params"] to the directory of the data saved by the COS algorithm, and set pre_trained_path to the path of the pre-trained Exemplar model.
  5. Testing:
    • Choose the same configuration file as for training, set the parameter is_test to True, set pre_trained_path to the path of the checkpoint (with suffix '.ckpt'), and set other parameters (e.g., shot, batch size) as you desire.
    • Modify the first line in run.sh (e.g., python config/set_config_PN.py).
    • To begin testing, run the command bash run.sh.
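
To make the set_config/run.sh workflow concrete, the sketch below shows roughly what a set_config_*.py script does: it assembles the run settings and writes them to the yaml file that run.sh then feeds to run.py. The field names here are partly hypothetical; the actual files in config/ are the authoritative reference.

```python
# Hypothetical sketch of a config/set_config_*.py script: collect the run
# settings in a dict and dump them to the yaml file consumed by run.py.
# Field names are illustrative, not the repository's exact schema.
import yaml

config = {
    "is_test": False,          # False for training, True for testing
    "is_meta": True,           # episodic (True) vs. non-episodic (False) training
    "pre_trained_path": None,  # set to a '.ckpt' path for testing/fine-tuning
    "trainer": {
        "gpus": [0, 1],        # more than one GPU id enables DDP training
        "max_epochs": 60,
    },
    "data": {
        "train_dataset_params": {"root": "/path/to/miniImageNet"},
        "way": 5,
        "shot": 5,
    },
}

with open("config/config.yaml", "w") as f:
    yaml.dump(config, f)
```

run.sh would then first execute this script and afterwards launch run.py with the generated yaml file.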

Creating a new few-shot algorithm

It is quite simple to implement your own algorithm. Most algorithms only require a new LightningModule and a classifier head. We give a brief description of the code structure here.

run.py

It is usually not needed to modify this file. run.py wraps the whole training and testing procedure of an FSL algorithm, for which all configurations are specified by an individual yaml file in the config/ folder; see config/set_config_PN.py for an example of how such a file is generated. run.py contains a Python class Few_Shot_CLI, inherited from LightningCLI, which adds new hyperparameters (also specified in the configuration file) as well as the testing process for FSL.
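
The general pattern such a CLI subclass follows looks roughly like the sketch below. The class name, extra arguments, and control flow are hypothetical (this is not the actual run.py), and the exact import path and constructor arguments depend on the pytorch-lightning version.

```python
# Minimal sketch of extending LightningCLI with extra FSL options and an
# explicit test pass; hypothetical, not the repository's Few_Shot_CLI.
import pytorch_lightning as pl
from pytorch_lightning.utilities.cli import LightningCLI  # pytorch_lightning.cli in newer releases

class FewShotCLI(LightningCLI):
    def add_arguments_to_parser(self, parser):
        # extra hyperparameters, saved into the yaml configuration as well
        parser.add_argument("--is_test", type=bool, default=False)
        parser.add_argument("--num_test_episodes", type=int, default=2000)

cli = FewShotCLI(pl.LightningModule, pl.LightningDataModule,
                 subclass_mode_model=True, subclass_mode_data=True,
                 run=False)                      # parse config and instantiate only
if cli.config["is_test"]:
    cli.trainer.test(cli.model, datamodule=cli.datamodule)
else:
    cli.trainer.fit(cli.model, datamodule=cli.datamodule)
```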

FewShotModule

Needs modification. The folder modules contains the LightningModules for FSL models, specifying model components, optimizers, logging metrics, and the train/val/test processes. Notably, modules/base_module.py contains the template module for all FSL models. All other modules inherit from the base module; see modules/PN.py and modules/cosine_classifier.py for how episodic and non-episodic models inherit from it.
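
The inheritance pattern can be pictured roughly as below (illustrative names only, not the actual base_module.py): the base module owns the feature extractor, optimizer, and logging, and a concrete algorithm mainly swaps in its own classifier head.

```python
# Sketch of the base-module pattern: the LightningModule owns the feature
# extractor, optimizer, and logging, while subclasses decide how a batch is
# turned into logits. Names are illustrative, not the repository's.
import torch
import torch.nn.functional as F
import pytorch_lightning as pl

class BaseFewShotModule(pl.LightningModule):
    def __init__(self, backbone, classifier, lr=0.1):
        super().__init__()
        self.backbone = backbone      # e.g. a ResNet12 feature extractor
        self.classifier = classifier  # algorithm-specific classification head
        self.lr = lr

    def training_step(self, batch, batch_idx):
        images, labels = batch
        logits = self.classifier(self.backbone(images))
        loss = F.cross_entropy(logits, labels)
        self.log("train/loss", loss, prog_bar=True)
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=self.lr, momentum=0.9)
```

An episodic subclass such as ProtoNet would additionally split the extracted features into support and query sets before handing them to its head, while a non-episodic module such as the cosine classifier can use a step very much like the one above.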

architectures

Needs modification. We divide general FSL architectures into a feature extractor and a classification head, specified respectively in architectures/feature_extractor and architectures/classifier. These are just ordinary nn.Module classes in PyTorch, which are embedded in the LightningModules mentioned above. The recommended feature extractor is ResNet12, which is popular and shows promising performance. The classification head, however, varies with the algorithm and needs a specific design.
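
As a concrete example of such a head, a prototype-based classifier in the spirit of ProtoNet can be written as below (a sketch, not the repository's proto_head.py).

```python
# Sketch of a ProtoNet-style classification head: class prototypes are the
# mean support embedding per class, and query logits are negative squared
# Euclidean distances to the prototypes.
import torch
import torch.nn as nn

class PrototypeHead(nn.Module):
    def forward(self, support_feat, support_labels, query_feat, n_way):
        # support_feat: [n_way * n_shot, dim], query_feat: [n_query, dim]
        prototypes = torch.stack([
            support_feat[support_labels == c].mean(dim=0) for c in range(n_way)
        ])                                               # [n_way, dim]
        distances = torch.cdist(query_feat, prototypes)  # [n_query, n_way]
        return -distances.pow(2)                         # logits for cross-entropy
```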

Datasets and DataModule

Usually no modification is needed. Pytorch-lightning unifies data processing across training, validation, and testing into a single LightningDataModule. We design such a datamodule in dataset_and_process/datamodules/few_shot_datamodule.py for FSL, enabling episodic/non-episodic sampling and DDP for fast multi-GPU training. The datasets themselves are defined in dataset_and_process/datasets as ordinary PyTorch Dataset classes. There is no need to modify the dataset module unless new datasets are involved.
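
Conceptually, the datamodule just combines an ordinary dataset with an episodic batch sampler, roughly as sketched below (hypothetical names, not the actual few_shot_datamodule.py).

```python
# Sketch of a few-shot LightningDataModule: an ordinary dataset is combined
# with an episodic batch sampler so that every batch is one N-way K-shot task.
import pytorch_lightning as pl
from torch.utils.data import DataLoader

class FewShotDataModule(pl.LightningDataModule):
    def __init__(self, train_dataset, episodic_sampler, num_workers=8):
        super().__init__()
        self.train_dataset = train_dataset
        self.episodic_sampler = episodic_sampler
        self.num_workers = num_workers

    def train_dataloader(self):
        # batch_sampler yields index lists, one few-shot episode per batch
        return DataLoader(self.train_dataset,
                          batch_sampler=self.episodic_sampler,
                          num_workers=self.num_workers)
```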

Callbacks and Plugins

Usually no modification is needed. See the pytorch-lightning documentation for detailed introductions to callbacks and plugins. They are additional functionalities added to the system in a modular fashion.

Configuration

Needs modification. See LightningCLI for how a yaml configuration file works. Each algorithm needs one specific configuration file, though most of the configuration is the same across algorithms. It is therefore convenient to copy an existing configuration and change it for a new algorithm.

Owner

Xu Luo, M.S. student of SMILE Lab, UESTC