Generative vs Discriminative: Rethinking The Meta-Continual Learning (NeurIPS 2021)


In this repository we provide the PyTorch implementation of GeMCL, a generative approach to meta-continual learning. The directory outline is as follows:

root
 ├── code                 # The folder containing all pytorch implementations
       ├── datasets           # The path containing Dataset classes and train/test parameters for each dataset
            ├── omniglot
                  ├── TrainParams.py  # omniglot training parameters configuration
                  ├── TestParams.py   # omniglot testing parameters configuration

            ├── mini-imagenet
                  ├── TrainParams.py  # mini-imagenet training parameters configuration
                  ├── TestParams.py   # mini-imagenet testing parameters configuration
            ├── cifar
                  ├── TrainParams.py  # cifar 100 training parameters configuration
                  ├── TestParams.py   # cifar 100 testing parameters configuration

       ├── model              # The path containing proposed models
       ├── train.py           # The main script for training
       ├── test.py            # The main script for testing
       ├── pretrain.py        # The main script for pre-training

 ├── datasets             # The location in which datasets are placed
       ├── omniglot
       ├── miniimagenet
       ├── cifar

 ├── experiments          # The location in which accomplished experiments are stored
       ├── omniglot
       ├── miniimagenet
       ├── cifar

In the following sections we first explain how to set up the datasets. Then we provide instructions for installing the package dependencies, training, and testing.

Configuring the Dataset

In this paper we use the Omniglot, CIFAR-100 and Mini-ImageNet datasets. Omniglot and CIFAR-100 are lightweight datasets and are downloaded automatically into datasets/omniglot/ or datasets/cifar/ whenever needed. However, the Mini-ImageNet dataset must be downloaded manually and placed in datasets/miniimagenet/. The following instructions show how to set it up properly:

  • First download the images from this link (provided by the owners) and the train.csv, val.csv, test.csv splits from this link.

  • Extract and place the downloaded files directly under datasets/miniimagenet/. (We expect train.csv, val.csv, test.csv and an images folder under this path.)

Reading the images directly from disk every time the dataset is needed is extremely slow. To avoid this, we use a preprocessing step in which the images are first shrunk to 100 pixels along the smaller dimension (without changing the aspect ratio) and then converted to the numpy .npy format. The code for this preprocessing is provided in the code directory and should be executed as follows:

cd code
python genrate_img.py ../datasets/miniimagenet ../datasets/miniimagenet

Wait until the success messages for the train, validation and test splits appear; the dataset is then ready to use.
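
For reference, the preprocessing described above boils down to something like the following sketch (a minimal illustration assuming a filename column in the split CSVs; the script shipped in the code directory is the authoritative version):

import csv
import os

import cv2
import numpy as np

def preprocess_split(src_dir, dst_dir, split_csv, target=100):
    # Read the split file; we assume each row lists a filename and a label.
    with open(os.path.join(src_dir, split_csv), newline="") as f:
        rows = list(csv.DictReader(f))
    images = []
    for row in rows:
        img = cv2.imread(os.path.join(src_dir, "images", row["filename"]))
        h, w = img.shape[:2]
        scale = target / min(h, w)  # shrink so the smaller side becomes `target` pixels
        img = cv2.resize(img, (int(round(w * scale)), int(round(h * scale))))
        images.append(img)
    # Store the whole split as one object array in .npy format.
    arr = np.empty(len(images), dtype=object)
    arr[:] = images
    np.save(os.path.join(dst_dir, os.path.splitext(split_csv)[0] + ".npy"), arr)
    print(f"{split_csv}: done")

for split in ("train.csv", "val.csv", "test.csv"):
    preprocess_split("../datasets/miniimagenet", "../datasets/miniimagenet", split)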

Installing Prerequisites

The following packages are required:

  • opencv-python==4.5.1
  • torch==1.7.1+cu101
  • tensorboard==2.4.1
  • pynvml==8.0.4
  • matplotlib==3.3.2
  • tqdm==4.55.1
  • scipy==1.6.0
  • torchvision==0.8.2+cu101

Training and Testing

The first step for training or testing is to configure the desired parameters. We have separated the training/testing parameters for each dataset and placed them under code/datasets/omniglot and code/datasets/miniimagenet. For example, to change the number of meta-training episodes on the omniglot dataset, proceed as follows:

  • Open code/datasets/omniglot/TrainParams.py

  • Find the line self.meta_train_steps and change its value.

Setting the training model is done in the same way, by changing the self.modelClass value. We provide the following models in the code/model/ path:

File path                 Model name in the paper
code/model/Bayesian.py    GeMCL predictive
code/model/MAP.py         GeMCL MAP
code/model/LR.py          MTLR
code/model/PGLR.py        PGLR
code/model/ProtoNet.py    Prototypical
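
As a rough illustration only (the attribute names self.meta_train_steps and self.modelClass come from this README, while the surrounding class structure and values are assumptions), editing the parameter file amounts to something like:

class Bayesian:                      # stand-in for the class defined in code/model/Bayesian.py
    pass

class TrainParams:
    def __init__(self):
        self.experiment_name = "omniglot_bayesian_run1"  # log-folder name (a date prefix is added)
        self.meta_train_steps = 30000                    # number of meta-training episodes
        self.modelClass = Bayesian                       # pick any model from the table above

params = TrainParams()
params.meta_train_steps = 60000      # e.g. double the number of meta-training episodes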

Training Instructions

To perform training, first configure the training parameters in code/datasets/omniglot/TrainParams.py or code/datasets/miniimagenet/TrainParams.py for the omniglot and mini-imagenet datasets respectively. In these files, the self.experiment_name variable, together with a date prefix, determines the folder name in which the training logs are stored.
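
As a sketch of the resulting layout (only the fact that a date prefix is prepended to self.experiment_name comes from this README; the exact date format is an assumption):

from datetime import datetime

experiment_name = "omniglot_bayesian_run1"   # value of self.experiment_name
log_dir = "../experiments/omniglot/{:%Y-%m-%d}_{}".format(datetime.now(), experiment_name)
print(log_dir)   # e.g. ../experiments/omniglot/2021-11-05_omniglot_bayesian_run1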

Now, to start training, run the following command for omniglot (throughout the code, the O and M flags select the omniglot and mini-imagenet datasets respectively):

cd code
python train.py O

and the following for mini-imagenet:

cd code
python train.py M

The training logs and checkpoints are stored in a folder under experiments/omniglot/ or experiments/miniimagenet/ with the name specified in self.experiment_name. We have also included several trained models with the same settings as reported in the paper. The paths and details of these models are as follows:

Model path                                              Details
experiments/miniimagenet/imagenet_bayesian_final        GeMCL predictive trained on mini-imagenet
experiments/miniimagenet/imagenet_map_final             GeMCL MAP trained on mini-imagenet
experiments/miniimagenet/imagenet_PGLR_final            PGLR trained on mini-imagenet
experiments/miniimagenet/imagenet_MTLR_final            MTLR trained on mini-imagenet
experiments/miniimagenet/imagenet_protonet_final        Prototypical trained on mini-imagenet
experiments/miniimagenet/imagenet_pretrain_final        pretrained model on mini-imagenet
experiments/miniimagenet/imagenet_Bayesian_OMLBackbone  GeMCL predictive trained on mini-imagenet with OML backbone
experiments/miniimagenet/imagenet_random                random model compatible with mini-imagenet, not previously trained
experiments/omniglot/omniglot_Bayesian_final            GeMCL predictive trained on omniglot
experiments/omniglot/omniglot_MAP_final                 GeMCL MAP trained on omniglot
experiments/omniglot/omniglot_PGLR_final                PGLR trained on omniglot
experiments/omniglot/omniglot_MTLR_final                MTLR trained on omniglot
experiments/omniglot/omniglot_Protonet_final            Prototypical trained on omniglot
experiments/omniglot/omniglot_Pretrain_final            pretrained model on omniglot
experiments/omniglot/Omniglot_Bayesian_OMLBackbone      GeMCL predictive trained on omniglot with OML backbone
experiments/omniglot/omniglot_random                    random model compatible with omniglot, not previously trained
experiments/omniglot/omniglot_bayesian_28               GeMCL predictive trained on omniglot with 28x28 input

Testing Instructions

To evaluate a previously trained model, use test.py and pass the path in which the model is stored. As an example, consider the following structure for the omniglot experiments.

root
 ├── experiments
       ├── omniglot
            ├── omniglot_Bayesian_final

Now to test this model run:

cd code
python test.py O ../experiments/omniglot/omniglot_Bayesian_final/

At the end of testing, the mean accuracy and standard deviation across test episodes are printed.
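
These numbers are simply the mean and standard deviation of the per-episode accuracies, for example (a minimal sketch with made-up values, not the repo's actual reporting code):

import numpy as np

episode_acc = np.array([0.92, 0.94, 0.91, 0.95, 0.93])   # hypothetical per-episode accuracies
print("accuracy: {:.4f} +/- {:.4f}".format(episode_acc.mean(), episode_acc.std()))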

Note: Both test.py and train.py use TrainParams.py to configure the model class. Therefore, before executing test.py, make sure that TrainParams.py is configured correctly.

Pre-training Instructions

To perform pre-training, run:

cd code
python pretrain.py O

The pre-training configurations are also available in TrainParams.py.
