PyTorch implementation of the REMIND method from our ECCV-2020 paper "REMIND Your Neural Network to Prevent Catastrophic Forgetting"


REMIND Your Neural Network to Prevent Catastrophic Forgetting

This is a PyTorch implementation of the REMIND algorithm from our ECCV-2020 paper. An arXiv pre-print of our paper is available.

REMIND (REplay using Memory INDexing) is a novel brain-inspired streaming learning model that uses tensor quantization to efficiently store hidden representations (e.g., CNN feature maps) for later replay. REMIND implements this compression using Product Quantization (PQ) and outperforms existing models on the ImageNet and CORe50 classification datasets. Further, we demonstrate REMIND's robustness by pioneering streaming Visual Question Answering (VQA), in which an agent must answer questions about images.

Formally, REMIND takes an input image and passes it through the frozen layers of the network to obtain tensor representations (feature maps). It then quantizes these tensors via PQ and stores the resulting indices in memory for replay. During training, a decoder reconstructs a subset of previously stored tensors from their indices, and these reconstructions are mixed with the current input to train the plastic layers of the network before inference. We restrict the size of REMIND's replay buffer and use a uniform random storage policy.
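
To make the storage mechanism concrete, here is a minimal sketch of PQ encoding and decoding with FAISS; the feature dimension, number of sub-quantizers, and bits per code are illustrative placeholders, not the paper's exact settings:

    import numpy as np
    import faiss

    d, M, nbits = 512, 32, 8  # feature dim, sub-quantizers, bits per code (illustrative)
    feats = np.random.randn(10000, d).astype('float32')  # stand-in for CNN feature vectors

    pq = faiss.ProductQuantizer(d, M, nbits)
    pq.train(feats)                    # learn codebooks (REMIND trains these on base-init data)

    codes = pq.compute_codes(feats)    # (10000, M) uint8 indices -- these go in the replay buffer
    recon = pq.decode(codes)           # approximate reconstructions used for replay
    print(feats.nbytes, codes.nbytes)  # per vector: d*4 bytes raw vs. M bytes stored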

(Figure: overview of the REMIND architecture.)

Dependencies

⚠️ ⚠️ For unknown reasons, our code does not reproduce results with PyTorch versions newer than 1.3.1. Please follow the instructions below to ensure reproducibility.

We have tested the code with the following packages and versions:

  • Python 3.7.6
  • PyTorch (GPU) 1.3.1
  • torchvision 0.4.2
  • NumPy 1.18.5
  • FAISS (CPU) 1.5.2
  • CUDA 10.1 (also works with CUDA 10.0)
  • Scikit-Learn 0.23.1
  • Scipy 1.1.0
  • NVIDIA GPU

We recommend setting up a conda environment with these same package versions:

conda create -n remind_proj python=3.7
conda activate remind_proj
conda install numpy=1.18.5
conda install pytorch=1.3.1 torchvision=0.4.2 cudatoolkit=10.1 -c pytorch
conda install faiss-cpu=1.5.2 -c pytorch
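
A quick check that the environment matches the tested versions (worth doing given the reproducibility warning above):

    import torch, torchvision, numpy
    import faiss  # the import alone confirms the CPU build loads

    print(torch.__version__, torchvision.__version__, numpy.__version__)  # expect 1.3.1 0.4.2 1.18.5
    print(torch.cuda.is_available())  # True on a working CUDA 10.0/10.1 setup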

Setup ImageNet-2012

The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset has 1000 categories and 1.2 million images. The images do not need to be preprocessed or packaged in any database, but the validation images need to be moved into class subfolders (see step 3 below).

  1. Download the images from http://image-net.org/download-images

  2. Extract the training data:

    mkdir train && mv ILSVRC2012_img_train.tar train/ && cd train
    tar -xvf ILSVRC2012_img_train.tar && rm -f ILSVRC2012_img_train.tar
    find . -name "*.tar" | while read NAME ; do mkdir -p "${NAME%.tar}"; tar -xvf "${NAME}" -C "${NAME%.tar}"; rm -f "${NAME}"; done
    cd ..
  3. Extract the validation data and move images to subfolders:

    mkdir val && mv ILSVRC2012_img_val.tar val/ && cd val && tar -xvf ILSVRC2012_img_val.tar
    wget -qO- https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh | bash
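
As a sanity check, assuming the commands above were run inside your ImageNet root, the resulting layout loads directly with torchvision's ImageFolder:

    from torchvision import datasets

    train_set = datasets.ImageFolder('train')  # adjust paths to your ImageNet root
    val_set = datasets.ImageFolder('val')
    print(len(train_set.classes), len(train_set), len(val_set))  # expect 1000, 1281167, 50000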


Training REMIND on ImageNet (Classification)

We have provided the necessary files to train REMIND on the exact same ImageNet ordering used in our paper (provided in imagenet_class_order.txt). We also provide steps for running REMIND on an alternative ordering.

To train REMIND on the ImageNet ordering from our paper, follow the steps below:

  1. Run run_imagenet_experiment.sh to train REMIND on the ordering from our paper. Note: this uses our ordering and the associated files provided in imagenet_files.

To train REMIND on a different ImageNet ordering, follow the steps below:

  1. Generate a text file containing one class name per line in the desired order (a helper sketch follows this list).
  2. Run make_numpy_imagenet_label_files.py to generate the necessary numpy files for the desired ordering using the text file from step 1.
  3. Run train_base_init_network.sh to train an offline model using the desired ordering and label files generated in step 2 on the base init data.
  4. Run run_imagenet_experiment.sh using the label files from step 2 and the ckpt file from step 3 to train REMIND on the desired ordering.
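
A minimal sketch for step 1, assuming the train/ directory layout from the ImageNet setup above; my_class_order.txt is a hypothetical output name:

    import os, random

    train_dir = '/path/to/imagenet/train'    # one WNID subfolder per class
    classes = sorted(os.listdir(train_dir))  # 1000 class names
    random.seed(0)
    random.shuffle(classes)                  # or substitute any other desired ordering

    with open('my_class_order.txt', 'w') as f:
        f.write('\n'.join(classes) + '\n')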

Files generated from the streaming experiment:

  • *.json files containing incremental top-1 and top-5 accuracies
  • *.pth files containing incremental model predictions/probabilities
  • *.pth files containing incremental REMIND classifier (F) weights
  • *.pkl files containing PQ centroids and incremental buffer data (e.g., latent codes)
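
The exact schemas of these files are not documented here, but they can be inspected generically; the filenames below are hypothetical:

    import json, torch

    with open('accuracies.json') as f:  # hypothetical *.json results file
        results = json.load(f)
    print(type(results))

    ckpt = torch.load('classifier_F.pth', map_location='cpu')  # hypothetical *.pth file
    print(type(ckpt))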

To continue training REMIND from a previous ckpt:

We save incremental weights and associated data for REMIND after each evaluation cycle, so training can be resumed from these saved files (e.g., after a machine crash). To resume in run_imagenet_experiment.sh:

  1. Set the --resume_full_path argument to the path where the previous REMIND model was saved.
  2. Set the --streaming_min_class argument to the class REMIND left off on.
  3. Run run_imagenet_experiment.sh

Training REMIND on VQA Datasets

We use the gensen library for question features. Execute the following steps to set it up:

cd ${GENSENPATH} 
git clone git@github.com:erobic/gensen.git
cd ${GENSENPATH}/data/embedding
chmod +x glove2h5.sh && ./glove2h5.sh
cd ${GENSENPATH}/data/models
chmod +x download_models.sh && ./download_models.sh

Training REMIND on CLEVR

Note: For convenience, we pre-extract all the features including the PQ encoded features. This requires 140 GB of free space, assuming images are deleted after feature extraction.

  1. Download and extract CLEVR images+annotations:

    wget https://dl.fbaipublicfiles.com/clevr/CLEVR_v1.0.zip
    unzip CLEVR_v1.0.zip
  2. Extract question features

    • Set up gensen and download the GloVe embeddings as described under Training REMIND on VQA Datasets above.
    
    • Edit vqa_experiments/clevr/extract_question_features_clevr.py, setting the DATA_PATH variable to the CLEVR dataset directory and GENSEN_PATH to the gensen repository, then extract the features: python vqa_experiments/clevr/extract_question_features_clevr.py

    • Pre-process the CLEVR questions: edit the PATH variable in vqa_experiments/clevr/preprocess_clevr.py to point to the directory where CLEVR was extracted, then run the script.

  3. Extract image features, train PQ encoder and extract encoded features

    • Extract image features: python -u vqa_experiments/clevr/extract_image_features_clevr.py --path /path/to/CLEVR
    • In vqa_experiments/clevr/pq_encoding_clevr.py, set the values of PATH and streaming_type (either 'iid' or 'qtype')
    • Train PQ encoder and extract features: python vqa_experiments/clevr/pq_encoding_clevr.py
  4. Train REMIND

    • Edit data_path in vqa_experiments/configs/config_CLEVR_streaming.py
    • Run ./vqa_experiments/run_clevr_experiment.sh (Set DATA_ORDER to either qtype or iid to define the data order)

Training REMIND on TDIUC

Note: For convenience, we pre-extract all the features including the PQ encoded features. This requires around 170 GB of free space, assuming images are deleted after feature extraction.

  1. Download TDIUC

    cd ${TDIUC_PATH}
    wget https://kushalkafle.com/data/TDIUC.zip && unzip TDIUC.zip
    cd TDIUC && python setup.py --download Y  # the setup script uses Python 2 print statements; you may need to change print '' to print('')
    
  2. Extract question features

    • Edit vqa_experiments/tdiuc/extract_question_features_tdiuc.py, setting the DATA_PATH variable to the TDIUC dataset directory and GENSEN_PATH to the gensen repository, then extract the features: python vqa_experiments/tdiuc/extract_question_features_tdiuc.py

    • Pre-process the TDIUC questions: edit the PATH variable in vqa_experiments/tdiuc/preprocess_tdiuc.py to point to the directory where TDIUC was extracted, then run the script.

  3. Extract image features, train PQ encoder and extract encoded features

    • Extract image features: python -u vqa_experiments/tdiuc/extract_image_features_tdiuc.py --path /path/to/TDIUC
    • In vqa_experiments/tdiuc/pq_encoding_tdiuc.py, set the values of PATH and streaming_type (either 'iid' or 'qtype')
    • Train the PQ encoder and extract the encoded features: python vqa_experiments/tdiuc/pq_encoding_tdiuc.py
  4. Train REMIND

    • Edit data_path in vqa_experiments/configs/config_TDIUC_streaming.py
    • Run ./vqa_experiments/run_tdiuc_experiment.sh (Set DATA_ORDER to either qtype or iid to define the data order)

Citation

If using this code, please cite our paper.

@inproceedings{hayes2020remind,
  title={REMIND Your Neural Network to Prevent Catastrophic Forgetting},
  author={Hayes, Tyler L and Kafle, Kushal and Shrestha, Robik and Acharya, Manoj and Kanan, Christopher},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2020}
}