PyTorch implementation of the REMIND method from our ECCV-2020 paper "REMIND Your Neural Network to Prevent Catastrophic Forgetting"

REMIND Your Neural Network to Prevent Catastrophic Forgetting

This is a PyTorch implementation of the REMIND algorithm from our ECCV-2020 paper. An arXiv pre-print of our paper is available.

REMIND (REplay using Memory INDexing) is a novel brain-inspired streaming learning model that uses tensor quantization to efficiently store hidden representations (e.g., CNN feature maps) for later replay. REMIND implements this compression using Product Quantization (PQ) and outperforms existing models on the ImageNet and CORe50 classification datasets. Further, we demonstrate REMIND's robustness by pioneering streaming Visual Question Answering (VQA), in which an agent must answer questions about images.

Formally, REMIND takes an input image and passes it through the frozen layers of the network to obtain tensor representations (feature maps). It then quantizes these tensors via PQ and stores the resulting indices in memory for replay. During training, the decoder reconstructs a subset of previously stored tensors from their indices, and these reconstructions are replayed to train the plastic layers of the network before inference. We restrict the size of REMIND's replay buffer and use a uniform random storage policy.

[Figure: overview of the REMIND architecture]
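
For intuition, the snippet below is a minimal sketch of the PQ encode/decode cycle using FAISS (which the code depends on); the dimensions, codebook count, and random features are illustrative placeholders, not the settings from the paper:

import numpy as np
import faiss

d = 512           # feature dimensionality (illustrative)
n_codebooks = 32  # number of PQ sub-quantizers (illustrative); must divide d
feats = np.random.rand(10000, d).astype('float32')  # stand-in for CNN feature vectors

pq = faiss.ProductQuantizer(d, n_codebooks, 8)  # 8 bits per sub-quantizer
pq.train(feats)                  # learn the codebooks
codes = pq.compute_codes(feats)  # compact uint8 indices kept in the replay buffer
recon = pq.decode(codes)         # approximate features reconstructed for replay
print(codes.shape, recon.shape)  # (10000, 32), (10000, 512)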

Dependencies

⚠️ ⚠️ For unknown reasons, our code does not reproduce results with PyTorch versions newer than 1.3.1. Please follow the instructions below to ensure reproducibility.

We have tested the code with the following packages and versions:

  • Python 3.7.6
  • PyTorch (GPU) 1.3.1
  • torchvision 0.4.2
  • NumPy 1.18.5
  • FAISS (CPU) 1.5.2
  • CUDA 10.1 (also works with CUDA 10.0)
  • scikit-learn 0.23.1
  • SciPy 1.1.0
  • NVIDIA GPU

We recommend setting up a conda environment with these same package versions:

conda create -n remind_proj python=3.7
conda activate remind_proj
conda install numpy=1.18.5
conda install pytorch=1.3.1 torchvision=0.4.2 cudatoolkit=10.1 -c pytorch
conda install faiss-cpu=1.5.2 -c pytorch
conda install scikit-learn=0.23.1 scipy  # tested with scikit-learn 0.23.1 and scipy 1.1.0
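
To verify the environment matches the tested versions, an optional sanity check in Python:

import numpy, torch, torchvision
import faiss                      # should import without error
print(torch.__version__)          # expect 1.3.1
print(torchvision.__version__)    # expect 0.4.2
print(numpy.__version__)          # expect 1.18.5
print(torch.cuda.is_available())  # expect True; an NVIDIA GPU is required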

Setup ImageNet-2012

The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset has 1000 categories and 1.2 million images. The images do not need to be preprocessed or packaged in any database, but the validation images need to be moved into class subfolders (handled in step 3 below).

  1. Download the images from http://image-net.org/download-images

  2. Extract the training data:

    mkdir train && mv ILSVRC2012_img_train.tar train/ && cd train
    tar -xvf ILSVRC2012_img_train.tar && rm -f ILSVRC2012_img_train.tar
    find . -name "*.tar" | while read NAME ; do mkdir -p "${NAME%.tar}"; tar -xvf "${NAME}" -C "${NAME%.tar}"; rm -f "${NAME}"; done
    cd ..
  3. Extract the validation data and move images to subfolders:

    mkdir val && mv ILSVRC2012_img_val.tar val/ && cd val && tar -xvf ILSVRC2012_img_val.tar
    wget -qO- https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh | bash
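
Assuming the steps above were run from the ImageNet root directory, the resulting layout can be sanity-checked with torchvision (an optional check, not part of the original pipeline):

from torchvision import datasets

train_set = datasets.ImageFolder('train')      # paths assume the layout created above
val_set = datasets.ImageFolder('val')
print(len(train_set.classes), len(train_set))  # expect 1000 classes, ~1.28M images
print(len(val_set.classes), len(val_set))      # expect 1000 classes, 50000 images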

Training REMIND on ImageNet (Classification)

We have provided the necessary files to train REMIND on the exact same ImageNet ordering used in our paper (provided in imagenet_class_order.txt). We also provide steps for running REMIND on an alternative ordering.

To train REMIND on the ImageNet ordering from our paper, follow the steps below:

  1. Run run_imagenet_experiment.sh to train REMIND on the ordering from our paper. Note: this will use our ordering and the associated files provided in imagenet_files.

To train REMIND on a different ImageNet ordering, follow the steps below:

  1. Generate a text file containing one class name per line in the desired order (a sketch follows this list).
  2. Run make_numpy_imagenet_label_files.py to generate the necessary numpy files for the desired ordering using the text file from step 1.
  3. Run train_base_init_network.sh to train an offline model using the desired ordering and label files generated in step 2 on the base init data.
  4. Run run_imagenet_experiment.sh using the label files from step 2 and the ckpt file from step 3 to train REMIND on the desired ordering.
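
For step 1, a minimal sketch that writes a random ordering, assuming classes are identified by the WordNet IDs that name the train/ subfolders (my_class_order.txt is a hypothetical output name):

import os
import random

classes = sorted(os.listdir('train'))  # e.g., n01440764, n01443537, ...
random.seed(0)                         # fix the shuffle for reproducibility
random.shuffle(classes)
with open('my_class_order.txt', 'w') as f:
    f.write('\n'.join(classes) + '\n')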

Files generated from the streaming experiment:

  • *.json files containing incremental top-1 and top-5 accuracies
  • *.pth files containing incremental model predictions/probabilities
  • *.pth files containing incremental REMIND classifier (F) weights
  • *.pkl files containing PQ centroids and incremental buffer data (e.g., latent codes)
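
A hedged sketch of inspecting these outputs (the file names here are hypothetical; the experiment scripts determine the actual names):

import json
import torch

with open('accuracies.json') as f:  # hypothetical file name
    accuracies = json.load(f)       # incremental top-1/top-5 accuracies
print(accuracies)

weights = torch.load('remind_classifier_F.pth', map_location='cpu')  # hypothetical file name
print(type(weights))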

To continue training REMIND from a previous ckpt:

We save incremental weights and associated data for REMIND after each evaluation cycle, which allows training to resume from these saved files (e.g., after a machine crash). To resume in run_imagenet_experiment.sh:

  1. Set the --resume_full_path argument to the path where the previous REMIND model was saved.
  2. Set the --streaming_min_class argument to the class REMIND left off on.
  3. Run run_imagenet_experiment.sh

Training REMIND on VQA Datasets

We use the gensen library for question features. Execute the following steps to set it up:

cd ${GENSENPATH}
git clone git@github.com:erobic/gensen.git
cd ${GENSENPATH}/data/embedding
chmod +x glove2h5.sh && ./glove2h5.sh
cd ${GENSENPATH}/data/models
chmod +x download_models.sh && ./download_models.sh

Training REMIND on CLEVR

Note: For convenience, we pre-extract all features, including the PQ-encoded features. This requires 140 GB of free space, assuming images are deleted after feature extraction.

  1. Download and extract CLEVR images+annotations:

    wget https://dl.fbaipublicfiles.com/clevr/CLEVR_v1.0.zip
    unzip CLEVR_v1.0.zip
  2. Extract question features

    • Set up the gensen repository and GloVe embeddings as described in the section above.

    • Edit vqa_experiments/clevr/extract_question_features_clevr.py, setting the DATA_PATH variable to the CLEVR dataset directory and GENSEN_PATH to the gensen repository, then extract the features: python vqa_experiments/clevr/extract_question_features_clevr.py

    • Pre-process the CLEVR questions: edit the $PATH variable in vqa_experiments/clevr/preprocess_clevr.py to point to the directory where CLEVR was extracted, then run python vqa_experiments/clevr/preprocess_clevr.py

  3. Extract image features, train PQ encoder and extract encoded features

    • Extract image features: python -u vqa_experiments/clevr/extract_image_features_clevr.py --path /path/to/CLEVR
    • In vqa_experiments/clevr/pq_encoding_clevr.py, set the values of PATH and streaming_type (either 'iid' or 'qtype')
    • Train PQ encoder and extract features: python vqa_experiments/clevr/pq_encoding_clevr.py
  4. Train REMIND

    • Edit data_path in vqa_experiments/configs/config_CLEVR_streaming.py
    • Run ./vqa_experiments/run_clevr_experiment.sh (Set DATA_ORDER to either qtype or iid to define the data order)

Training REMIND on TDIUC

Note: For convenience, we pre-extract all features, including the PQ-encoded features. This requires around 170 GB of free space, assuming images are deleted after feature extraction.

  1. Download TDIUC

    cd ${TDIUC_PATH}
    wget https://kushalkafle.com/data/TDIUC.zip && unzip TDIUC.zip
    cd TDIUC && python setup.py --download Y  # the setup script uses Python 2 syntax; you may need to change print '' statements to print('')
    
  2. Extract question features

    • Edit vqa_experiments/tdiuc/extract_question_features_tdiuc.py, setting the DATA_PATH variable to the TDIUC dataset directory and GENSEN_PATH to the gensen repository, then extract the features: python vqa_experiments/tdiuc/extract_question_features_tdiuc.py

    • Pre-process the TDIUC questions: edit the $PATH variable in vqa_experiments/tdiuc/preprocess_tdiuc.py to point to the directory where TDIUC was extracted, then run python vqa_experiments/tdiuc/preprocess_tdiuc.py

  3. Extract image features, train PQ encoder and extract encoded features

    • Extract image features: python -u vqa_experiments/tdiuc/extract_image_features_tdiuc.py --path /path/to/TDIUC
    • In vqa_experiments/tdiuc/pq_encoding_tdiuc.py, set the values of PATH and streaming_type (either 'iid' or 'qtype')
    • Train PQ encoder and extract features: python vqa_experiments/tdiuc/pq_encoding_tdiuc.py
  4. Train REMIND

    • Edit data_path in vqa_experiments/configs/config_TDIUC_streaming.py
    • Run ./vqa_experiments/run_tdiuc_experiment.sh (Set DATA_ORDER to either qtype or iid to define the data order)

Citation

If using this code, please cite our paper.

@inproceedings{hayes2020remind,
  title={REMIND Your Neural Network to Prevent Catastrophic Forgetting},
  author={Hayes, Tyler L and Kafle, Kushal and Shrestha, Robik and Acharya, Manoj and Kanan, Christopher},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2020}
}