3rd Place Solution for ICCV 2021 Workshop SSLAD Track 3A - Continual Learning Classification Challenge

Overview

Online Continual Learning via Multiple Deep Metric Learning and Uncertainty-guided Episodic Memory Replay

3rd Place Solution for ICCV 2021 Workshop SSLAD Track 3A - Continual Learning Classification

Technical Report | Slides | Video

Description

Official implementation of our 3rd-place solution for the ICCV 2021 Workshop on Self-supervised Learning for Next-Generation Industry-level Autonomous Driving (SSLAD), Track 3A - Continual Learning Classification: "Online Continual Learning via Multiple Deep Metric Learning and Uncertainty-guided Episodic Memory Replay".
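For intuition, the sketch below illustrates the uncertainty-guided episodic memory replay idea: the memory keeps the samples the model is most uncertain about (scored here by softmax entropy) so they can be replayed alongside new data. The class and method names (UncertaintyReplayBuffer, entropy_score), the entropy-based scoring, and the buffer policy are illustrative assumptions, not the exact implementation in this repository.

# Illustrative sketch (not this repository's exact code): an episodic memory
# that keeps the samples with the highest predictive uncertainty (entropy).
import torch
import torch.nn.functional as F

class UncertaintyReplayBuffer:
    """Fixed-size episodic memory filled with the most uncertain samples."""

    def __init__(self, capacity: int = 1000):
        self.capacity = capacity
        self.images, self.labels, self.scores = [], [], []

    @staticmethod
    def entropy_score(logits: torch.Tensor) -> torch.Tensor:
        # Predictive entropy of the softmax distribution, one score per sample.
        probs = F.softmax(logits, dim=1)
        return -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

    @torch.no_grad()
    def update(self, images, labels, logits):
        # Score the incoming batch, merge it with the current memory,
        # then keep only the `capacity` most uncertain samples.
        self.images.extend(images.cpu())
        self.labels.extend(labels.cpu())
        self.scores.extend(self.entropy_score(logits).cpu())
        order = torch.argsort(torch.stack(self.scores), descending=True)[: self.capacity]
        self.images = [self.images[i] for i in order]
        self.labels = [self.labels[i] for i in order]
        self.scores = [self.scores[i] for i in order]

    def sample(self, batch_size: int):
        # Draw a random replay batch to mix with the current task's batch.
        idx = torch.randperm(len(self.images))[:batch_size]
        return (torch.stack([self.images[i] for i in idx]),
                torch.stack([self.labels[i] for i in idx]))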

How to run

First, install the dependencies:

# clone project   
git clone https://github.com/mrifkikurniawan/sslad.git

# install project   
cd sslad 
pip3 install -r requirements.txt   

Next, prepare the dataset via the links below.

Next, run training.

# run training module with our proposed cl strategy
python3.9 classification.py \
--config configs/cl_strategy.yaml \
--name {path/to/log} \
--root {root/of/your/dataset} \
--num_workers {num workers} \
--gpu_id {your-gpu-id} \
--comment {any-comments} \
--store

or see train.sh for an example.

Experimental Results

| Method | Val AMCA | Test AMCA |
|---|---|---|
| Baseline (Uncertainty Replay)* | 57.517 | - |
| + Multi-step LR Scheduler* | 59.591 (+2.07) | - |
| + Soft Labels Retrospection* | 59.825 (+0.23) | - |
| + Contrastive Learning* | 60.363 (+0.53) | 59.68 |
| + Supervised Contrastive Learning* | 61.49 (+1.13) | - |
| + Change backbone to ResNet50-D* | 62.514 (+1.02) | - |
| + Focal Loss* | 62.71 (+0.19) | - |
| + Cost-Sensitive Cross Entropy | 63.33 (+0.62) | - |
| + Class-Balanced Focal Loss* | 64.01 (+1.03) | 64.53 (+4.85) |
| + Head Fine-tuning with Class-Balanced Replay | 65.291 (+1.28) | 62.58 (-1.56) |
| + Head Fine-tuning with Soft Labels Retrospection | 66.116 (+0.83) | 62.97 (+0.39) |

*Applied to our final method.
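Several of the ablation steps above only change the classification loss. As a reference for the last starred loss component, the snippet below sketches a class-balanced focal loss in the spirit of Cui et al. (2019), where the focal term is re-weighted by the effective number of samples per class. The hyperparameters (beta, gamma) and the normalization are illustrative defaults and may differ from the implementation used for the results above.

# Illustrative sketch of a class-balanced focal loss (Cui et al., 2019);
# hyperparameters and details may differ from this repository's implementation.
import torch
import torch.nn.functional as F

def class_balanced_focal_loss(logits, targets, samples_per_class,
                              beta: float = 0.9999, gamma: float = 2.0):
    # Effective number of samples per class: (1 - beta^n_c) / (1 - beta).
    counts = torch.as_tensor(samples_per_class, dtype=torch.float, device=logits.device)
    effective_num = 1.0 - torch.pow(beta, counts)
    weights = (1.0 - beta) / effective_num
    weights = weights / weights.sum() * len(counts)  # normalize to num_classes

    # Standard focal term: (1 - p_t)^gamma * CE, with p_t the true-class probability.
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_t = torch.exp(-ce)
    focal = (1.0 - p_t) ** gamma * ce

    # Weight each sample by its class's balancing factor.
    return (weights[targets] * focal).mean()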

File overview

classification.py: Driver code for the classification subtrack. A few things can be changed here, such as the model, optimizer, and loss criterion, and several arguments can be set to store results, etc. (run classification.py --help for an overview, or check the file).

class_strategy.py: Provides an empty plugin in which you can define your own strategy by implementing the necessary callbacks. Helper methods and classes can, of course, be added as needed. See here for examples of strategy plugins.
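As a minimal sketch, assuming the Avalanche plugin interface used by the challenge starter code (StrategyPlugin in the 2021-era releases; newer Avalanche versions expose the same callbacks via SupervisedPlugin), a custom strategy plugin could look like the following. The callback bodies are placeholders, not this repository's actual strategy.

# Minimal sketch of a custom strategy plugin, assuming Avalanche's plugin
# interface; the callback bodies below are placeholders.
from avalanche.training.plugins import StrategyPlugin

class MyStrategyPlugin(StrategyPlugin):
    """Empty strategy: implement only the callbacks you actually need."""

    def before_training_exp(self, strategy, **kwargs):
        # Called before training on each new experience (chunk of the stream),
        # e.g. to set up or refresh a replay buffer.
        pass

    def after_training_iteration(self, strategy, **kwargs):
        # Called after every training iteration; strategy.mb_x, strategy.mb_y,
        # and strategy.mb_output hold the current mini-batch and model output.
        pass

    def after_training_exp(self, strategy, **kwargs):
        # Called after finishing an experience, e.g. to consolidate the memory.
        pass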

data_intro.ipynb: In this notebook the stream of data is further introduced and explained. Feel free to experiment with the dataset to get a good feeling of the challenge.

Note: not all callbacks have to be implemented; you can simply delete those you don't need.

classification_util.py & haitain_classification.py: These files contain helper code for data loading, etc. There should be no reason to change these.

Owner
Rifki Kurniawan
MS student at Xi'an Jiaotong University; Artificial Intelligence Engineer at Nodeflux