[CVPR2022] Representation Compensation Networks for Continual Semantic Segmentation

Overview

RCIL

Chang-Bin Zhang¹, Jia-Wen Xiao¹, Xialei Liu¹, Ying-Cong Chen², Ming-Ming Cheng¹
¹ College of Computer Science, Nankai University
² The Hong Kong University of Science and Technology


Method

[Figure: method overview]

Update

  • Coming Soon: add data folder
  • Coming Soon: initial code for classification
  • Coming Soon: add training scripts for ADE20K and Cityscapes
  • 09/04/2022: initial code for segmentation
  • 09/04/2022: initial README

Benchmark and Setting

There are two commonly used settings: disjoint and overlapped. In the disjoint setting, we assume all future classes are known in advance, so images in the current training step contain no classes from future steps. The overlapped setting allows classes from future steps to appear in the current training images. We call each round of training on newly added data a step. Formally, X-Y denotes the continual setting in our experiments, where X is the number of classes trained in the first step and each subsequent step adds Y classes.

The settings reported in our paper are listed below; you can also try any other custom setting. A sketch after the list illustrates the X-Y notation.

  • Continual Class Segmentation:

    1. PASCAL VOC 2012 dataset:
      • 15-5 overlapped
      • 15-5 disjoint
      • 15-1 overlapped
      • 15-1 disjoint
      • 10-1 overlapped
      • 10-1 disjoint
    2. ADE20K dataset:
      • 100-50 overlapped
      • 100-10 overlapped
      • 50-50 overlapped
      • 100-5 overlapped
  • Continual Domain Segmentation:

    1. Cityscapes:
      • 11-5
      • 11-1
      • 1-1
  • Extension Experiments on Continual Classification

    1. ImageNet-100
      • 50-10
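
To make the X-Y notation concrete, the sketch below enumerates which class IDs are introduced at each step (a minimal illustration; the helper continual_steps is a hypothetical name, class 0 is the background, and class ordering follows the dataset's default label order):

    def continual_steps(num_classes: int, base: int, increment: int):
        """Enumerate the foreground class IDs introduced at each step of an X-Y setting."""
        steps = [list(range(1, base + 1))]  # step 0 trains the first X classes
        next_id = base + 1
        while next_id <= num_classes:       # each later step adds Y new classes
            steps.append(list(range(next_id, min(next_id + increment, num_classes + 1))))
            next_id += increment
        return steps

    # 15-1 on PASCAL VOC 2012 (20 foreground classes): 1 base step + 5 incremental steps
    for i, classes in enumerate(continual_steps(20, base=15, increment=1)):
        print(f"step {i}: classes {classes}")

The same split logic applies to both the disjoint and overlapped protocols; they differ only in which images are allowed to appear at each step.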

Performance

  • Continual Class Segmentation on PASCAL VOC 2012 (mIoU, %)

    | Method | Pub.       | 15-5 disjoint | 15-5 overlapped | 15-1 disjoint | 15-1 overlapped | 10-1 disjoint | 10-1 overlapped |
    | ------ | ---------- | ------------- | --------------- | ------------- | --------------- | ------------- | --------------- |
    | LWF    | TPAMI 2017 | 54.9          | 55.0            | 5.3           | 5.5             | 4.3           | 4.8             |
    | ILT    | ICCVW 2019 | 58.9          | 61.3            | 7.9           | 9.2             | 5.4           | 5.5             |
    | MiB    | CVPR 2020  | 65.9          | 70.0            | 39.9          | 32.2            | 6.9           | 20.1            |
    | SDR    | CVPR 2021  | 67.3          | 70.1            | 48.7          | 39.5            | 14.3          | 25.1            |
    | PLOP   | CVPR 2021  | 64.3          | 70.1            | 46.5          | 54.6            | 8.4           | 30.5            |
    | Ours   | CVPR 2022  | 67.3          | 72.4            | 54.7          | 59.4            | 18.2          | 34.3            |
  • Continual Class Segmentation on ADE20K (mIoU, %)

    | Method | Pub.       | 100-50 overlapped | 100-10 overlapped | 50-50 overlapped | 100-5 overlapped |
    | ------ | ---------- | ----------------- | ----------------- | ---------------- | ---------------- |
    | ILT    | ICCVW 2019 | 17.0              | 1.1               | 9.7              | 0.5              |
    | MiB    | CVPR 2020  | 32.8              | 29.2              | 29.3             | 25.9             |
    | PLOP   | CVPR 2021  | 32.9              | 31.6              | 30.4             | 28.7             |
    | Ours   | CVPR 2022  | 34.5              | 32.1              | 32.5             | 29.6             |
  • Continual Domain Segmentation on Cityscapes (mIoU, %)

    | Method | Pub.       | 11-5 | 11-1 | 1-1  |
    | ------ | ---------- | ---- | ---- | ---- |
    | LWF    | TPAMI 2017 | 59.7 | 57.3 | 33.0 |
    | LWF-MC | CVPR 2017  | 58.7 | 57.0 | 31.4 |
    | ILT    | ICCVW 2019 | 59.1 | 57.8 | 30.1 |
    | MiB    | CVPR 2020  | 61.5 | 60.0 | 42.2 |
    | PLOP   | CVPR 2021  | 63.5 | 62.1 | 45.2 |
    | Ours   | CVPR 2022  | 64.3 | 63.0 | 48.9 |

Dataset Preparation

  • PASCAL VOC 2012
    sh data/download_voc.sh
  • ADE20K
    sh data/download_ade.sh
  • Cityscapes
    sh data/download_cityscapes.sh

Environment

  1. conda install --yes --file requirements.txt
  2. Install inplace-abn
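
After step 2, a quick sanity check like the one below (a minimal sketch; it assumes a CUDA-capable machine and a successful inplace-abn install) confirms that the extension compiled correctly:

    import torch
    from inplace_abn import InPlaceABN  # in-place activated batch norm used by the backbone

    layer = InPlaceABN(16).cuda()                # a 16-channel InPlace-ABN layer
    x = torch.randn(2, 16, 8, 8, device="cuda")
    print(layer(x).shape)                        # expected: torch.Size([2, 16, 8, 8])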

Training

  1. Download the pretrained model from ResNet-101_iabn and put it in pretrained/.
  2. We have prepared some training scripts in scripts/. You can train the model with, e.g.:
sh scripts/voc/rcil_10-1-overlap.sh

Inference

You can run inference by simply adding --test to the bash script, like:

CUDA_VISIBLE_DEVICES=${GPU} python3 -m torch.distributed.launch --master_port ${PORT} --nproc_per_node=${NB_GPU} run.py --data xxx ... --test

Reference

If this work is useful to you, please cite:

@inproceedings{zhangCvpr22ContinuSSeg,
  title={Representation Compensation Networks for Continual Semantic Segmentation},
  author={Chang-Bin Zhang and Jiawen Xiao and Xialei Liu and Yingcong Chen and Ming-Ming Cheng},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
  year={2022}
}

Contact

If you have any questions about this work, please feel free to contact us (zhangchbin ^ gmail.com).

Thanks

This code is heavily borrowed from [MiB] and [PLOP].

Awesome Continual Segmentation

This is a collection of awesome resources on continual semantic segmentation, including papers, code, demos, etc. Feel free to open a pull request or star the repo.

2022

  • Representation Compensation Networks for Continual Semantic Segmentation [CVPR 2022] [PyTorch]
  • Self-training for Class-incremental Semantic Segmentation [TNNLS 2022] [PyTorch]
  • Uncertainty-aware Contrastive Distillation for Incremental Semantic Segmentation [TPAMI 2022] [PyTorch]

2021

  • PLOP: Learning without Forgetting for Continual Semantic Segmentation [CVPR 2021] [PyTorch]
  • Continual Semantic Segmentation via Repulsion-Attraction of Sparse and Disentangled Latent Representations [CVPR 2021] [PyTorch]
  • An EM Framework for Online Incremental Learning of Semantic Segmentation [ACM MM 2021] [PyTorch]
  • SSUL: Semantic Segmentation with Unknown Label for Exemplar-based Class-Incremental Learning [NeurIPS 2021] [PyTorch]


Comments
  • Reproduce ADE20k

    Hi, thanks for sharing the code.

    I'm trying to reproduce the results for 100-50 ADE20k. Here are the hyper-parameters I used: --pod local --pod_factor 0.001 --pod_logits --classif_adaptive_factor --init_balanced --unce --unkd

    I get the all-mIoU=29.4%, which is much lower than the reported mIoU (34.5%). Could you please share with me the parameters you used to get the reported mIoU?

    opened by HieuPhan33 10
  • 15-1 Pascal-VOC Reproduce

    Hi, I couldn't reproduce the results for 15-1 Pascal-VOC. I'm running the script voc/plop_15-1-overlap.sh. Since I have two GPUs with 24GB, I adjust the batch size to 12 and trained on 2 GPUs. This ensures the total batch size is 24 like your settings.

    Here are the results:

    |           | 0-15  | 16-20 | all   |
    | --------- | ----- | ----- | ----- |
    | Reproduce | 63.41 | 19.25 | 52.90 |
    | Reported  | 70.60 | 23.70 | 59.40 |

    The results are far lower than the results reported in the paper. Could you please advise?

    opened by HieuPhan33 6
  • Reproduced results lower than the reported ones

    Hi, I directly ran the released codes without any modification. However, I found that the obtained results are lower than the reported ones by >1 percent point, especially the 10-1 setting with a large gap on the base (0-10) classes.

    Relevant log files are provided for your reference. Could you advise the possible reasons that may cause such a problem? Thanks a lot.

    |            | 15-5 |       |      | 15-1 |       |      | 10-1 |       |      |
    | ---------- | ---- | ----- | ---- | ---- | ----- | ---- | ---- | ----- | ---- |
    |            | 0-15 | 16-20 | all  | 0-15 | 16-20 | all  | 0-10 | 11-20 | all  |
    | Reported   | 78.8 | 52.0  | 72.4 | 70.6 | 23.7  | 59.4 | 55.4 | 15.1  | 34.3 |
    | Reproduced | 76.7 | 48.4  | 70.0 | 69.0 | 20.5  | 57.4 | 38.0 | 13.4  | 26.3 |

    opened by Ze-Yang 3
  • Full results on Cityscapes

    Nice work! Could you publish the scripts and the corresponding results on Cityscapes? I failed to reproduce the experimental results reported in the paper. I set the batch size as 24. The initial learning rate is 0.02 for the first training step and 0.001 for the next continual learning steps. I train the model for each step with 50 epochs as the paper suggested.

    opened by XiaorongLi-95 4