[CVPR2022] Representation Compensation Networks for Continual Semantic Segmentation

Overview

RCIL

Chang-Bin Zhang1, Jia-Wen Xiao1, Xialei Liu1, Ying-Cong Chen2, Ming-Ming Cheng1
1 College of Computer Science, Nankai University
2 The Hong Kong University of Science and Technology

Conference Paper


Method

(Figure: method overview.)

Update

  • Coming Soon: add data folder
  • Coming Soon: init code for classification
  • Coming Soon: add training scripts for ADE20K and Cityscapes
  • 09/04/2022: init code for segmentation
  • 09/04/2022: init README

Benchmark and Setting

There are two commonly used settings, disjoint and overlapped. In the disjoint setting, we assume all future classes are known in advance, and the images in the current training step do not contain any future classes. The overlapped setting allows future classes to appear in the current training images. We call each round of training on a newly added dataset a step. Formally, X-Y denotes the continual setting in our experiments, where X is the number of classes trained in the first step and each subsequent step adds a dataset containing Y new classes.
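
To make the X-Y notation concrete, the small sketch below (illustrative only, not code from this repository) shows how the 20 PASCAL VOC foreground classes are grouped into steps under, for example, the 15-1 setting. Note that the disjoint/overlapped distinction concerns which images are seen at each step, not the class split itself.

# Illustrative helper, not part of this repo: partition class ids into steps
# for an X-Y continual setting.
def class_splits(num_classes, base, increment):
    classes = list(range(1, num_classes + 1))        # class 0 is background
    splits = [classes[:base]]                        # first step: X base classes
    for start in range(base, num_classes, increment):
        splits.append(classes[start:start + increment])  # Y new classes per step
    return splits

# 15-1 on PASCAL VOC (20 foreground classes): one base step + five incremental steps
print(class_splits(20, 15, 1))
# -> [[1, 2, ..., 15], [16], [17], [18], [19], [20]]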

The settings reported in our paper are listed below. You can also try other custom settings.

  • Continual Class Segmentation:

    1. PASCAL VOC 2012 dataset:
      • 15-5 overlapped
      • 15-5 disjoint
      • 15-1 overlapped
      • 15-1 disjoint
      • 10-1 overlapped
      • 10-1 disjoint
    2. ADE20K dataset:
      • 100-50 overlapped
      • 100-10 overlapped
      • 50-50 overlapped
      • 100-5 overlapped
  • Continual Domain Segmentation:

    1. Cityscapes:
      • 11-5
      • 11-1
      • 1-1
  • Extension Experiments on Continual Classification

    1. ImageNet-100
      • 50-10

Performance

  • Continual Class Segmentation on PASCAL VOC 2012
| Method | Pub.       | 15-5 disjoint | 15-5 overlapped | 15-1 disjoint | 15-1 overlapped | 10-1 disjoint | 10-1 overlapped |
|--------|------------|---------------|-----------------|---------------|-----------------|---------------|-----------------|
| LWF    | TPAMI 2017 | 54.9          | 55.0            | 5.3           | 5.5             | 4.3           | 4.8             |
| ILT    | ICCVW 2019 | 58.9          | 61.3            | 7.9           | 9.2             | 5.4           | 5.5             |
| MiB    | CVPR 2020  | 65.9          | 70.0            | 39.9          | 32.2            | 6.9           | 20.1            |
| SDR    | CVPR 2021  | 67.3          | 70.1            | 48.7          | 39.5            | 14.3          | 25.1            |
| PLOP   | CVPR 2021  | 64.3          | 70.1            | 46.5          | 54.6            | 8.4           | 30.5            |
| Ours   | CVPR 2022  | 67.3          | 72.4            | 54.7          | 59.4            | 18.2          | 34.3            |
  • Continual Class Segmentation on ADE20K
| Method | Pub.       | 100-50 overlapped | 100-10 overlapped | 50-50 overlapped | 100-5 overlapped |
|--------|------------|-------------------|-------------------|------------------|------------------|
| ILT    | ICCVW 2019 | 17.0              | 1.1               | 9.7              | 0.5              |
| MiB    | CVPR 2020  | 32.8              | 29.2              | 29.3             | 25.9             |
| PLOP   | CVPR 2021  | 32.9              | 31.6              | 30.4             | 28.7             |
| Ours   | CVPR 2022  | 34.5              | 32.1              | 32.5             | 29.6             |
  • Continual Domain Segmentation on Cityscapes
| Method | Pub.       | 11-5 | 11-1 | 1-1  |
|--------|------------|------|------|------|
| LWF    | TPAMI 2017 | 59.7 | 57.3 | 33.0 |
| LWF-MC | CVPR 2017  | 58.7 | 57.0 | 31.4 |
| ILT    | ICCVW 2019 | 59.1 | 57.8 | 30.1 |
| MiB    | CVPR 2020  | 61.5 | 60.0 | 42.2 |
| PLOP   | CVPR 2021  | 63.5 | 62.1 | 45.2 |
| Ours   | CVPR 2022  | 64.3 | 63.0 | 48.9 |

Dataset Prepare

  • PASCAL VOC 2012
    sh data/download_voc.sh
  • ADE20K
    sh data/download_ade.sh
  • Cityscapes
    sh data/download_cityscapes.sh

Environment

  1. conda install --yes --file requirements.txt
  2. Install inplace-abn
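
As a quick post-install sanity check, something like the following minimal sketch (it only assumes that PyTorch and the inplace-abn package were installed successfully) should run without errors:

# Verify that the key dependencies import and that CUDA is visible to PyTorch.
import torch
from inplace_abn import InPlaceABN  # provided by the inplace-abn package (step 2 above)

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print(InPlaceABN(64))  # a single in-place activated batch-norm layer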

Training

  1. Download the pretrained model from ResNet-101_iabn and put it in pretrained/
  2. We have prepared some training scripts in scripts/. For example, you can train the model by running:
sh scripts/voc/rcil_10-1-overlap.sh

Inference

You can run inference by simply adding --test to the bash command, like:

CUDA_VISIBLE_DEVICES=${GPU} python3 -m torch.distributed.launch --master_port ${PORT} --nproc_per_node=${NB_GPU} run.py --data xxx ... --test

Reference

If you find this work useful, please cite us:

@inproceedings{zhangCvpr22ContinuSSeg,
  title={Representation Compensation Networks for Continual Semantic Segmentation},
  author={Chang-Bin Zhang and Jiawen Xiao and Xialei Liu and Yingcong Chen and Ming-Ming Cheng},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
  year={2022}
}

Connect

If you have any questions about this work, please feel free to contact us (zhangchbin ^ gmail.com).

Thanks

This code borrows heavily from [MiB] and [PLOP].

Awesome Continual Segmentation

This is a collection of AWESOME resources on continual semantic segmentation, including papers, code, demos, etc. Feel free to open a pull request and star.

2022

  • Representation Compensation Networks for Continual Semantic Segmentation [CVPR 2022] [PyTorch]
  • Self-training for Class-incremental Semantic Segmentation [TNNLS 2022] [PyTorch]
  • Uncertainty-aware Contrastive Distillation for Incremental Semantic Segmentation [TPAMI 2022] [PyTorch]

2021

  • PLOP: Learning without Forgetting for Continual Semantic Segmentation [CVPR 2021] [PyTorch]
  • Continual Semantic Segmentation via Repulsion-Attraction of Sparse and Disentangled Latent Representations [CVPR2021] [PyTorch]
  • An EM Framework for Online Incremental Learning of Semantic Segmentation [ACM MM 2021] [PyTorch]
  • SSUL: Semantic Segmentation with Unknown Label for Exemplar-based Class-Incremental Learning [NeurIPS 2021] [PyTorch]

2020

2019

Comments
  • Reproduce ADE20k

    Hi, thanks for sharing the code.

    I'm trying to reproduce the results for 100-50 ADE20k. Here are the hyper-parameters I used: --pod local --pod_factor 0.001 --pod_logits --classif_adaptive_factor --init_balanced --unce --unkd

    I get the all-mIoU=29.4%, which is much lower than the reported mIoU (34.5%). Could you please share with me the parameters you used to get the reported mIoU?

    opened by HieuPhan33 10
  • 15-1 Pascal-VOC Reproduce

    Hi, I couldn't reproduce the results for 15-1 Pascal-VOC. I'm running the script voc/plop_15-1-overlap.sh. Since I have two GPUs with 24 GB, I adjusted the batch size to 12 and trained on 2 GPUs, so the total batch size is 24, matching your settings.

    Here are the results:

    |           | 0-15  | 16-20 | all   |
    |-----------|-------|-------|-------|
    | Reproduce | 63.41 | 19.25 | 52.90 |
    | Reported  | 70.60 | 23.70 | 59.40 |

    The results are far lower than the results reported in the paper. Could you please advise?

    opened by HieuPhan33 6
  • Reproduced results lower than the reported ones

    Hi, I directly ran the released code without any modification. However, the results I obtained are more than 1 percentage point lower than the reported ones, especially in the 10-1 setting, where there is a large gap on the base (0-10) classes.

    Relevant log files are provided for your reference. Could you advise the possible reasons that may cause such a problem? Thanks a lot.

    |            | 15-5 |       |      | 15-1 |       |      | 10-1 |       |      |
    |------------|------|-------|------|------|-------|------|------|-------|------|
    |            | 0-15 | 16-20 | all  | 0-15 | 16-20 | all  | 0-10 | 11-20 | all  |
    | Reported   | 78.8 | 52.0  | 72.4 | 70.6 | 23.7  | 59.4 | 55.4 | 15.1  | 34.3 |
    | Reproduced | 76.7 | 48.4  | 70.0 | 69.0 | 20.5  | 57.4 | 38.0 | 13.4  | 26.3 |

    opened by Ze-Yang 3
  • Full results on Cityscapes

    Nice work! Could you publish the scripts and the corresponding results on Cityscapes? I failed to reproduce the experimental results reported in the paper. I set the batch size to 24, used an initial learning rate of 0.02 for the first training step and 0.001 for the subsequent continual learning steps, and trained the model for 50 epochs per step as the paper suggests.

    opened by XiaorongLi-95 4