Semi-supervised semantic segmentation needs strong, varied perturbations

Overview

Semi-supervised semantic segmentation using CutMix and Colour Augmentation

Implementations of our papers: the perturbation-based approach named in the title above, and our follow-up work on colour augmentation (see below).

Licensed under the MIT license.

Colour augmentation

Please see our new paper for a full discussion, but a summary of our findings can be found in our [colour augmentation](Colour augmentation.ipynb) Jupyter notebook.

Requirements

We provide an environment.yml file that can be used to re-create a conda environment that provides the required packages:

conda env create -f environment.yml

Then activate with:

conda activate cutmix_semisup_seg

(note: this will not install the library needed to use the PSPNet architecture; see below)
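
As an optional sanity check (not part of the original instructions), you can confirm that the core packages resolve inside the activated environment:

python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__)"

This should report a PyTorch version of at least 1.4 and a torchvision version of 0.5.x, matching the requirements listed below.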

In general we need:

  • Python >= 3.6
  • PyTorch >= 1.4
  • torchvision 0.5
  • OpenCV
  • Pillow
  • Scikit-image
  • Scikit-learn
  • click
  • tqdm
  • Jupyter notebook for the notebooks
  • numpy 1.18
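
If you prefer pip over conda, the list above corresponds roughly to the following one-liner (an untested sketch; the conda environment above is the supported route, and opencv-python and notebook are assumed to be the PyPI names for the OpenCV and Jupyter entries):

pip install "torch>=1.4" torchvision==0.5.0 opencv-python pillow scikit-image scikit-learn click tqdm "numpy==1.18.*" notebook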

Requirements for PSPNet

To use the PSPNet architecture (see Pyramid Scene Parsing Network by Zhao et al.), you will need to install the logits-from_models branch of https://github.com/Britefury/semantic-segmentation-pytorch:

pip install git+https://github.com/Britefury/semantic-segmentation-pytorch@logits-from_models

Datasets

You need to:

  1. Download/acquire the datasets
  2. Write the config file semantic_segmentation.cfg giving their paths (an illustrative sketch follows below)
  3. Convert them if necessary; the CamVid, Cityscapes and ISIC 2017 datasets must be converted to a ZIP-based format prior to use. You must run the provided conversion utilities to create these ZIP files.

Dataset preparation instructions can be found here.
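
For orientation, semantic_segmentation.cfg (step 2 above) is a plain-text config file giving the dataset paths. The sketch below is purely illustrative (the INI-style layout, the section name and the key names are all assumptions); follow the dataset preparation instructions for the authoritative format:

; illustrative example only -- section and key names are assumptions
[paths]
camvid = /data/camvid.zip
cityscapes = /data/cityscapes.zip
pascal_voc = /data/VOCdevkit/VOC2012
isic2017 = /data/isic2017.zip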

Running the experiments

We provide four programs for running experiments:

  • train_seg_semisup_mask_mt.py: mask-driven consistency loss (the main experiment)
  • train_seg_semisup_aug_mt.py: augmentation-driven consistency loss; used to attempt to replicate the ISIC 2017 baselines of Li et al.
  • train_seg_semisup_ict.py: Interpolation Consistency Training; a baseline for contrast with our main approach
  • train_seg_semisup_vat_mt.py: Virtual Adversarial Training adapted for semantic segmentation

They can be configured via command line arguments that are described here.
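
The training programs appear to be click-based command-line tools (click is listed in the requirements above), in which case invoking any of them with --help should print its full set of options, e.g.:

> python train_seg_semisup_mask_mt.py --help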

Shell scripts

To replicate our results, we provide shell scripts to run our experiments.

Cityscapes
> sh run_cityscapes_experiments.sh <run> <split_rng_seed>

where <run> is the name of the run and <split_rng_seed> is an integer RNG seed used to select the supervised samples. Please see the comments at the top of run_cityscapes_experiments.sh for further explanation.

To re-create the 5 runs we used for our experiments:

> sh run_cityscapes_experiments.sh 01 12345
> sh run_cityscapes_experiments.sh 02 23456
> sh run_cityscapes_experiments.sh 03 34567
> sh run_cityscapes_experiments.sh 04 45678
> sh run_cityscapes_experiments.sh 05 56789

Pascal VOC 2012 (augmented)
> sh run_pascal_aug_experiments.sh <n_supervised> <n_supervised_txt>

where <n_supervised> is the number of supervised samples and <n_supervised_txt> is that number as text. Please see the comments at the top of run_pascal_aug_experiments.sh for further explanation.

We use the same data split as Mittal et al. It is stored in data/splits/pascal_aug/split_0.pkl, which is included in the repo.
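
For example, a hypothetical invocation using 212 supervised samples (the value is purely illustrative):

> sh run_pascal_aug_experiments.sh 212 212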

Pascal VOC 2012 (augmented) with DeepLab v3+
> sh run_pascal_aug_deeplab3plus_experiments.sh <n_supervised> <n_supervised_txt>

ISIC 2017 Segmentation
> sh run_isic2017_experiments.sh <run> <split_rng_seed>

where <run> is the name of the run and <split_rng_seed> is an integer RNG seed used to select the supervised samples. Please see the comments at the top of run_isic2017_experiments.sh for further explanation.

To re-create the 5 runs we used for our experiments:

> sh run_isic2017_experiments.sh 01 12345
> sh run_isic2017_experiments.sh 02 23456
> sh run_isic2017_experiments.sh 07 78901
> sh run_isic2017_experiments.sh 08 89012
> sh run_isic2017_experiments.sh 09 90123

In early experiments, we tested 10 seeds and selected the middle 5 when ranked by performance, hence the specific choice of seeds.

Exploring the input data distribution present in semantic segmentation problems

Cluster assumption

First, we examine the input data distribution presented by semantic segmentation problems, with a view to determining whether the low-density separation assumption holds, in the notebook Semantic segmentation input data distribution.ipynb. This notebook also contains the code used to generate the images from Figure 1 in the paper.

Inter-class and intra-class variance

Second, we examine the inter-class and intra-class distances (as a proxy for inter-class and intra-class variance) in the notebook Plot inter-class and intra-class distances from files.ipynb.

Note that running the second notebook requires that you generate some data files using the intra_inter_class_patch_dist.py program.

Toy 2D experiments

The toy 2D experiments used to produce Figure 3 in the paper can be run using the toy2d_train.py program, which is documented here.

You can re-create the toy 2D experiments by running the run_toy2d_experiments.sh shell script:

> sh run_toy2d_experiments.sh <run>
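
For example (the run name is arbitrary and purely illustrative):

> sh run_toy2d_experiments.sh 01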