The official homepage of the COCO-Stuff dataset.

The COCO-Stuff dataset

Holger Caesar, Jasper Uijlings, Vittorio Ferrari

COCO-Stuff example annotations

Welcome to the official homepage of the COCO-Stuff [1] dataset. COCO-Stuff augments all 164K images of the popular COCO [2] dataset with pixel-level stuff annotations. These annotations can be used for scene understanding tasks such as semantic segmentation, object detection and image captioning.

Overview

Highlights

  • 164K complex images from COCO [2]
  • Dense pixel-level annotations
  • 80 thing classes, 91 stuff classes and 1 class 'unlabeled'
  • Instance-level annotations for things from COCO [2]
  • Complex spatial context between stuff and things
  • 5 captions per image from COCO [2]

Research Paper

COCO-Stuff: Thing and Stuff Classes in Context
H. Caesar, J. Uijlings, V. Ferrari,
In Computer Vision and Pattern Recognition (CVPR), 2018.
[paper][bibtex]

Versions of COCO-Stuff

  • COCO-Stuff dataset: The final version of COCO-Stuff, presented on this page. It includes all 164K images from COCO 2017 (train 118K, val 5K, test-dev 20K, test-challenge 20K) and covers 172 classes: 80 thing classes, 91 stuff classes and 1 class 'unlabeled'. This dataset will form the basis of all upcoming challenges.
  • COCO 2017 Stuff Segmentation Challenge: A semantic segmentation challenge on 55K images (train 40K, val 5K, test-dev 5K, test-challenge 5K) of COCO. To focus on stuff, we merged all 80 thing classes into a single class 'other'. The results of the challenge were presented at the Joint COCO and Places Recognition Workshop at ICCV 2017.
  • COCO-Stuff 10K dataset: Our first dataset, annotated by 10 in-house annotators at the University of Edinburgh. It includes 10K images from the training set of COCO, with a 9K/1K (train/val) split to make results comparable. The dataset includes 80 thing classes, 91 stuff classes and 1 class 'unlabeled'. It was initially presented with 91 thing classes, but this was reduced to 80, as 11 thing classes have no segmentation annotations in COCO. This dataset is a subset of all other releases.

Downloads

| Filename | Description | Size |
|---|---|---|
| train2017.zip | COCO 2017 train images (118K images) | 18 GB |
| val2017.zip | COCO 2017 val images (5K images) | 1 GB |
| stuffthingmaps_trainval2017.zip | Stuff+thing PNG-style annotations on COCO 2017 trainval | 659 MB |
| stuff_trainval2017.zip | Stuff-only COCO-style annotations on COCO 2017 trainval | 543 MB |
| annotations_trainval2017.zip | Thing-only COCO-style annotations on COCO 2017 trainval | 241 MB |
| labels.md | Indices, names, previews and descriptions of the classes in COCO-Stuff | <10 KB |
| labels.txt | Machine-readable version of the label list | <10 KB |
| README.md | This readme | <10 KB |

To use this dataset you will need to download the images (18+1 GB!) and annotations of the trainval sets. To download earlier versions of this dataset, please visit the COCO 2017 Stuff Segmentation Challenge or COCO-Stuff 10K.

Caffe-compatible stuff-thing maps We suggest using the stuffthingmaps, as they provide all stuff and thing labels in a single .png file per image. Note that the .png files are indexed images, which means they store only the label indices and are typically displayed as grayscale images. To be compatible with most Caffe-based semantic segmentation methods, thing+stuff labels cover indices 0-181 and 255 indicates the 'unlabeled' or void class.
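
As a quick illustration, here is a minimal Python sketch that loads one of these label maps with Pillow and NumPy (the file name is a placeholder; substitute any COCO 2017 image ID):

# Minimal sketch: read a stuff+thing label map (indexed PNG).
# The file name is a placeholder; use any COCO 2017 image ID.
import numpy as np
from PIL import Image

label_map = np.array(Image.open('dataset/annotations/val2017/000000000139.png'))

print(label_map.shape)       # (height, width), one label index per pixel
print(np.unique(label_map))  # indices 0-181 for thing+stuff, 255 for 'unlabeled'
valid = label_map != 255     # mask out void pixels before training or evaluation
print(valid.mean())          # fraction of labeled pixels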

Separate stuff and thing downloads Alternatively you can download the separate files for stuff and thing annotations in COCO format, which are compatible with the COCO-Stuff API. Note that the stuff annotations contain a class 'other' with index 183 that covers all non-stuff pixels.
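
For example, the pycocotools COCO API can load these files directly. The sketch below assumes the stuff zip unpacks to a stuff_val2017.json file (the exact file name inside the archive is an assumption):

# Minimal sketch: load COCO-style stuff annotations via the pycocotools API.
# Assumes stuff_trainval2017.zip was unpacked into dataset/annotations.
from pycocotools.coco import COCO

coco = COCO('dataset/annotations/stuff_val2017.json')

img_id = coco.getImgIds()[0]    # pick an arbitrary image
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))

for ann in anns:
    mask = coco.annToMask(ann)  # binary mask for this stuff region
    name = coco.loadCats(ann['category_id'])[0]['name']
    print(name, ann['category_id'], int(mask.sum()))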

Setup

Use the following instructions to download the COCO-Stuff dataset and set up the folder structure. The instructions are for Ubuntu and require git, wget and unzip. On other operating systems the commands may differ:

# Get this repo
git clone https://github.com/nightrome/cocostuff.git
cd cocostuff

# Download everything
wget --directory-prefix=downloads http://images.cocodataset.org/zips/train2017.zip
wget --directory-prefix=downloads http://images.cocodataset.org/zips/val2017.zip
wget --directory-prefix=downloads http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/stuffthingmaps_trainval2017.zip

# Unpack everything
mkdir -p dataset/images
mkdir -p dataset/annotations
unzip downloads/train2017.zip -d dataset/images/
unzip downloads/val2017.zip -d dataset/images/
unzip downloads/stuffthingmaps_trainval2017.zip -d dataset/annotations/

Results

Below we present results on different releases of COCO-Stuff. If you would like to see your results here, please contact the first author.

Results on the val set of COCO-Stuff:

| Method | Source | Class accuracy | Pixel accuracy | Mean IOU | FW IOU |
|---|---|---|---|---|---|
| Deeplab VGG-16 (no CRF) [4] | [1] | 45.1% | 63.6% | 33.2% | 47.6% |

Note that the results on the 10K dataset and the full dataset are not directly comparable, as different train and val images are used. Furthermore, on the full dataset we train Deeplab for 100K iterations [1], compared to 20K iterations on the 10K dataset [1b].
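
For reference, all four reported metrics can be computed from a pixel-level confusion matrix. The sketch below is a plain NumPy illustration of the formulas, not the official evaluation code:

# Minimal sketch: segmentation metrics from a confusion matrix C, where
# C[i, j] counts pixels of ground-truth class i predicted as class j.
# Classes absent from the ground truth would need to be excluded first.
import numpy as np

def segmentation_metrics(C):
    tp = np.diag(C).astype(float)       # correctly classified pixels per class
    gt = C.sum(axis=1).astype(float)    # ground-truth pixels per class
    pred = C.sum(axis=0).astype(float)  # predicted pixels per class
    iou = tp / (gt + pred - tp)         # per-class intersection over union
    freq = gt / gt.sum()                # ground-truth class frequencies
    return {
        'class accuracy': np.mean(tp / gt),
        'pixel accuracy': tp.sum() / C.sum(),
        'mean IOU': np.mean(iou),
        'FW IOU': np.sum(freq * iou),   # frequency-weighted IOU
    }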

Results on the val set of the COCO 2017 Stuff Segmentation Challenge:

We show results on the val set of the challenge. Please refer to the official leaderboard for results on the test-dev and test-challenge sets. Note that these results are not comparable to other COCO-Stuff results, as the challenge only includes a single thing class 'other'.

| Method | Source | Class accuracy | Pixel accuracy | Mean IOU | FW IOU |
|---|---|---|---|---|---|
| Inplace-ABN sync | [8] | - | - | 24.9% | - |

Results on the val set of COCO-Stuff 10K:

| Method | Source | Class accuracy | Pixel accuracy | Mean IOU | FW IOU |
|---|---|---|---|---|---|
| FCN-16s [3] | [1b] | 34.0% | 52.0% | 22.7% | - |
| Deeplab VGG-16 (no CRF) [4] | [1b] | 38.1% | 57.8% | 26.9% | - |
| FCN-8s [3] | [6] | 38.5% | 60.4% | 27.2% | - |
| SCA VGG-16 [7] | [7] | 42.5% | 61.6% | 29.1% | - |
| DAG-RNN + CRF [6] | [6] | 42.8% | 63.0% | 31.2% | - |
| DC + FCN+ [5] | [5] | 44.6% | 65.5% | 33.6% | 50.6% |
| Deeplab ResNet (no CRF) [4] | - | 45.5% | 65.1% | 34.4% | 50.4% |
| CCL ResNet-101 [10] | [10] | 48.8% | 66.3% | 35.7% | - |
| DSSPN ResNet finetune [9] | [9] | 48.1% | 69.4% | 37.3% | - |
| * OHE + DC + FCN+ [5] | [5] | 45.8% | 66.6% | 34.3% | 51.2% |
| * W2V + DC + FCN+ [5] | [5] | 45.1% | 66.1% | 34.7% | 51.0% |
| * DSSPN ResNet universal [9] | [9] | 50.3% | 70.7% | 38.9% | - |

* Results not comparable as they use external data

Labels

Label Names & Indices

To be compatible with COCO, COCO-Stuff has 91 thing classes (1-91), 91 stuff classes (92-182) and 1 class "unlabeled" (0). Note that 11 of the thing classes of COCO do not have any segmentation annotations (blender, desk, door, eye glasses, hair brush, hat, mirror, plate, shoe, street sign, window). The classes desk, door, mirror and window could be either stuff or things and therefore occur in both COCO and COCO-Stuff. To avoid confusion we add the suffix "-stuff" or "-other" to those classes in COCO-Stuff. The full list of classes and their descriptions can be found in labels.md.
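
A minimal sketch for reading that list in Python, assuming each line of labels.txt has the form 'index: name' (the exact format is an assumption):

# Minimal sketch: parse labels.txt into an index -> name dictionary.
# Assumes each line has the form 'index: name', e.g. '0: unlabeled'.
id_to_name = {}
with open('labels.txt') as f:
    for line in f:
        if line.strip():
            idx, name = line.strip().split(': ', 1)
            id_to_name[int(idx)] = name

print(id_to_name.get(0))  # 'unlabeled'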

Label Hierarchy

This figure shows the label hierarchy of COCO-Stuff including all stuff and thing classes: COCO-Stuff label hierarchy

Semantic Segmentation Models (stuff+things)

PyTorch model

We recommend this third-party re-implementation of Deeplab v2 in PyTorch. Unlike our Caffe model, it supports ResNet backbones and CRFs. The authors provide setup routines and models for COCO-Stuff 164K. Please file any issues or questions on the project's GitHub page.

Caffe model

Here we provide the Caffe-based segmentation model used in the COCO-Stuff paper. For users not familiar with Caffe, we recommend the PyTorch model above. Before using the semantic segmentation model, please set up the dataset as described above. The commands below download and install Deeplab (incl. Caffe), download or train the model, obtain the predictions, and evaluate the performance. The results should be the same as in the table. Due to several issues, we do not provide the Deeplab ResNet-101 model, but some code for it can be found in this folder.

# Get and install Deeplab (you may need to change settings)
# We use a special version of Deeplab v2 that supports CuDNN v5, but others may work as well.
git submodule update --init models/deeplab/deeplab-v2
cd models/deeplab/deeplab-v2
cp Makefile.config.example Makefile.config
make all -j8

# Create symbolic links to the images and annotations
cd models/deeplab/cocostuff/data && ln -s ../../../../dataset/images images && ln -s ../../../../dataset/annotations annotations && cd ../../../..

# Option 1: Download the initial model
# wget --directory-prefix=models/deeplab/cocostuff/model/deeplabv2_vgg16 http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/deeplabv2_vgg16_init.caffemodel

# Option 2: Download the trained model
# wget --directory-prefix=downloads http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/deeplab_cocostuff_trainedmodel.zip
# unzip downloads/deeplab_cocostuff_trainedmodel.zip -d models/deeplab/cocostuff/model/deeplabv2_vgg16/model120kimages/

# Option 3: Run training & test
# cd models/deeplab && ./run_cocostuff_vgg16.sh && cd ../..

# Option 4 (fastest): Download predictions
wget --directory-prefix=downloads http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/deeplab_predictions_cocostuff_val2017.zip
unzip downloads/deeplab_predictions_cocostuff_val2017.zip -d models/deeplab/cocostuff/features/deeplabv2_vgg16/model120kimages/val/fc8/

# Evaluate performance
python models/deeplab/evaluate_performance.py

The table below summarizes the files used in these instructions:

| Filename | Description | Size |
|---|---|---|
| deeplabv2_vgg16_init.caffemodel | Deeplab VGG-16 pretrained model (original link) | 152 MB |
| deeplab_cocostuff_trainedmodel.zip | Deeplab VGG-16 trained on COCO-Stuff | 286 MB |
| deeplab_predictions_cocostuff_val2017.zip | Deeplab VGG-16 predictions on COCO-Stuff | 54 MB |

Note that the Deeplab predictions need to be rotated and cropped, as shown in this script.
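
The sketch below illustrates that post-processing in Python; the .mat field name, axis layout and image size are assumptions, and the linked script remains authoritative:

# Minimal sketch of the rotate-and-crop post-processing, not the linked script.
# Assumes each prediction is a .mat file whose 'data' field holds raw fc8
# scores in Caffe's (width, height, classes) layout.
import numpy as np
import scipy.io

scores = np.squeeze(scipy.io.loadmat('fc8/000000000139.mat')['data'])
scores = scores.transpose(1, 0, 2)  # 'rotate': swap the width/height axes
labels = np.argmax(scores, axis=2)  # per-pixel argmax over class scores
img_h, img_w = 480, 640             # placeholder: the original image size
labels = labels[:img_h, :img_w]     # crop away Deeplab's padding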

Annotation Tool

For the Matlab annotation tool used to annotate the initial 10K images, please refer to this repository.

Misc

References

Licensing

COCO-Stuff is a derivative work of the COCO dataset. The authors of COCO do not in any form endorse this work. Different licenses apply to the COCO images and to the COCO-Stuff annotations.

Acknowledgements

This work is supported by the ERC Starting Grant VisCul. The annotations were done by the crowdsourcing startup Mighty AI, and financed by Mighty AI and the Common Visual Data Foundation.

Contact

If you have any questions regarding this dataset, please contact us at holger-at-it-caesar.com.
