CPPE - 5 (Medical Personal Protective Equipment) is a new, challenging object detection dataset

Overview

CPPE - 5 (Medical Personal Protective Equipment) is a new, challenging dataset whose goal is to enable the study of subordinate categorization of medical personal protective equipment, which is not possible with other popular datasets that focus on broad-level categories.

Accompanying paper: CPPE - 5: Medical Personal Protective Equipment Dataset, by Rishit Dagli and Ali Mustufa Shaikh.

Some features of this dataset are:

  • high-quality images and annotations (~4.6 bounding boxes per image)
  • real-life images, unlike any other current dataset of this kind
  • a majority of non-iconic images (allowing easy deployment to real-world environments)
  • more than 15 pre-trained models in the model zoo, available for direct use (including models for mobile and edge devices)

Get the data

We strongly recommend using either the downloader script or the Python package to download the dataset; however, you can also download and extract it manually.

| Name | Size | Drive | Bucket | MD5 checksum |
|:--|:--|:--|:--|:--|
| dataset.tar.gz | ~230 MB | Download | Download | f4e043f983cff94ef82ef7d57a879212 |
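
If you download the archive manually, you can verify it against the MD5 checksum above before extracting it. Below is a minimal sketch using only the Python standard library; the archive name and location are assumptions based on the table above:

import hashlib
import tarfile

EXPECTED_MD5 = "f4e043f983cff94ef82ef7d57a879212"  # checksum from the table above

# Compute the MD5 of the downloaded archive in chunks to keep memory use low.
md5 = hashlib.md5()
with open("dataset.tar.gz", "rb") as f:
    for chunk in iter(lambda: f.read(8192), b""):
        md5.update(chunk)
assert md5.hexdigest() == EXPECTED_MD5, "Checksum mismatch - please re-download the archive"

# Extract the verified archive into the current directory.
with tarfile.open("dataset.tar.gz", "r:gz") as tar:
    tar.extractall()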

Downloader Script

The easiest way to download the dataset is to use the downloader script:

git clone https://github.com/Rishit-dagli/CPPE-Dataset.git
cd CPPE-Dataset
bash tools/download.sh

Python package

You can also use the Python package to get the dataset:

pip install cppe5
import cppe5
cppe5.download_data()

Labels

The dataset contains the following labels:

| Label | Description |
|:--|:--|
| 1 | Coverall |
| 2 | Face_Shield |
| 3 | Gloves |
| 4 | Goggles |
| 5 | Mask |
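
For use in code, the table above can be written as a simple Python mapping; this is just a restatement of the labels listed here, not an API shipped with the package:

# Label ids and class names from the table above.
ID2LABEL = {
    1: "Coverall",
    2: "Face_Shield",
    3: "Gloves",
    4: "Goggles",
    5: "Mask",
}

# Reverse lookup, useful when building training targets.
LABEL2ID = {name: idx for idx, name in ID2LABEL.items()}

print(ID2LABEL[3])  # -> Gloves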

Model Zoo

More information about the pre-trained models (such as model complexity or FPS benchmarks) can be found in MODEL_ZOO.md; LITE_MODEL_ZOO.md lists models ready for deployment on mobile and edge devices.

Baseline Models

This section contains the baseline models trained on the CPPE-5 dataset. More information about how these models were trained can be found in the original paper and the config files.

| Method | AP (box) | AP50 (box) | AP75 (box) | APS (box) | APM (box) | APL (box) | Configs | TensorBoard.dev | PyTorch model | TensorFlow model |
|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|
| SSD | 29.50 | 57.0 | 24.9 | 32.1 | 23.1 | 34.6 | config | tb.dev | bucket | bucket |
| YOLO | 38.5 | 79.4 | 35.3 | 23.1 | 28.4 | 49.0 | config | tb.dev | bucket | bucket |
| Faster RCNN | 44.0 | 73.8 | 47.8 | 30.0 | 34.7 | 52.5 | config | tb.dev | bucket | bucket |
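
The AP numbers above follow COCO-style evaluation. As a rough, hedged sketch of how such metrics can be computed with pycocotools (the annotation and detection file names below are placeholders, not files shipped with this repository):

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# COCO-format ground truth and detection results; both paths are placeholders.
coco_gt = COCO("annotations/test.json")
coco_dt = coco_gt.loadRes("detections.json")

# Evaluate bounding-box detections and print AP, AP50, AP75, APS, APM, APL.
coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()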

SoTA Models

This section contains the SoTA models trained on the CPPE-5 dataset. More information about how these models were trained can be found in the original paper and the config files.

| Method | AP (box) | AP50 (box) | AP75 (box) | APS (box) | APM (box) | APL (box) | Configs | TensorBoard.dev | PyTorch model | TensorFlow model |
|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|
| RepPoints | 43.0 | 75.9 | 40.1 | 27.3 | 36.7 | 48.0 | config | tb.dev | bucket | - |
| Sparse RCNN | 44.0 | 69.6 | 44.6 | 30.0 | 30.6 | 54.7 | config | tb.dev | bucket | - |
| FCOS | 44.4 | 79.5 | 45.9 | 36.7 | 39.2 | 51.7 | config | tb.dev | bucket | bucket |
| Grid RCNN | 47.5 | 77.9 | 50.6 | 43.4 | 37.2 | 54.4 | config | tb.dev | bucket | - |
| Deformable DETR | 48.0 | 76.9 | 52.8 | 36.4 | 35.2 | 53.9 | config | tb.dev | bucket | - |
| FSAF | 49.2 | 84.7 | 48.2 | 45.3 | 39.6 | 56.7 | config | tb.dev | bucket | bucket |
| Localization Distillation | 50.9 | 76.5 | 58.8 | 45.8 | 43.0 | 59.4 | config | tb.dev | bucket | - |
| VarifocalNet | 51.0 | 82.6 | 56.7 | 39.0 | 42.1 | 58.8 | config | tb.dev | bucket | - |
| RegNet | 51.3 | 85.3 | 51.8 | 35.7 | 41.1 | 60.5 | config | tb.dev | bucket | bucket |
| Double Heads | 52.0 | 87.3 | 55.2 | 38.6 | 41.0 | 60.8 | config | tb.dev | bucket | - |
| DCN | 51.6 | 87.1 | 55.9 | 36.3 | 41.4 | 61.3 | config | tb.dev | bucket | - |
| Empirical Attention | 52.5 | 86.5 | 54.1 | 38.7 | 43.4 | 61.0 | config | tb.dev | bucket | - |
| TridentNet | 52.9 | 85.1 | 58.3 | 42.6 | 41.3 | 62.6 | config | tb.dev | bucket | bucket |

Tools

We also include the following tools in this repository to make working with the dataset a lot easier:

  • Download data
  • Download TF Record files
  • Convert PNG images in the dataset to JPG images (see the sketch below)
  • Convert Pascal VOC annotations to COCO format
  • Update the dataset to use relative paths

More information about each tool can be found in the tools/README.md file.
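
As a rough illustration of what the PNG-to-JPG conversion involves, here is a minimal sketch using Pillow; it is not the repository's tool itself, and the images directory name is an assumption:

from pathlib import Path
from PIL import Image

# Convert every PNG under images/ to a JPG alongside it; the directory is a placeholder.
for png_path in Path("images").glob("*.png"):
    with Image.open(png_path) as img:
        # JPEG has no alpha channel, so convert to RGB first.
        img.convert("RGB").save(png_path.with_suffix(".jpg"), "JPEG", quality=95)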

Tutorials

We also present some tutorials on how to use the dataset in this repository as Colab notebooks:

In this notebook, we will load the CPPE - 5 dataset in PyTorch and also walk through a quick example of fine-tuning the Faster RCNN model with torchvision on this dataset.
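
As a minimal sketch of the loading step, assuming the annotations are available in COCO format (for example via the Pascal VOC to COCO tool above) and that the paths below point at the extracted dataset; this is not the notebook's exact code:

import torchvision

# Both paths are placeholders; point them at the extracted images and annotations.
dataset = torchvision.datasets.CocoDetection(
    root="images",
    annFile="annotations/train.json",
)

image, targets = dataset[0]
# Each target is a COCO-style annotation dict with "bbox" and "category_id" fields.
print(len(dataset), targets[0]["category_id"])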

In this notebook we will load the CPPE - 5 dataset through TF Record files in TensorFlow.
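
A minimal sketch of reading such TF Record files with tf.data is shown below; the file pattern and feature keys follow the common TensorFlow Object Detection API convention and are assumptions, not a specification of this dataset's records:

import tensorflow as tf

# Feature keys assume the usual TF Object Detection API TFRecord layout.
features = {
    "image/encoded": tf.io.FixedLenFeature([], tf.string),
    "image/object/bbox/xmin": tf.io.VarLenFeature(tf.float32),
    "image/object/bbox/ymin": tf.io.VarLenFeature(tf.float32),
    "image/object/bbox/xmax": tf.io.VarLenFeature(tf.float32),
    "image/object/bbox/ymax": tf.io.VarLenFeature(tf.float32),
    "image/object/class/label": tf.io.VarLenFeature(tf.int64),
}

def parse_example(example_proto):
    parsed = tf.io.parse_single_example(example_proto, features)
    image = tf.io.decode_jpeg(parsed["image/encoded"], channels=3)
    labels = tf.sparse.to_dense(parsed["image/object/class/label"])
    return image, labels

# The glob pattern is a placeholder for wherever the TF Record files were downloaded.
dataset = tf.data.TFRecordDataset(tf.io.gfile.glob("tfrecords/*.tfrecord")).map(parse_example)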

In this notebook, we will visualize the CPPE-5 dataset, which is helpful for inspecting some sample images and annotations from the dataset.

Citation

If you use this dataset, please cite the following paper:

[WIP]

Acknowledgements

The authors would like to thank Google for supporting this work by providing Google Cloud credits. The authors would also like to thank the Google TPU Research Cloud (TRC) program for providing access to TPUs. The authors are also grateful to Omkar Agrawal for help with verifying the difficult annotations.

Want to Contribute 🙋‍♂️?

Awesome! If you want to contribute to this project, you're always welcome! See the Contributing Guidelines. You can also take a look at the open issues to learn more about current or upcoming tasks.

Want to discuss? 💬

Have any questions or doubts, or want to share your opinions and views? You're always welcome to start a discussion.

If you have used this work in your paper, blog, experiments, or anything else, please share it with us by starting a discussion under the Show and Tell category.

Comments
  • [ImgBot] Optimize images

    Beep boop. Your images are optimized!

    Your image file size has been reduced by 10% 🎉

    Details

    | File | Before | After | Percent reduction |
    |:--|:--|:--|:--|
    | /media/image_vs_sqrt_width_height.png | 13.00kb | 7.78kb | 40.13% |
    | /media/image_vs_width_height.png | 12.11kb | 8.02kb | 33.77% |
    | /media/flops.png | 443.40kb | 376.09kb | 15.18% |
    | /media/non_iconic_and_iconic.png | 5,313.39kb | 4,531.29kb | 14.72% |
    | /media/params.png | 483.86kb | 413.81kb | 14.48% |
    | /media/image_stats.png | 28.35kb | 25.94kb | 8.47% |
    | /media/annotation_type.png | 2,166.55kb | 2,065.88kb | 4.65% |
    | /media/sample_images.jpg | 2,128.97kb | 2,091.09kb | 1.78% |
    | Total | 10,589.62kb | 9,519.91kb | 10.10% |


    πŸ“ docs | :octocat: repo | πŸ™‹πŸΎ issues | πŸͺ marketplace

    ~Imgbot - Part of Optimole family

    opened by imgbot[bot] 0
  • [ImgBot] Optimize images

    Beep boop. Your images are optimized!

    Your image file size has been reduced by 10% 🎉

    Details

    | File | Before | After | Percent reduction |
    |:--|:--|:--|:--|
    | /media/image_vs_sqrt_width_height.png | 13.00kb | 7.78kb | 40.13% |
    | /media/image_vs_width_height.png | 12.11kb | 8.02kb | 33.77% |
    | /media/non_iconic_and_iconic.png | 5,313.39kb | 4,531.29kb | 14.72% |
    | /media/model_complexity.png | 17.24kb | 15.42kb | 10.57% |
    | /media/image_stats.png | 28.35kb | 25.94kb | 8.47% |
    | /media/annotation_type.png | 2,166.55kb | 2,065.88kb | 4.65% |
    | /media/sample_images.jpg | 2,128.97kb | 2,091.09kb | 1.78% |
    | Total | 9,679.60kb | 8,745.43kb | 9.65% |


    πŸ“ docs | :octocat: repo | πŸ™‹πŸΎ issues | πŸͺ marketplace

    ~Imgbot - Part of Optimole family

    opened by imgbot[bot] 0
  • Update annotations on data_loader

    :camera: Screenshots

    Changes

    :page_facing_up: Context

    I noticed that your code previously just assigned '1' as the label for every object: it creates a tensor of ones for the labels, like this: labels = torch.ones((num_objs,), dtype=torch.int64). When I ran my model for inference on a sample image, I got the label '1' for every object, and then I realized something was wrong with the dataset loader.

    :pencil: Changes

    I added a little bit of code to your custom Cppe dataset in torch.py. Now the labels are no longer '1' for every object in an image; instead, each object gets the label that corresponds to it in your dataset.

    :paperclip: Related PR

    :no_entry_sign: Breaking

    None so far.

    :hammer_and_wrench: How to test

    :stopwatch: Next steps

    opened by danielsyahputra 0
  • Request for the test dataset contained 100 images in the paper, thanks

    I want to implement your paper "CPPE - 5: MEDICAL PERSONAL PROTECTIVE EQUIPMENT DATASET" and experiment with it. In the dataset downloaded from your GitHub website, the training set contains 1000 images and the test set contains 29 images. However, I did not find the test set used in your paper, which contains another 100 images. I would highly appreciate it if you could share the test dataset from your paper.

    enhancement 
    opened by pgy1go 0
  • the test dataset in paper request

    I want to implement your paper "CPPE - 5: MEDICAL PERSONAL PROTECTIVE EQUIPMENT DATASET" and experiment with it. In the dataset downloaded from your GitHub website, the training set contains 1000 images and the test set contains 29 images. However, I did not find the test set used in your paper, which contains another 100 images. I would highly appreciate it if you could share the test dataset from your paper.

    bug 
    opened by pgy1go 0
  • License Restrictions on dataset

    Hi, please share the dataset's license restrictions and image copyright mentions. I would like to use your dataset for a course/book I am writing on deep learning.

    Thanks.

    question 
    opened by abhi-kumar 1
Releases(v0.1.0)
  • v0.1.0(Dec 14, 2021)

    CPPE - 5 (Medical Personal Protective Equipment) is a new, challenging dataset whose goal is to enable the study of subordinate categorization of medical personal protective equipment, which is not possible with other popular datasets that focus on broad-level categories.

    Some features of this dataset are:

    • high-quality images and annotations (~4.6 bounding boxes per image)
    • real-life images, unlike any other current dataset of this kind
    • a majority of non-iconic images (allowing easy deployment to real-world environments)
    • more than 15 pre-trained models in the model zoo, available for direct use (including models for mobile and edge devices)

    The Python package allows you to:

    • download the data easily
    • download TF Record files
    • load the dataset in PyTorch and TensorFlow
    Source code(tar.gz)
    Source code(zip)
Owner
Rishit Dagli
High School, TEDx, 2x TED-Ed speaker | International Speaker | Microsoft Student Ambassador | Mentor, @TFUGMumbai | Organize @KotlinMumbai