CPPE - 5 (Medical Personal Protective Equipment) is a new challenging object detection dataset

Overview

CPPE - 5 (Medical Personal Protective Equipment) is a new, challenging dataset whose goal is to enable the study of subordinate categorization of medical personal protective equipment, which is not possible with other popular datasets that focus on broad-level categories.

Accompanying paper: CPPE - 5: Medical Personal Protective Equipment Dataset by Rishit Dagli and Ali Mustufa Shaikh.

Some features of this dataset are:

  • high-quality images and annotations (~4.6 bounding boxes per image)
  • real-life images, unlike any other current dataset of this kind
  • a majority of non-iconic images (allowing easy deployment to real-world environments)
  • >15 pre-trained models in the model zoo, available to use directly (including models for mobile and edge devices)

Get the data

We strongly recommend you use either the downloader script or the Python package to download the dataset; however, you can also download and extract it manually.

Name | Size | Drive | Bucket | MD5 checksum
dataset.tar.gz | ~230 MB | Download | Download | f4e043f983cff94ef82ef7d57a879212
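
If you download the archive manually, you can verify the MD5 checksum before extracting it. Below is a minimal sketch using only the Python standard library; the local file name dataset.tar.gz is assumed to match the table above:

import hashlib
import tarfile

EXPECTED_MD5 = "f4e043f983cff94ef82ef7d57a879212"  # from the table above
ARCHIVE = "dataset.tar.gz"  # assumed local file name

# Compute the MD5 of the downloaded archive in chunks to keep memory use low.
md5 = hashlib.md5()
with open(ARCHIVE, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        md5.update(chunk)

if md5.hexdigest() != EXPECTED_MD5:
    raise ValueError("Checksum mismatch: the download may be corrupted.")

# Extract the dataset into the current directory.
with tarfile.open(ARCHIVE, "r:gz") as tar:
    tar.extractall()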

Downloader Script

The easiest way to download the dataset is to use the downloader script:

git clone https://github.com/Rishit-dagli/CPPE-Dataset.git
cd CPPE-Dataset
bash tools/download.sh

Python package

You can also use the Python package to get the dataset:

pip install cppe5
import cppe5
cppe5.download_data()

Labels

The dataset contains the following labels:

Label Description
1 Coverall
2 Face_Shield
3 Gloves
4 Goggles
5 Mask
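
For convenience, the same mapping as a small Python dictionary (illustrative only, not part of the cppe5 package API):

# Mapping from CPPE-5 label id to class name, as listed in the table above.
CPPE5_LABELS = {
    1: "Coverall",
    2: "Face_Shield",
    3: "Gloves",
    4: "Goggles",
    5: "Mask",
}

def label_name(label_id: int) -> str:
    """Return the CPPE-5 class name for a label id."""
    return CPPE5_LABELS[label_id]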

Model Zoo

More information about the pre-trained models (such as model complexity and FPS benchmarks) can be found in MODEL_ZOO.md; LITE_MODEL_ZOO.md includes models ready for deployment on mobile and edge devices.

Baseline Models

This section contains the baseline models trained on the CPPE-5 dataset. More information about how these were trained can be found in the original paper and the config files.

Method AP^box AP_50^box AP_75^box AP_S^box AP_M^box AP_L^box Configs TensorBoard.dev PyTorch model TensorFlow model
SSD 29.50 57.0 24.9 32.1 23.1 34.6 config tb.dev bucket bucket
YOLO 38.5 79.4 35.3 23.1 28.4 49.0 config tb.dev bucket bucket
Faster RCNN 44.0 73.8 47.8 30.0 34.7 52.5 config tb.dev bucket bucket

SoTA Models

This section contains the SoTA models trained on the CPPE-5 dataset. More information about how these were trained can be found in the original paper and the config files.

Method AP^box AP_50^box AP_75^box AP_S^box AP_M^box AP_L^box Configs TensorBoard.dev PyTorch model TensorFlow model
RepPoints 43.0 75.9 40.1 27.3 36.7 48.0 config tb.dev bucket -
Sparse RCNN 44.0 69.6 44.6 30.0 30.6 54.7 config tb.dev bucket -
FCOS 44.4 79.5 45.9 36.7 39.2 51.7 config tb.dev bucket bucket
Grid RCNN 47.5 77.9 50.6 43.4 37.2 54.4 config tb.dev bucket -
Deformable DETR 48.0 76.9 52.8 36.4 35.2 53.9 config tb.dev bucket -
FSAF 49.2 84.7 48.2 45.3 39.6 56.7 config tb.dev bucket bucket
Localization Distillation 50.9 76.5 58.8 45.8 43.0 59.4 config tb.dev bucket -
VarifocalNet 51.0 82.6 56.7 39.0 42.1 58.8 config tb.dev bucket -
RegNet 51.3 85.3 51.8 35.7 41.1 60.5 config tb.dev bucket bucket
Double Heads 52.0 87.3 55.2 38.6 41.0 60.8 config tb.dev bucket -
DCN 51.6 87.1 55.9 36.3 41.4 61.3 config tb.dev bucket -
Empirical Attention 52.5 86.5 54.1 38.7 43.4 61.0 config tb.dev bucket -
TridentNet 52.9 85.1 58.3 42.6 41.3 62.6 config tb.dev bucket bucket

Tools

We also include the following tools in this repository to make working with the dataset a lot easier:

  • Download data
  • Download TF Record files
  • Convert PNG images in the dataset to JPG images
  • Convert Pascal VOC annotations to COCO format
  • Update dataset to use relative paths

More information about each tool can be found in the tools/README.md file.
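
To illustrate what the Pascal VOC to COCO conversion involves, here is a minimal, self-contained sketch. This is not the repository's tool (see tools/README.md for that); the function, directory, and file names below are hypothetical:

import json
import os
import xml.etree.ElementTree as ET

def voc_to_coco(xml_dir, class_names, out_path):
    """Convert a directory of Pascal VOC XML annotation files to one COCO-style JSON file."""
    cat_to_id = {name: i + 1 for i, name in enumerate(class_names)}
    images, annotations = [], []
    ann_id = 1
    xml_files = sorted(f for f in os.listdir(xml_dir) if f.endswith(".xml"))
    for img_id, fname in enumerate(xml_files, start=1):
        root = ET.parse(os.path.join(xml_dir, fname)).getroot()
        size = root.find("size")
        images.append({
            "id": img_id,
            "file_name": root.findtext("filename"),
            "width": int(size.findtext("width")),
            "height": int(size.findtext("height")),
        })
        for obj in root.iter("object"):
            box = obj.find("bndbox")
            xmin, ymin = float(box.findtext("xmin")), float(box.findtext("ymin"))
            xmax, ymax = float(box.findtext("xmax")), float(box.findtext("ymax"))
            w, h = xmax - xmin, ymax - ymin
            annotations.append({
                "id": ann_id,
                "image_id": img_id,
                "category_id": cat_to_id[obj.findtext("name")],
                "bbox": [xmin, ymin, w, h],  # COCO boxes are [x, y, width, height]
                "area": w * h,
                "iscrowd": 0,
            })
            ann_id += 1
    coco = {
        "images": images,
        "annotations": annotations,
        "categories": [{"id": i, "name": n} for n, i in cat_to_id.items()],
    }
    with open(out_path, "w") as f:
        json.dump(coco, f)

# Example call with the CPPE-5 classes (directory and file names are hypothetical):
voc_to_coco("annotations_voc", ["Coverall", "Face_Shield", "Gloves", "Goggles", "Mask"], "annotations_coco.json")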

Tutorials

We also present some tutorials on how to use the dataset in this repository as Colab notebooks:

In this notebook, we load the CPPE - 5 dataset in PyTorch and walk through a quick example of fine-tuning a Faster RCNN model with torchvision on this dataset.
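
The core of that fine-tuning setup can be sketched as follows, using the standard torchvision detection API (a minimal sketch, not the notebook's exact code; data loading is omitted):

import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Load a Faster R-CNN model pre-trained on COCO.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

# Replace the box predictor head: 5 CPPE-5 classes + 1 background class.
num_classes = 6
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# In training mode the model expects a list of image tensors and a list of target
# dicts with "boxes" (N x 4, in [xmin, ymin, xmax, ymax] format) and "labels" (N,).
model.train()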

In this notebook, we load the CPPE - 5 dataset from TF Record files in TensorFlow.
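
Parsing those TF Record files typically looks like the sketch below. The feature keys here are an assumption (the usual TF Object Detection API convention); check the notebook and tools/README.md for the exact schema used by this dataset:

import tensorflow as tf

# Feature keys assumed to follow the TF Object Detection API convention.
feature_description = {
    "image/encoded": tf.io.FixedLenFeature([], tf.string),
    "image/object/bbox/xmin": tf.io.VarLenFeature(tf.float32),
    "image/object/bbox/ymin": tf.io.VarLenFeature(tf.float32),
    "image/object/bbox/xmax": tf.io.VarLenFeature(tf.float32),
    "image/object/bbox/ymax": tf.io.VarLenFeature(tf.float32),
    "image/object/class/label": tf.io.VarLenFeature(tf.int64),
}

def parse_example(serialized):
    example = tf.io.parse_single_example(serialized, feature_description)
    image = tf.io.decode_jpeg(example["image/encoded"], channels=3)
    labels = tf.sparse.to_dense(example["image/object/class/label"])
    return image, labels

# The file pattern is hypothetical; point it at the downloaded TF Record files.
dataset = tf.data.TFRecordDataset(tf.io.gfile.glob("tfrecords/*.tfrecord"))
dataset = dataset.map(parse_example)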

In this notebook, we visualize the CPPE-5 dataset, which is helpful for looking at sample images and annotations.
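
Outside the notebook, a quick way to eyeball a few annotations is a small script like the one below (a sketch assuming COCO-format annotation JSON; the file paths are hypothetical and should point at the extracted dataset):

import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
from PIL import Image
from pycocotools.coco import COCO

# Paths are hypothetical; adjust them to the extracted dataset layout.
coco = COCO("annotations/train.json")
img_info = coco.loadImgs(coco.getImgIds()[0])[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_info["id"]))

fig, ax = plt.subplots()
ax.imshow(Image.open("images/" + img_info["file_name"]))
for ann in anns:
    x, y, w, h = ann["bbox"]  # COCO boxes are [x, y, width, height]
    ax.add_patch(Rectangle((x, y), w, h, fill=False, edgecolor="red", linewidth=2))
    ax.text(x, y, coco.cats[ann["category_id"]]["name"], color="red")
ax.axis("off")
plt.show()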

Citation

If you use this dataset, please cite the following paper:

[WIP]

Acknowledgements

The authors would like to thank Google for supporting this work by providing Google Cloud credits. The authors would also like to thank the Google TPU Research Cloud (TRC) program for providing access to TPUs. The authors are also grateful to Omkar Agrawal for help with verifying the difficult annotations.

Want to Contribute 🙋‍♂️ ?

Awesome! If you want to contribute to this project, you're always welcome! See the Contributing Guidelines. You can also take a look at the open issues for more information about current or upcoming tasks.

Want to discuss? 💬

Have any questions or doubts, or want to share your opinions and views? You're always welcome: you can start a discussion.

Have you used this work in your paper, blog, or experiments? Please share it with us by starting a discussion under the Show and Tell category.

Comments
  • [ImgBot] Optimize images


    Beep boop. Your images are optimized!

    Your image file size has been reduced by 10% 🎉

    Details

    | File | Before | After | Percent reduction |
    |:--|:--|:--|:--|
    | /media/image_vs_sqrt_width_height.png | 13.00kb | 7.78kb | 40.13% |
    | /media/image_vs_width_height.png | 12.11kb | 8.02kb | 33.77% |
    | /media/flops.png | 443.40kb | 376.09kb | 15.18% |
    | /media/non_iconic_and_iconic.png | 5,313.39kb | 4,531.29kb | 14.72% |
    | /media/params.png | 483.86kb | 413.81kb | 14.48% |
    | /media/image_stats.png | 28.35kb | 25.94kb | 8.47% |
    | /media/annotation_type.png | 2,166.55kb | 2,065.88kb | 4.65% |
    | /media/sample_images.jpg | 2,128.97kb | 2,091.09kb | 1.78% |
    | Total | 10,589.62kb | 9,519.91kb | 10.10% |



    opened by imgbot[bot] 0
  • [ImgBot] Optimize images


    Beep boop. Your images are optimized!

    Your image file size has been reduced by 10% 🎉

    Details

    | File | Before | After | Percent reduction |
    |:--|:--|:--|:--|
    | /media/image_vs_sqrt_width_height.png | 13.00kb | 7.78kb | 40.13% |
    | /media/image_vs_width_height.png | 12.11kb | 8.02kb | 33.77% |
    | /media/non_iconic_and_iconic.png | 5,313.39kb | 4,531.29kb | 14.72% |
    | /media/model_complexity.png | 17.24kb | 15.42kb | 10.57% |
    | /media/image_stats.png | 28.35kb | 25.94kb | 8.47% |
    | /media/annotation_type.png | 2,166.55kb | 2,065.88kb | 4.65% |
    | /media/sample_images.jpg | 2,128.97kb | 2,091.09kb | 1.78% |
    | Total | 9,679.60kb | 8,745.43kb | 9.65% |



    opened by imgbot[bot] 0
  • Update annotations on data_loader


    :camera: Screenshots

    Changes

    :page_facing_up: Context

    I noticed that in your code you just assign '1' as the label for every object, by creating a tensor of ones for the labels: labels = torch.ones((num_objs,), dtype=torch.int64). When I ran inference with my model on a sample image, I got the label '1' for every object and realized something was wrong with the dataset loader.

    :pencil: Changes

    I added a little code to your custom Cppe dataset in torch.py. Now the labels are no longer just '1' for every object in an image; each object gets the label that corresponds to it, based on your dataset.

    :paperclip: Related PR

    :no_entry_sign: Breaking

    None so far.

    :hammer_and_wrench: How to test

    :stopwatch: Next steps

    opened by danielsyahputra 0
  • Request for the test dataset contained 100 images in the paper, thanks


    I want to implement your paper "CPPE - 5: MEDICAL PERSONAL PROTECTIVE EQUIPMENT DATASET" and experiment with it. In the dataset downloaded from your github website, the training set contains 1000 images and the test set contains 29 images. However, I did not find the test set you used in your paper which contains another 100 images. I would highly appreciate it if you could share the test dataset in your paper.

    enhancement 
    opened by pgy1go 0
  • the test dataset in paper request


    I want to implement your paper "CPPE - 5: MEDICAL PERSONAL PROTECTIVE EQUIPMENT DATASET" and experiment with it. In the dataset downloaded from your github website, the training set contains 1000 images and the test set contains 29 images. However, I did not find the test set you used in your paper which contains another 100 images. I would highly appreciate it if you could share the test dataset in your paper.

    bug 
    opened by pgy1go 0
  • License Restrictions on dataset


    Hi, please share the dataset license restrictions and image copyright mentions. I would like to use your dataset for a course/book I am writing on deep learning.

    Thanks.

    question 
    opened by abhi-kumar 1
Releases (v0.1.0)
  • v0.1.0 (Dec 14, 2021)

    CPPE - 5 (Medical Personal Protective Equipment) is a new, challenging dataset whose goal is to enable the study of subordinate categorization of medical personal protective equipment, which is not possible with other popular datasets that focus on broad-level categories.

    Some features of this dataset are:

    • high-quality images and annotations (~4.6 bounding boxes per image)
    • real-life images, unlike any other current dataset of this kind
    • a majority of non-iconic images (allowing easy deployment to real-world environments)
    • >15 pre-trained models in the model zoo, available to use directly (including models for mobile and edge devices)

    The Python package allows you to:

    • download the data easily
    • download TF Record files
    • load the dataset in PyTorch and TensorFlow
    Source code(tar.gz)
    Source code(zip)