An MNIST-like fashion product database and benchmark.


Fashion-MNIST



Fashion-MNIST is a dataset of Zalando's article images, consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and the same structure of training and testing splits.

Here's an example of how the data looks (each class takes three rows):

Why we made Fashion-MNIST

The original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. "If it doesn't work on MNIST, it won't work at all", they said. "Well, if it does work on MNIST, it may still fail on others."

To Serious Machine Learning Researchers

Seriously, we are talking about replacing MNIST. Here are some good reasons:

MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST, and classic machine learning algorithms easily reach 97%.
MNIST is overused. Almost everyone has benchmarked on it at some point.
MNIST can not represent modern computer vision tasks.

Get the Data

Many ML libraries already include Fashion-MNIST data/API; give it a try!

You can use direct links to download the dataset. The data is stored in the same format as the original MNIST data.

| Name | Content | Examples | Size | Link | MD5 Checksum |
| --- | --- | --- | --- | --- | --- |
| train-images-idx3-ubyte.gz | training set images | 60,000 | 26 MBytes | Download | 8d4fb7e6c68d591d4c3dfef9ec88bf0d |
| train-labels-idx1-ubyte.gz | training set labels | 60,000 | 29 KBytes | Download | 25c81989df183df01b3e8a0aad5dffbe |
| t10k-images-idx3-ubyte.gz | test set images | 10,000 | 4.3 MBytes | Download | bef4ecab320f06d8554ea6380940ec79 |
| t10k-labels-idx1-ubyte.gz | test set labels | 10,000 | 5.1 KBytes | Download | bb300cfdad3c16e7a12a480ee83cd310 |
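
If you download the files by hand, it is worth checking them against the MD5 sums above. A minimal sketch, assuming the S3 mirror URL quoted later in this README is still live:

import hashlib
import urllib.request

# Mirror base URL as used in the source_url example in the TensorFlow section below.
BASE_URL = 'http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/'
EXPECTED_MD5 = {
    'train-images-idx3-ubyte.gz': '8d4fb7e6c68d591d4c3dfef9ec88bf0d',
    'train-labels-idx1-ubyte.gz': '25c81989df183df01b3e8a0aad5dffbe',
    't10k-images-idx3-ubyte.gz': 'bef4ecab320f06d8554ea6380940ec79',
    't10k-labels-idx1-ubyte.gz': 'bb300cfdad3c16e7a12a480ee83cd310',
}

for name, expected in EXPECTED_MD5.items():
    urllib.request.urlretrieve(BASE_URL + name, name)    # download into the working directory
    with open(name, 'rb') as f:
        actual = hashlib.md5(f.read()).hexdigest()        # checksum of the compressed file
    assert actual == expected, f'{name}: checksum mismatch ({actual})'
    print(name, 'OK')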

Alternatively, you can clone this GitHub repository; the dataset appears under data/fashion. This repo also contains some scripts for benchmarking and visualization.

git clone git@github.com:zalandoresearch/fashion-mnist.git

Labels

Each training and test example is assigned to one of the following labels:

| Label | Description |
| --- | --- |
| 0 | T-shirt/top |
| 1 | Trouser |
| 2 | Pullover |
| 3 | Dress |
| 4 | Coat |
| 5 | Sandal |
| 6 | Shirt |
| 7 | Sneaker |
| 8 | Bag |
| 9 | Ankle boot |
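
When plotting or reporting results it helps to keep this mapping in code; a small convenience sketch (the dictionary name is ours, not part of the dataset):

# Label index -> class name, exactly as in the table above.
LABEL_NAMES = {
    0: 'T-shirt/top', 1: 'Trouser', 2: 'Pullover', 3: 'Dress', 4: 'Coat',
    5: 'Sandal', 6: 'Shirt', 7: 'Sneaker', 8: 'Bag', 9: 'Ankle boot',
}

print(LABEL_NAMES[9])  # -> Ankle boot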

Usage

Loading data with Python (requires NumPy)

Use utils/mnist_reader in this repo:

import mnist_reader
X_train, y_train = mnist_reader.load_mnist('data/fashion', kind='train')
X_test, y_test = mnist_reader.load_mnist('data/fashion', kind='t10k')
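
The loader returns each image as a flattened row of 784 pixel values, so reshape to 28x28 before plotting. A minimal follow-up sketch, assuming matplotlib is installed:

import matplotlib.pyplot as plt

image = X_train[0].reshape(28, 28)   # one flattened row back to a 28x28 grayscale image
plt.imshow(image, cmap='gray')
plt.title(f'label = {y_train[0]}')
plt.show()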

Loading data with TensorFlow

Make sure you have downloaded the data and placed it in data/fashion. Otherwise, TensorFlow will download and use the original MNIST.

from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/fashion')

data.train.next_batch(BATCH_SIZE)

Note that TensorFlow supports passing a source URL to read_data_sets. You may use:

data = input_data.read_data_sets('data/fashion', source_url='http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/')

Also, an official TensorFlow tutorial on using tf.keras, a high-level API, to train on Fashion-MNIST can be found here.
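
For the tf.keras route, recent TensorFlow releases also bundle Fashion-MNIST as a built-in dataset; a minimal, version-dependent sketch:

import tensorflow as tf

# Downloads and caches the dataset on first call; no manual handling of the idx files.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)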

Loading data with other machine learning libraries

To date, the following libraries have included Fashion-MNIST as a built-in dataset. Therefore, you don't need to download Fashion-MNIST by yourself. Just follow their API and you are ready to go.

You are welcome to make pull requests to other open-source machine learning packages, improving their support for the Fashion-MNIST dataset.
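
As one example, PyTorch users can pull the dataset through torchvision; a minimal sketch following the torchvision API (check your installed version for exact argument names):

from torchvision import datasets, transforms

train_set = datasets.FashionMNIST(
    root='data', train=True, download=True,
    transform=transforms.ToTensor(),   # scales images to [0, 1] float tensors
)
image, label = train_set[0]
print(image.shape, label)              # torch.Size([1, 28, 28]) and an integer label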

Loading data with other languages

As one of the machine learning community's most popular datasets, MNIST has inspired loaders in many different languages. You can use these loaders with the Fashion-MNIST dataset as well (note: some loaders may require decompressing the files first). To date, we haven't tested all of these loaders with Fashion-MNIST.
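
If you prefer to write a loader yourself, the files use the original MNIST IDX layout: a small big-endian header followed by raw unsigned bytes. A minimal Python sketch of that parsing (paths follow the download table above):

import gzip
import numpy as np

def load_idx_images(path):
    with gzip.open(path, 'rb') as f:
        data = f.read()
    # 16-byte header: magic number, image count, rows, cols (big-endian int32 each).
    n, rows, cols = np.frombuffer(data, dtype='>i4', count=4)[1:]
    return np.frombuffer(data, dtype=np.uint8, offset=16).reshape(n, rows, cols)

def load_idx_labels(path):
    with gzip.open(path, 'rb') as f:
        data = f.read()
    # 8-byte header: magic number and label count; the rest is one byte per label.
    return np.frombuffer(data, dtype=np.uint8, offset=8)

images = load_idx_images('data/fashion/train-images-idx3-ubyte.gz')
labels = load_idx_labels('data/fashion/train-labels-idx1-ubyte.gz')
print(images.shape, labels.shape)  # (60000, 28, 28) (60000,)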

Benchmark

We built an automatic benchmarking system based on scikit-learn that covers 129 classifiers (but no deep learning) with different parameters. Find the results here.

You can reproduce the results by running benchmark/runner.py. We recommend building and deploying this Dockerfile.
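
The shape of such a run is straightforward: fit a scikit-learn classifier on the flattened training pixels and score it on the test split. A rough, hypothetical sketch, not the actual benchmark/runner.py code:

from sklearn.linear_model import LogisticRegression
import mnist_reader  # utils/mnist_reader from this repo

X_train, y_train = mnist_reader.load_mnist('data/fashion', kind='train')
X_test, y_test = mnist_reader.load_mnist('data/fashion', kind='t10k')

clf = LogisticRegression(max_iter=100)   # one configuration; the real runner sweeps many
clf.fit(X_train / 255.0, y_train)        # scale pixel values to [0, 1]
print('test accuracy:', clf.score(X_test / 255.0, y_test))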

You are welcome to submit your benchmark; simply create a new issue and we'll list your results here. Before doing that, please make sure it does not already appear in this list. Visit our contributor guidelines for additional details.

The table below collects the submitted benchmarks. Note that we haven't yet tested these results. You are welcome to validate them using the code provided by the submitter. Test accuracy may differ due to the number of epochs, batch size, etc. To correct this table, please create a new issue.

| Classifier | Preprocessing | Fashion test accuracy | MNIST test accuracy | Submitter | Code |
| --- | --- | --- | --- | --- | --- |
| 2 Conv+pooling | None | 0.876 | - | Kashif Rasul | 🔗 |
| 2 Conv+pooling | None | 0.916 | - | Tensorflow's doc | 🔗 |
| 2 Conv+pooling+ELU activation (PyTorch) | None | 0.903 | - | @AbhirajHinge | 🔗 |
| 2 Conv | Normalization, random horizontal flip, random vertical flip, random translation, random rotation. | 0.919 | 0.971 | Kyriakos Efthymiadis | 🔗 |
| 2 Conv <100K parameters | None | 0.925 | 0.992 | @hardmaru | 🔗 |
| 2 Conv ~113K parameters | Normalization | 0.922 | 0.993 | Abel G. | 🔗 |
| 2 Conv+3 FC ~1.8M parameters | Normalization | 0.932 | 0.994 | @Xfan1025 | 🔗 |
| 2 Conv+3 FC ~500K parameters | Augmentation, batch normalization | 0.934 | 0.994 | @cmasch | 🔗 |
| 2 Conv+pooling+BN | None | 0.934 | - | @khanguyen1207 | 🔗 |
| 2 Conv+2 FC | Random Horizontal Flips | 0.939 | - | @ashmeet13 | 🔗 |
| 3 Conv+2 FC | None | 0.907 | - | @Cenk Bircanoğlu | 🔗 |
| 3 Conv+pooling+BN | None | 0.903 | 0.994 | @meghanabhange | 🔗 |
| 3 Conv+pooling+2 FC+dropout | None | 0.926 | - | @Umberto Griffo | 🔗 |
| 3 Conv+BN+pooling | None | 0.921 | 0.992 | @gchhablani | 🔗 |
| 5 Conv+BN+pooling | None | 0.931 | - | @Noumanmufc1 | 🔗 |
| CNN with optional shortcuts, dense-like connectivity | standardization+augmentation+random erasing | 0.947 | - | @kennivich | 🔗 |
| GRU+SVM | None | 0.888 | 0.965 | @AFAgarap | 🔗 |
| GRU+SVM with dropout | None | 0.897 | 0.988 | @AFAgarap | 🔗 |
| WRN40-4 8.9M params | standard preprocessing (mean/std subtraction/division) and augmentation (random crops/horizontal flips) | 0.967 | - | @ajbrock | 🔗 🔗 |
| DenseNet-BC 768K params | standard preprocessing (mean/std subtraction/division) and augmentation (random crops/horizontal flips) | 0.954 | - | @ajbrock | 🔗 🔗 |
| MobileNet | augmentation (horizontal flips) | 0.950 | - | @苏剑林 | 🔗 |
| ResNet18 | Normalization, random horizontal flip, random vertical flip, random translation, random rotation. | 0.949 | 0.979 | Kyriakos Efthymiadis | 🔗 |
| GoogleNet with cross-entropy loss | None | 0.937 | - | @Cenk Bircanoğlu | 🔗 |
| AlexNet with Triplet loss | None | 0.899 | - | @Cenk Bircanoğlu | 🔗 |
| SqueezeNet with cyclical learning rate 200 epochs | None | 0.900 | - | @snakers4 | 🔗 |
| Dual path network with wide resnet 28-10 | standard preprocessing (mean/std subtraction/division) and augmentation (random crops/horizontal flips) | 0.957 | - | @Queequeg | 🔗 |
| MLP 256-128-100 | None | 0.8833 | - | @heitorrapela | 🔗 |
| VGG16 26M parameters | None | 0.935 | - | @QuantumLiu | 🔗 🔗 |
| WRN-28-10 | standard preprocessing (mean/std subtraction/division) and augmentation (random crops/horizontal flips) | 0.959 | - | @zhunzhong07 | 🔗 |
| WRN-28-10 + Random Erasing | standard preprocessing (mean/std subtraction/division) and augmentation (random crops/horizontal flips) | 0.963 | - | @zhunzhong07 | 🔗 |
| Human Performance | Crowd-sourced evaluation of human (with no fashion expertise) performance. 1000 randomly sampled test images, 3 labels per image, majority labelling. | 0.835 | - | Leo | - |
| Capsule Network 8M parameters | Normalization and shift at most 2 pixel and horizontal flip | 0.936 | - | @XifengGuo | 🔗 |
| HOG+SVM | HOG | 0.926 | - | @subalde | 🔗 |
| XgBoost | scaling the pixel values to mean=0.0 and var=1.0 | 0.898 | 0.958 | @anktplwl91 | 🔗 |
| DENSER | - | 0.953 | 0.997 | @fillassuncao | 🔗 🔗 |
| Dyra-Net | Rescale to unit interval | 0.906 | - | @Dirk Schäfer | 🔗 🔗 |
| Google AutoML | 24 compute hours (higher quality) | 0.939 | - | @Sebastian Heinz | 🔗 |
| Fastai | Resnet50+Fine-tuning+Softmax on last layer's activations | 0.9312 | - | @Sayak | 🔗 |

Other Explorations of Fashion-MNIST

Fashion-MNIST: Year in Review

Fashion-MNIST on Google Scholar

Generative adversarial networks (GANs)

Clustering

Video Tutorial

Machine Learning Meets Fashion by Yufeng G @ Google Cloud


Introduction to Kaggle Kernels by Yufeng G @ Google Cloud


Dive into Deep Learning (动手学深度学习) by Mu Li @ Amazon AI

MXNet/Gluon Chinese channel

Learning Deep Learning with Apache MXNet, by Muhyun Kim (AWS Solutions Architect)

Visualization

t-SNE on Fashion-MNIST (left) and original MNIST (right)

PCA on Fashion-MNIST (left) and original MNIST (right)

UMAP on Fashion-MNIST (left) and original MNIST (right)

PyMDE on Fashion-MNIST (left) and original MNIST (right)
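
To reproduce plots in this spirit, scikit-learn's t-SNE on a subsample of the training set is a reasonable starting point; a minimal, illustrative sketch:

from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
import mnist_reader

X, y = mnist_reader.load_mnist('data/fashion', kind='train')
X, y = X[:5000] / 255.0, y[:5000]   # subsample: t-SNE is slow on all 60,000 points

embedding = TSNE(n_components=2, init='pca').fit_transform(X)
plt.scatter(embedding[:, 0], embedding[:, 1], c=y, s=2, cmap='tab10')
plt.colorbar(label='class label')
plt.show()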

Contributing

Thanks for your interest in contributing! There are many ways to get involved; start with our contributor guidelines and then check these open issues for specific tasks.

Contact

To discuss the dataset, please use Gitter.

Citing Fashion-MNIST

If you use Fashion-MNIST in a scientific publication, we would appreciate references to the following paper:

Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms. Han Xiao, Kashif Rasul, Roland Vollgraf. arXiv:1708.07747

Biblatex entry:

@online{xiao2017/online,
  author       = {Han Xiao and Kashif Rasul and Roland Vollgraf},
  title        = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms},
  date         = {2017-08-28},
  year         = {2017},
  eprintclass  = {cs.LG},
  eprinttype   = {arXiv},
  eprint       = {cs.LG/1708.07747},
}

Who is citing Fashion-MNIST?

License

The MIT License (MIT) Copyright © [2017] Zalando SE, https://tech.zalando.com

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
