Meta Learning for Semi-Supervised Few-Shot Classification

Overview

few-shot-ssl-public

Code for the paper Meta-Learning for Semi-Supervised Few-Shot Classification. [arXiv]

Dependencies

  • cv2
  • numpy
  • pandas
  • python 2.7 / 3.5+
  • tensorflow 1.3+
  • tqdm

Our code is tested on Ubuntu 14.04 and 16.04.

Setup

First, designate a folder to be your data root:

export DATA_ROOT={DATA_ROOT}
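For example, assuming you keep datasets under your home directory (the path below is illustrative; any writable directory works):

export DATA_ROOT=$HOME/datasets/few-shot-ssl
mkdir -p $DATA_ROOT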

Then, set up the datasets following the instructions in the subsections.

Omniglot

[Google Drive] (9.3 MB)

# Download and place "omniglot.tar.gz" in "$DATA_ROOT/omniglot".
mkdir -p $DATA_ROOT/omniglot
cd $DATA_ROOT/omniglot
mv ~/Downloads/omniglot.tar.gz .
tar -xzvf omniglot.tar.gz
rm -f omniglot.tar.gz

miniImageNet

[Google Drive] (1.1 GB)

Update: Python 2- and 3-compatible versions: [train] [val] [test]

# Download and place "mini-imagenet.tar.gz" in "$DATA_ROOT/mini-imagenet".
mkdir -p $DATA_ROOT/mini-imagenet
cd $DATA_ROOT/mini-imagenet
mv ~/Downloads/mini-imagenet.tar.gz .
tar -xzvf mini-imagenet.tar.gz
rm -f mini-imagenet.tar.gz

tieredImageNet

[Google Drive] (12.9 GB)

# Download and place "tiered-imagenet.tar" in "$DATA_ROOT/tiered-imagenet".
mkdir -p $DATA_ROOT/tiered-imagenet
cd $DATA_ROOT/tiered-imagenet
mv ~/Downloads/tiered-imagenet.tar .
tar -xvf tiered-imagenet.tar
rm -f tiered-imagenet.tar

Note: Please make sure that the following hardware requirements are met before running tieredImageNet experiments.

  • Disk: 30 GB
  • RAM: 32 GB
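Before extracting, you may want to confirm the machine meets these requirements, for example with standard Linux utilities (output formats vary by system):

# Check free disk space at the data root and available memory.
df -h $DATA_ROOT
free -g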

Core Experiments

Please run the following scripts to reproduce the core experiments.

# Clone the repository.
git clone https://github.com/renmengye/few-shot-ssl-public.git
cd few-shot-ssl-public

# To train a model.
python run_exp.py --data_root $DATA_ROOT             \
                  --dataset {DATASET}                \
                  --label_ratio {LABEL_RATIO}        \
                  --model {MODEL}                    \
                  --results {SAVE_CKPT_FOLDER}       \
                  [--disable_distractor]

# To test a model.
python run_exp.py --data_root $DATA_ROOT             \
                  --dataset {DATASET}                \
                  --label_ratio {LABEL_RATIO}        \
                  --model {MODEL}                    \
                  --results {SAVE_CKPT_FOLDER}       \
                  --eval --pretrain {MODEL_ID}       \
                  [--num_unlabel {NUM_UNLABEL}]      \
                  [--num_test {NUM_TEST}]            \
                  [--disable_distractor]             \
                  [--use_test]
  • Possible {MODEL} options are basic, kmeans-refine, kmeans-refine-radius, and kmeans-refine-mask.
  • Possible {DATASET} options are omniglot, mini-imagenet, tiered-imagenet.
  • Use {LABEL_RATIO} 0.1 for omniglot and tiered-imagenet, and 0.4 for mini-imagenet.
  • Replace {MODEL_ID} with the model ID obtained from the training program.
  • Replace {SAVE_CKPT_FOLDER} with the folder where you save your checkpoints.
  • Add additional flags --num_unlabel 20 --num_test 20 for testing mini-imagenet and tiered-imagenet models, so that each episode contains 20 unlabeled images per class and 20 query images per class.
  • Add an additional flag --disable_distractor to remove all distractor classes in the unlabeled images.
  • Add an additional flag --use_test to evaluate on the test set instead of the validation set.
  • For more command-line options, see run_exp.py. A filled-in example follows below.
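As a concrete illustration, the following trains and then evaluates a kmeans-refine model on mini-imagenet; the checkpoint folder ./results is an arbitrary local path, and {MODEL_ID} stands for the model ID printed by the training run:

# Train (checkpoint folder "./results" is illustrative).
python run_exp.py --data_root $DATA_ROOT             \
                  --dataset mini-imagenet            \
                  --label_ratio 0.4                  \
                  --model kmeans-refine              \
                  --results ./results

# Evaluate on the test set with 20 unlabeled and 20 query images per class.
python run_exp.py --data_root $DATA_ROOT             \
                  --dataset mini-imagenet            \
                  --label_ratio 0.4                  \
                  --model kmeans-refine              \
                  --results ./results                \
                  --eval --pretrain {MODEL_ID}       \
                  --num_unlabel 20 --num_test 20     \
                  --use_test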

Simple Baselines for Few-Shot Classification

Please run the following script to reproduce a suite of baseline results.

python run_baseline_exp.py --data_root $DATA_ROOT    \
                           --dataset {DATASET}
  • Possible {DATASET} options are omniglot, mini-imagenet, tiered-imagenet.
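For instance, to run the baseline suite on Omniglot (the dataset choice here is illustrative):

python run_baseline_exp.py --data_root $DATA_ROOT --dataset omniglot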

Run over Multiple Random Splits

Please run the following script to reproduce results over 10 random labeled/unlabeled splits and to test the model with different numbers of unlabeled items per episode. The default seeds are 0, 1001, ..., 9009.

python run_multi_exp.py --data_root $DATA_ROOT       \
                        --dataset {DATASET}          \
                        --label_ratio {LABEL_RATIO}  \
                        --model {MODEL}              \
                        [--disable_distractor]       \
                        [--use_test]
  • Possible {MODEL} options are basic, kmeans-refine, kmeans-refine-radius, and kmeans-refine-mask.
  • Possible {DATASET} options are omniglot, mini-imagenet, tiered-imagenet.
  • Use {LABEL_RATIO} 0.1 for omniglot and tiered-imagenet, and 0.4 for mini-imagenet.
  • Add an additional flag --disable_distractor to remove all distractor classes in the unlabeled images.
  • Add an additional flag --use_test to evaluate on the test set instead of the validation set.
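For example, a filled-in run on tieredImageNet might look like the following (the seed pattern i * 1001 for i = 0..9 is inferred from the default seed list above):

python run_multi_exp.py --data_root $DATA_ROOT       \
                        --dataset tiered-imagenet    \
                        --label_ratio 0.1            \
                        --model kmeans-refine-mask   \
                        --use_test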

Citation

If you use our code, please consider citing the following:

  • Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Joshua B. Tenenbaum, Hugo Larochelle and Richard S. Zemel. Meta-Learning for Semi-Supervised Few-Shot Classification. In Proceedings of the 6th International Conference on Learning Representations (ICLR), 2018.
@inproceedings{ren18fewshotssl,
  author   = {Mengye Ren and 
              Eleni Triantafillou and 
              Sachin Ravi and 
              Jake Snell and 
              Kevin Swersky and 
              Joshua B. Tenenbaum and 
              Hugo Larochelle and 
              Richard S. Zemel},
  title    = {Meta-Learning for Semi-Supervised Few-Shot Classification},
  booktitle= {Proceedings of 6th International Conference on Learning Representations {ICLR}},
  year     = {2018},
}