Overview

Easy Few-Shot Learning

Python Versions CircleCI Code style: black License: MIT Open In Colab

Ready-to-use code and tutorial notebooks to boost your way into few-shot image classification. This repository is made for you if:

  • you're new to few-shot learning and want to learn;
  • or you're looking for reliable, clear and easily usable code that you can use for your projects.

Don't get lost in large repositories with hundreds of methods and no explanation on how to use them. Here, we want each line of code to be covered by a tutorial.

What's in there?

Notebooks: learn and practice

You want to learn few-shot learning and don't know where to start? Start with our tutorial.

Code that you can use and understand

Models:

Tools for data loading:

  • EasySet: a ready-to-use Dataset object to handle datasets of images with a class-wise directory split (a spec sketch follows this list)
  • TaskSampler: samples batches in the shape of few-shot classification tasks
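
A minimal sketch of the spec file EasySet expects for a class-wise directory split. The "class_names"/"class_roots" keys mirror the CUB split files shipped in data/CUB/; treat the exact keys, paths and class names below as assumptions and check those files for the authoritative format.

# Hypothetical spec file for a custom dataset with one directory per class.
import json

spec = {
    "class_names": ["black_footed_albatross", "laysan_albatross"],
    "class_roots": [
        "./data/my_dataset/images/black_footed_albatross",
        "./data/my_dataset/images/laysan_albatross",
    ],
}
with open("./data/my_dataset/train.json", "w") as file:
    json.dump(spec, file)

# EasySet can then be instantiated from this file, as in the QuickStart below:
# train_set = EasySet(specs_file="./data/my_dataset/train.json", training=True)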

Datasets to test your model

QuickStart

  1. Install the package with pip:

pip install git+https://github.com/sicara/easy-few-shot-learning.git

Note: alternatively, you can clone the repository so that you can modify the code as you wish.

  2. Download CU-Birds and the few-shot train/val/test split:
mkdir -p data/CUB && cd data/CUB
wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1GDr1OkoXdhaXWGA8S3MAq3a522Tak-nx' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1GDr1OkoXdhaXWGA8S3MAq3a522Tak-nx" -O images.tgz
rm -rf /tmp/cookies.txt
tar  --exclude='._*' -zxvf images.tgz
wget https://raw.githubusercontent.com/sicara/easy-few-shot-learning/master/data/CUB/train.json
wget https://raw.githubusercontent.com/sicara/easy-few-shot-learning/master/data/CUB/val.json
wget https://raw.githubusercontent.com/sicara/easy-few-shot-learning/master/data/CUB/test.json
cd ../..
  3. Check that you have a 680.9 MB images folder in ./data/CUB along with three JSON files.
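
If you prefer to check programmatically, here is a quick sanity-check sketch (plain Python; the "class_names" key is assumed from the shipped split files):

import json
from pathlib import Path

cub_dir = Path("./data/CUB")
# Count the extracted image files (CUB contains 11,788 images in total).
n_images = sum(1 for _ in (cub_dir / "images").rglob("*.jpg"))
print(f"Found {n_images} image files")

for split in ("train.json", "val.json", "test.json"):
    with open(cub_dir / split) as file:
        specs = json.load(file)
    # "class_names" follows the format of the shipped split files (assumption to verify).
    print(f"{split}: {len(specs['class_names'])} classes")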

  4. From the training subset of CUB, create a dataloader that yields few-shot classification tasks:

from easyfsl.data_tools import EasySet, TaskSampler
from torch.utils.data import DataLoader

train_set = EasySet(specs_file="./data/CUB/train.json", training=True)
train_sampler = TaskSampler(
    train_set, n_way=5, n_shot=5, n_query=10, n_tasks=40000
)
train_loader = DataLoader(
    train_set,
    batch_sampler=train_sampler,
    num_workers=12,
    pin_memory=True,
    collate_fn=train_sampler.episodic_collate_fn,
)
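
If you want to see what the loader yields, here is a sketch of unpacking one task. The 5-tuple layout below (support set, query set, and the list of true class ids) is the one used by episodic_collate_fn in this version of the package; treat the exact shapes as indicative.

# Peek at one few-shot classification task produced by the loader.
(
    support_images,  # (n_way * n_shot, channels, height, width)
    support_labels,  # (n_way * n_shot,), labels in [0, n_way)
    query_images,    # (n_way * n_query, channels, height, width)
    query_labels,    # (n_way * n_query,), labels in [0, n_way)
    class_ids,       # the n_way true class ids sampled for this task
) = next(iter(train_loader))

print(support_images.shape, query_images.shape)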
  5. Create and train a model
from easyfsl.methods import PrototypicalNetworks
from torch import nn
from torch.optim import Adam
from torchvision.models import resnet18

convolutional_network = resnet18(pretrained=False)
convolutional_network.fc = nn.Flatten()
model = PrototypicalNetworks(convolutional_network).cuda()

optimizer = Adam(params=model.parameters())

model.fit(train_loader, optimizer)

Troubleshooting: a ResNet18 with a batch size of 5 × (5 + 10) = 75 images per task would use about 4.2GB on your GPU. If you don't have that much memory available, switch to CPU, choose a smaller model, or reduce the size of the tasks (n_way, n_shot or n_query in the TaskSampler above).
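
For instance, a lighter configuration could look like this (a sketch: smaller tasks, fewer workers, and a device fallback, assuming your installed version supports non-CUDA devices):

# Smaller tasks and a CPU fallback if GPU memory is short.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

train_sampler = TaskSampler(train_set, n_way=5, n_shot=1, n_query=5, n_tasks=40000)
train_loader = DataLoader(
    train_set,
    batch_sampler=train_sampler,
    num_workers=4,
    pin_memory=True,
    collate_fn=train_sampler.episodic_collate_fn,
)
model = PrototypicalNetworks(convolutional_network).to(device)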

  6. Evaluate your model on the test set
test_set = EasySet(specs_file="./data/CUB/test.json", training=False)
test_sampler = TaskSampler(
    test_set, n_way=5, n_shot=5, n_query=10, n_tasks=100
)
test_loader = DataLoader(
    test_set,
    batch_sampler=test_sampler,
    num_workers=12,
    pin_memory=True,
    collate_fn=test_sampler.episodic_collate_fn,
)

model.evaluate(test_loader)
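
To report the result, you can capture the returned value (a sketch, assuming evaluate() returns the average query-classification accuracy over the 100 sampled test tasks):

accuracy = model.evaluate(test_loader)
print(f"Average accuracy over the test tasks: {100 * accuracy:.2f}%")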

Roadmap

  • Implement unit tests
  • Add validation to AbstractMetaLearner.fit()
  • Integrate more methods:
    • Matching Networks
    • Relation Networks
    • MAML
    • Transductive Propagation Network
  • Integrate non-episodic training
  • Integrate more benchmarks:
    • miniImageNet
    • tieredImageNet
    • Meta-Dataset

Contribute

This project is very open to contributions! You can help in various ways:

  • raise issues
  • resolve issues already opened
  • tackle new features from the roadmap
  • fix typos, improve code quality
Comments
  • Training with custom dataset

    Hi, thanks for your code, it helps me a lot. But it also caused some problems for a newbie like me. Although I have the code running successfully now, I made a lot of compromises to work around some errors. I combined the code from classical_training.ipynb and my_first_few_shot_classifier.ipynb.

    I post all my code step by step and point out the problems I met. I am running Windows 10. The environment is created by Anaconda, with CUDA 10.2, cuDNN 7.0 and PyTorch 1.10.1.

    Many thanks for your code again. Let's discuss this together.

    question 
    opened by gushengzhao1996 19
  • Custom data

    I want to train this model on custom data, but I did not understand the split for CUB, and I could not find any documentation on EasySet. Do you know where it is? By the way, I only have 2 classes in my data.

    question 
    opened by Kunika05 6
  • Classical training method evaluation concept

    Hello, I'm new to few-shot learning and want to make sure I understand classical training. When the backbone, after training, is evaluated with a chosen method on a new set of data, does the method get adjusted or learn from the new data?

    question 
    opened by joshuasir 4
  • How to build my own train_set with my own data

    Problem: Thanks for sharing your work on FSL. There is one problem: when I finished the tutorial 'Discovering Prototypical Networks', I wanted to use my own photo data to build a test_set. How can I do that, and how should I structure my data?

    enhancement question 
    opened by cy2333ytu 4
  • Adding a utility predictor to an image

    The intention of adding this predictor is to help those who need to use the trained network for an image, obtaining as a return the inferred class and the tensor with the mean of Euclidean distances. Tests were performed with the PrototypicalNetworks and MatchingNetworks for a 5-way 6-shot dataset.

    enhancement 
    opened by diego91964 4
  • Finetune: "does not require grad and does not have a grad_fn"

    Problem: I am trying to train a backbone using classical training, and then use the Finetune method to fine-tune the model following episodic_training.ipynb. How should I implement this? I see that the episodic_training.ipynb you wrote has fixed the parameters of the backbone, but when I import the pre-trained model for finetuning, it does not work properly. Another question: what should n_validation_tasks generally be set to? Is there a standard value? The setting of this hyperparameter affects the result. I look forward to your answer.

    convolutional_network = resnet50(num_classes=2).to(DEVICE)
    convolutional_network.load_state_dict(torch.load('save_model/resnet50.pt'))
    few_shot_classifier = Finetune(convolutional_network).to(DEVICE)

    question 
    opened by Jackieam 4
  • N_QUERY

    1. Does the number of query images for each class have to be equal? Can I use a random number of images per class?
    2. Can training epoch by epoch still be used in few-shot learning?
    question 
    opened by earthlovebpt 3
  • PicklingError: Can't pickle <function <lambda> at 0x000001AEE1AC88B0>: attribute lookup <lambda> on __main__ failed

    Problem (I am a newcomer to this field, so I may be asking a simple and silly question; sorry for that): when I run the first part of "my_first_few_shot_classifier.ipynb", the error message in the title appears (screenshots omitted).

    How can we help: How can this problem be solved? I haven't found a practical solution.

    question 
    opened by Meoooww 3
  • How to view the results after training and getting accuracy?

    I have trained your episodic training notebook on custom data, but I had a question: I got the accuracy, but how would we view the output, i.e. the predicted classification?

    question 
    opened by Kunika05 3
  • Question on meta-training in the tutorial notebook

    Hi, thanks for making such a simple and beautiful library for Few-Shot Learning. I have a query: when we run the training cell from your notebook for the meta-learning model, does it also train the ResNet18 model on the given dataset to generate better feature representations (as in transfer learning, where we train a classifier on a custom dataset starting from ImageNet pre-trained parameters), or does it only train the Prototypical Network?

    Please, clarify this doubt. Thanks again.

    question 
    opened by karndeepsingh 3
  • How to train on custom data

    Hi, thank you for your great work and for sharing it with everyone.

    I want to implement few-shot learning for a task of mine, where I have collected a few samples (10) each for both a positive and a negative class. How do I train the model on these novel classes using my custom dataset?

    Thank you for your help

    question 
    opened by chetanmr 3
  • Can I use a different backbone for classical or episodic learning?

    Hi,

    I am using your classical and episodic training notebooks; they are very helpful for my project. However, I want to try different backbones like EfficientNet. I am new to this, so do you have any idea whether I can use a different backbone than ResNet, and if so, what changes I will have to make in the code?

    Thanks in advance

    question 
    opened by shraddha291996 0
  • How to get a prediction for a custom dataset?

    Hello, thank you very much for your amazing work, it's very helpful.

    I have a question about getting predictions on a custom dataset. Basically, I am using EasySet for my custom dataset with the classical training notebook, and I also want to see the prediction/classification, for example which class my test image belongs to. I hope my question is clear to you. Thanks in advance.

    question 
    opened by shraddha291996 1
  • Probabilities of a novel image belonging to a Class

        I created an example that might help you.
    
    
    import torchvision.transforms as tt
    import torch
    from torchvision.datasets import ImageFolder
    from easyfsl.methods import FewShotClassifier
    from torch.utils.data import DataLoader
    
    class FewShotPredictor :
        """
    
            This class aims to implement a predictor for a Few-shot classifier.
    
            The few shot classifiers need a support set that will be used for calculating the distance between the support set and the query image.
    
            To load the support we have used an ImageFolder Dataset, which needs to have the following structure:
    
            folder:
              |_ class_name_folder_1:
                     |_ image_1
                     |_  …
                     |_ image_n
              |_ class_name_folder_2:
                     |_ image_1
                     |_  …
                     |_ image_n
    
            The folder must contain the same number of images per class, being the total images (n_way * n_shot).
    
            There must be n_way folders with n_shot images per folder.
    
        """
    
        def __init__(self ,
                     classifier: FewShotClassifier,
                     device,
                     path_to_support_images,
                     n_way,
                     n_shot,
                     input_size=224):
    
            """
                :param classifier: created and loaded model
                :param device: device to be executed
                :param path_to_support_images: path to creating a support set
                :param n_way: number of classes
                :param n_shot: number of images on each class
                :param input_size: size of image
    
            """
            self.classifier = classifier
            self.device = device
    
            self.predict_transformation = tt.Compose([
                tt.Resize((input_size, input_size)),
                tt.ToTensor()
            ])
    
            self.test_ds = ImageFolder(path_to_support_images, self.predict_transformation)
    
            self.val_loader = DataLoader(
                self.test_ds,
                batch_size= (n_way*n_shot),
                num_workers=1,
                pin_memory=True
            )
    
            self.support_images, self.support_labels = next(iter(self.val_loader))
    
    
    
        def predict (self, tensor_normalized_image):
            """
    
            :param tensor_normalized_image:
            Example of normalized image:
    
                pil_img = PIL.Image.open(img_dir)
    
                torch_img = transforms.Compose([
                    transforms.Resize((224, 224)),
                    transforms.ToTensor()
                ])(pil_img)
    
                tensor_normalized_image = tt.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])(torch_img)[None]
    
    
            :return:
    
            Return
    
            predict = tensor with prediction (mean distance of query image and support set)
            torch_max [1] = predicted class index
    
            """
    
            with torch.no_grad():
               self.classifier.eval()
               self.classifier.to(self.device)
               self.classifier.process_support_set(self.support_images.to(self.device), self.support_labels.to(self.device))
               pre_predict = self.classifier(tensor_normalized_image.to(self.device))
               predict = pre_predict.detach().data
               torch_max = torch.max(predict,1)
               class_name = self.test_ds.classes[torch_max[1].item()]
               return predict, torch_max[1], class_name
    
    

    #49

    Originally posted by @diego91964 in https://github.com/sicara/easy-few-shot-learning/issues/17#issuecomment-1157091822
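
    A usage sketch for the FewShotPredictor class quoted above; the paths, checkpoint name and backbone are hypothetical placeholders, and the final softmax is one possible way to read the returned scores as class pseudo-probabilities (related to the question below).

    # Usage sketch for FewShotPredictor (paths and checkpoint are placeholders).
    import PIL.Image
    import torch
    import torchvision.transforms as tt
    from torch import nn
    from torchvision.models import resnet18

    from easyfsl.methods import PrototypicalNetworks

    backbone = resnet18(pretrained=False)
    backbone.fc = nn.Flatten()
    classifier = PrototypicalNetworks(backbone)
    # classifier.load_state_dict(torch.load("my_trained_protonet.pt"))  # hypothetical checkpoint

    predictor = FewShotPredictor(
        classifier,
        device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
        path_to_support_images="./data/my_support_set",  # n_way folders, n_shot images each
        n_way=5,
        n_shot=6,
    )

    pil_img = PIL.Image.open("./query.jpg")
    torch_img = tt.Compose([tt.Resize((224, 224)), tt.ToTensor()])(pil_img)
    query = tt.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])(torch_img)[None]

    scores, class_index, class_name = predictor.predict(query)
    print(class_name, torch.softmax(scores, dim=-1))  # softmax gives pseudo-probabilities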

    Good morning, and many thanks for the awesome and very helpful code and the effort put into it. I have a question regarding novel image class prediction: is there a way to calculate, as in 'classical' classification, the percentage/probability of a novel image belonging to each class? Do you believe applying a softmax to the tensor returned at 'return predict, torch_max[1], class_name' would be meaningful?

    Thanks in advance

    opened by iou84 3
  • ValueError: Sample larger than population or is negative for a 5-shot 2-way problem

    Problem: I am new to FSL and have a simple problem in my scientific domain that I thought I would try as a learning example. I am trying to perform classical training for a 5-shot 2-way problem. When I run the code from the tutorial notebook as-is, after using EasySet to create a custom data object, I get the following error when I reach the validation epoch during training:

    ValueError: Sample larger than population or is negative

    Considered solutions: I've tried changing the batch size and n_workers so far, and neither has worked.

    How can we help: I can't figure out what is going wrong here. I am very new to machine learning and would love to have your help in any way possible!

    enhancement question 
    opened by haricash 5
  • For custom datasets, how to divide the classes?

    Hi. Thank you for your great work and for sharing it with everyone.

    I have a question: for custom datasets, how should the classes be divided into train, val and test sets? Randomly select some classes as the training set, or something else? Do you have any tricks?

    question 
    opened by ssx12042 15
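
    A minimal sketch of one common approach to the question above: randomly split the class directories into disjoint train/val/test sets and write one spec file per split. The "class_names"/"class_roots" keys mirror the CUB split files shipped with the repository, and the paths are hypothetical; check the shipped files for the authoritative format.

    # Randomly split class folders into disjoint train/val/test spec files (a sketch).
    import json
    import random
    from pathlib import Path

    image_root = Path("./data/my_dataset/images")  # one sub-folder per class (hypothetical path)
    class_dirs = sorted(d for d in image_root.iterdir() if d.is_dir())
    random.seed(0)
    random.shuffle(class_dirs)

    n_train = int(0.6 * len(class_dirs))
    n_val = int(0.2 * len(class_dirs))
    splits = {
        "train": class_dirs[:n_train],
        "val": class_dirs[n_train:n_train + n_val],
        "test": class_dirs[n_train + n_val:],
    }

    for split_name, dirs in splits.items():
        spec = {
            "class_names": [d.name for d in dirs],
            "class_roots": [str(d) for d in dirs],
        }
        with open(f"./data/my_dataset/{split_name}.json", "w") as file:
            json.dump(spec, file)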
  • Adding more backbones

    Hi @ebennequin, thanks for this elegant code base. Some questions (this can be a feature request):

    1. Can we add new backbones such as ViT, DenseNet, ConvNeXt, etc.?
    2. Could functionalities for model deployment be added?
    enhancement 
    opened by anish9 2
Releases(v1.1.0)
  • v1.1.0(Sep 5, 2022)

  • v1.0.1(Jun 7, 2022)

    There were some things to fix after the v1 release, so we fixed them:

    • EasySet's format check is now case-insensitive (thanks @mgmalana :smile: )
    • TaskSampler used to yield torch.Tensor objects, which caused errors. It now yields lists of integers, as is standard in PyTorch's sampler interface.
    • When EasySet's initialization didn't find any images in the specified folders, it just built an empty dataset with no warning, which caused silent errors. Now EasySet.__init__() raises the following warning if no image is found: "No images found in the specified directories. The dataset will be empty"
    Source code(tar.gz)
    Source code(zip)
  • v1.0.0(Mar 21, 2022)

    🎂 Exactly 1 year after the first release of Easy FSL, we have one more year of experience in Few-Shot Learning research. We capitalize on this experience to make Easy FSL easier, cleaner, smarter.

    No more episodic training logic inside Few-Shot Learning methods: you can train them however you want. And more content: 4 additional methods, several ResNet architectures (as they're often used in FSL research), and 4 ready-to-use datasets.

    🗞️ What's New

    • Few-Shot Learning methods
    • Pre-designed ResNet architectures for Few-Shot Learning
    • Most common few-shot classification datasets
      • tieredImageNet
      • miniImageNet
      • CU-Birds
      • Danish Fungi (not common but new, and really great)
      • And also an abstract class FewShotDataset to ease your development of novel or modified datasets
    • Example notebooks to perform both episodic training and classical training for your Few-Shot Learning methods
    • Support Python 3.9

    🔩 What's Changed

    • AbstractMetaLearner is renamed FewShotClassifier. All the episodic training logic has been removed from this class and moved to the example notebook episodic_training.ipynb (a minimal loop is sketched after this list)
    • FewShotClassifier now supports non-cuda devices
    • FewShotClassifier can now be initialized with a backbone on GPU
    • Relation module in RelationNetworks can now be parameterized
    • Same for embedding modules in Matching Networks
    • Same for image preprocessing in pre-designed datasets like EasySet
    • EasySet now only collects image files
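
    As a reference for the loop mentioned above, here is a minimal sketch of episodic training written outside the few-shot method, using the v1-style interface (process_support_set() on the support set, then a forward pass on the query images). The loss choice is an assumption, and model, optimizer, device and train_loader are assumed to be defined as in the QuickStart; episodic_training.ipynb remains the authoritative example.

    # Minimal episodic training loop outside the few-shot method (a sketch).
    from torch import nn

    criterion = nn.CrossEntropyLoss()
    model.train()
    for support_images, support_labels, query_images, query_labels, _ in train_loader:
        optimizer.zero_grad()
        model.process_support_set(support_images.to(device), support_labels.to(device))
        scores = model(query_images.to(device))
        loss = criterion(scores, query_labels.to(device))
        loss.backward()
        optimizer.step()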

    Full Changelog: https://github.com/sicara/easy-few-shot-learning/compare/v0.2.2...v1.0.0

    Source code(tar.gz)
    Source code(zip)
  • v0.2.2(Nov 9, 2021)

    Small fixes in EasySet and AbstractMetaLearner

    • Sort data instances for each class in EasySet

    • Add EasySet.number_of_classes()

    • Fix best validation accuracy update

    • Move switch to train mode inside fit_on_task()

    • Make AbstractMetaLearner.fit() return average loss

    Source code(tar.gz)
    Source code(zip)
  • v0.2.1(Jun 22, 2021)

  • v0.2.0(Jun 1, 2021)

    :newspaper_roll: What's new

    • :tennis: Matching Networks
    • :dna: Relation Networks
    • :mount_fuji: tieredImageNet
    • :blossom: In AbstractMetaLearner and all children classes, forward() now takes only query_images as argument. Support images and labels are now processed by process_support_set().
    • :chart_with_upwards_trend: AbstractMetaLearner.fit() now allows validation on a validation set.
    • :rainbow: EasySet.__getitem__() now forces loaded images conversion to RGB.
    • :heavy_check_mark: The code is tested
    Source code(tar.gz)
    Source code(zip)
  • v0.1.0(Mar 22, 2021)

    The initial release contains:

    • AbstractMetaLearner: an abstract class with methods that can be used for any meta-trainable algorithm
    • Prototypical Networks
    • EasySet: a ready-to-use Dataset object to handle datasets of images with a class-wise directory split
    • TaskSampler: samples batches in the shape of few-shot classification tasks
    • CU-Birds: we provide a script to download and extract the dataset, along with a meta-train/meta-val/meta-test split along classes. The dataset is ready-to-use with EasySet.
    Source code(tar.gz)
    Source code(zip)
Owner
Sicara