mmfewshot

Introduction

mmfewshot is an open source few shot learning toolbox based on PyTorch. It is a part of the OpenMMLab project.

The master branch works with PyTorch 1.5+. Compatibility with earlier versions of PyTorch is not fully tested.

Documentation: https://mmfewshot.readthedocs.io/en/latest/.

Major features

  • Support multiple tasks in Few Shot Learning

    MMFewShot provides a unified implementation and evaluation of few shot classification and detection.

  • Modular Design

    We decompose the few shot learning framework into different components, making it much easier and more flexible to build a new model by combining different modules; see the config sketch after this list.

  • Strong baselines and state of the art

    The toolbox provides strong baselines and state-of-the-art methods in few shot classification and detection.
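A minimal config sketch of this modular design, assuming the dict-style configs visible elsewhere on this page (the NegMargin snippet in the comments below); `LinearHead` and the numeric values are illustrative assumptions, not quotes from the model zoo:

```python
# Two classifiers that share a backbone and differ only in the head.
baseline = dict(
    type='Baseline',                 # fine-tune based method
    backbone=dict(type='Conv4'),
    head=dict(type='LinearHead', num_classes=64, in_channels=1600))

neg_margin = dict(
    type='NegMargin',                # same backbone, different head
    backbone=dict(type='Conv4'),
    head=dict(
        type='NegMarginHead',
        num_classes=64,
        in_channels=1600,
        metric_type='cosine',
        margin=-0.01,
        temperature=10.0))
```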

License

This project is released under the Apache 2.0 license.

Model Zoo

Supported algorithms:

Classification
Detection

Changelog

Installation

Please refer to install.md for installation of mmfewshot.

Getting Started

Please see getting_started.md for the basic usage of mmfewshot.

Citation

If you find this project useful in your research, please consider citing:

@misc{mmfewshot2021,
    title={OpenMMLab Few Shot Learning Toolbox and Benchmark},
    author={mmfewshot Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmfewshot}},
    year={2021}
}

Contributing

We appreciate all contributions to improve mmfewshot. Please refer to CONTRIBUTING.md in MMFewShot for the contributing guidelines.

Acknowledgement

mmfewshot is an open source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who provide valuable feedback. We hope the toolbox and benchmark can serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new ones.

Projects in OpenMMLab

  • MMCV: OpenMMLab foundational library for computer vision.
  • MIM: MIM Installs OpenMMLab Packages.
  • MMClassification: OpenMMLab image classification toolbox and benchmark.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
  • MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
  • MMTracking: OpenMMLab video perception toolbox and benchmark.
  • MMPose: OpenMMLab pose estimation toolbox and benchmark.
  • MMEditing: OpenMMLab image and video editing toolbox.
  • MMOCR: A Comprehensive Toolbox for Text Detection, Recognition and Understanding.
  • MMGeneration: OpenMMLab image and video generative models toolbox.
  • MMFlow: OpenMMLab optical flow toolbox and benchmark.
  • MMFewShot: OpenMMLab FewShot Learning Toolbox and Benchmark.
Comments
  • about result reimplementation of meta-rcnn

    about result reimplementation of meta-rcnn

    When trying to reproduce the results of Meta R-CNN and TFA under the 1-shot setting of split1, I find that my reproduced Meta R-CNN results are much higher than reported, which is confusing. In the Meta R-CNN paper (this 19.9 is the result I want to get): [image]

    In the TFA paper: [image]

    The paper reports 19.9 for split1 under the 1-shot setting, but my results are much higher: base training mAP is 76.2; after fine-tuning, all classes 47.40, novel classes 38.80, base classes 50.53. Besides, in the README.md of Meta R-CNN, the results are even higher: [image]

    Under the split1 1-shot setting, the TFA result I get is 40.4, which is basically the same as the paper reports.

    Could you please kindly answer my questions?

    opened by JulioZhao97 8
  • confused about `samples_per_gpu` of meta_dataloader

    confused about `samples_per_gpu` of meta_dataloader

    https://github.com/open-mmlab/mmfewshot/blob/486c8c2fd7929880eab0dfcd73a3dd3a512ddfbe/configs/detection/base/datasets/nway_kshot/base_voc.py#L106

    Hi, thanks for your great work on FSOD. I would like to know why the value of samples_per_gpu is 16 rather than 15 for VOC base training. Hope you can help me.

    opened by Wei-i 8
  • coco dataset?

    coco dataset?

    My COCO data directory is laid out like this:

    ```
    data
    ├── coco
    │   ├── annotations
    │   ├── train2014
    │   └── val2014
    └── few_shot_ann
        └── coco
            └── benchmark_10shot
                └── ...
    ```

    When I run the FSCE base-training config for COCO, I get the error: no such file or directory: 'data/few_shot_ann/coco/annotaions/train.json'. Where does this train.json come from? Shouldn't the base-training annotations be read from the annotations folder under coco? Also, I found a trainvalno5k.json and a 5k.json in the data preparation instructions; are these the two JSON files it needs? Looking forward to your answer!

    opened by kike-0304 6
  • RuntimeError: The expanded size of the tensor (21) must match the existing size (54) at non-singleton dimension 0.  Target sizes: [21, 1024].  Tensor sizes: [54, 1024]

    RuntimeError: The expanded size of the tensor (21) must match the existing size (54) at non-singleton dimension 0. Target sizes: [21, 1024]. Tensor sizes: [54, 1024]

    ```
    Traceback (most recent call last):
      File "/home/lbc/miniconda3/envs/mmfewshot/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/lbc/miniconda3/envs/mmfewshot/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/lbc/mmfewshot-main/tools/detection/misc/initialize_bbox_head.py", line 289, in <module>
        main()
      File "/home/lbc/mmfewshot-main/tools/detection/misc/initialize_bbox_head.py", line 278, in main
        args)
      File "/home/lbc/mmfewshot-main/tools/detection/misc/initialize_bbox_head.py", line 169, in random_init_checkpoint
        new_weight[:prev_cls] = pretrained_weight[:prev_cls]
    RuntimeError: The expanded size of the tensor (21) must match the existing size (54) at non-singleton dimension 0. Target sizes: [21, 1024]. Tensor sizes: [54, 1024]
    ```

    The process of running FSCE on my own COCO-format dataset is:

    1. Base training: produces the base checkpoint (step 1).
    2. Step two: use the best validation .pth from step 1 for fine-tuning? python3.7 -m tools.detection.misc.initialize_bbox_head --src1 ./work_dirs/fsce_r101_fpn_coco_base-training/best_bbox_mAP_iter_105000.pth --method random_init --save-dir ./work_dirs/fsce_r101_fpn_coco-split1_base-training
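    A hedged illustration of the shape mismatch in the error above (the tensor sizes come from the traceback; the copy logic mirrors the failing line rather than mmfewshot verbatim). The checkpoint's box head has 54 output rows while the target config expects 21, so the full-range copy cannot fit:

    ```python
    import torch

    pretrained_weight = torch.randn(54, 1024)  # head trained with 54 outputs
    new_weight = torch.randn(21, 1024)         # target head expects 21 outputs

    prev_cls = pretrained_weight.size(0)
    # new_weight[:prev_cls] = pretrained_weight[:prev_cls]  # RuntimeError: 54 > 21
    # Copying only the overlapping rows avoids the crash, but the real fix is
    # to make the config's class count match the checkpoint being converted:
    num_copy = min(prev_cls, new_weight.size(0))
    new_weight[:num_copy] = pretrained_weight[:num_copy]
    ```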
    opened by Williamlizl 6
  • Fix tabular printing of dataset information

    Fix tabular printing of dataset information

    Motivation

    When the length of the last `row_data` is greater than 0 but less than 10, that row is not printed.

    Modification

    When the last `row_data` is not empty, append it to `table_data`; a minimal reproduction follows.
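    A hedged sketch of the reported behavior and the fix (names such as `class_names` and `instance_counts` are illustrative, not the exact mmfewshot variables):

    ```python
    class_names = ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car']
    instance_counts = {name: 10 for name in class_names}

    table_data = []
    row_data = []
    for name in class_names:
        row_data.extend([name, instance_counts[name]])
        if len(row_data) >= 10:  # flush once 5 (name, count) pairs accumulate
            table_data.append(row_data)
            row_data = []
    # The fix: append the trailing partial row instead of silently dropping it.
    if len(row_data) > 0:
        table_data.append(row_data)
    ```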

    opened by LiangYang666 4
  • Few-shot instead of one-shot in demo inference

    Few-shot instead of one-shot in demo inference

    Currently, the demo script (classification) takes only one sample in the support set. It uses the process_support_images() method to forward the support set. How to modify this in order to allow for more than one sample in the support set?

    One idea could be to place another set of support images in a different folder and then forward that as well. Then the model.before_forward_support() method could be modified so that it does not reset the features; for meta_baseline_head, for example, it resets the saved features.

    Then (again for meta_baseline), meta_baseline_head.before_forward_query would also have to be modified, since it replaces self.mean_support_feats with the mean of the new support set.

    Would these two changes be enough to adapt this from one-shot to few-shot inference? (A sketch of the idea follows.)
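    As a hedged sketch of the accumulation idea (the method and attribute names are taken from the discussion above; the class itself is illustrative, not mmfewshot's implementation):

    ```python
    import torch

    class AccumulatingMetaBaselineHead:
        def __init__(self):
            self.support_feats = []        # features from every support batch
            self.mean_support_feats = None

        def before_forward_support(self):
            # The stock behavior (per the discussion) resets saved features
            # here; to allow several support batches, skip the reset instead.
            pass

        def add_support_feats(self, feats: torch.Tensor):
            self.support_feats.append(feats)

        def before_forward_query(self):
            # Average over all accumulated support batches instead of
            # replacing the stored mean with the latest batch only.
            self.mean_support_feats = torch.cat(self.support_feats).mean(
                dim=0, keepdim=True)
    ```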

    opened by rlleshi 4
  • How does it work

    How does it work

    According to the documentation, the following error occurs during training; I don't know how to solve it. Has anyone encountered it? TypeError: __init__() got an unexpected keyword argument 'persistent_workers'
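    For what it's worth, torch.utils.data.DataLoader only accepts persistent_workers from PyTorch 1.7 onwards, so one likely cause (an assumption, not a confirmed diagnosis) is an older PyTorch. A version guard like the following avoids passing the keyword:

    ```python
    import torch
    from packaging import version
    from torch.utils.data import DataLoader

    loader_kwargs = dict(batch_size=4, num_workers=2, shuffle=True)
    if version.parse(torch.__version__.split('+')[0]) >= version.parse('1.7.0'):
        loader_kwargs['persistent_workers'] = True  # keep workers alive between epochs

    # loader = DataLoader(my_dataset, **loader_kwargs)  # my_dataset: any Dataset
    ```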

    opened by isJunCheng 3
  • Question about the training of MatchingNetwork

    Question about the training of MatchingNetwork

    Hi, Great Job.

    I have some questions about the training process of the matching network(classification)

    • In this line, https://github.com/open-mmlab/mmfewshot/blob/31583cccb8ef870c9e688b1dc259263b73e58884/configs/classification/matching_net/mini_imagenet/matching-net_conv4_1xb105_mini-imagenet_5way-1shot.py#L28 you use num_shots=5 for training 5-way 1-shot; is this a bug?
    • The batch size shown in the result table is 64; is this the training batch size or the test batch size?
    • How large is the gap between the meta-val and meta-test splits in your experiment?
      • In the log of matching_net 5-way 1-shot, the max accuracy is about 51%, while the test result is 53%; does that mean there is a ~2-point gap between the two splits?

    Thanks, Best

    opened by tonysy 3
  • meta_test_head is None on demo

    meta_test_head is None on demo

    The error occurs when running demo_metric_classifier_1shot_inference with a custom-trained NegMargin model: the meta_test_head is None. Testing the model with dist_test works as expected, though. I am not sure why the meta test head was not saved. A comment here says that it is only built and run at testing time; I am not sure what that means, though.

    The model config is the same as the standard in other config files:

    model = dict(
        type='NegMargin',
        backbone=dict(type='Conv4'),
        head=dict(
            type='NegMarginHead',
            num_classes=6,
            in_channels=1600,
            metric_type='cosine',
            margin=-0.01,
            temperature=10.0),
        meta_test_head=dict(
            type='NegMarginHead',
            num_classes=6,
            in_channels=1600,
            metric_type='cosine',
            margin=0.0,
            temperature=5.0))
    

    Otherwise, the config file itself is similar to other neg_margin config files for the cube dataset.

    opened by rlleshi 3
  • Don't find the “frozen_parameters” parameter in the relevant source code

    Don't find the “frozen_parameters” parameter in the relevant source code

    I found that the frozen_parameters parameter is used in many detection models, but I have not found where this parameter is consumed in the source code. Which part of the source should I look at?
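    For reference, a hedged sketch of the common pattern such a list of name prefixes implies; this mirrors typical prefix-matching freezing, not a verbatim quote of mmfewshot's source:

    ```python
    import torch.nn as nn

    def freeze_parameters(model: nn.Module, frozen_parameters: list) -> None:
        """Disable gradients for parameters whose name starts with any prefix."""
        for name, param in model.named_parameters():
            if any(name.startswith(prefix) for prefix in frozen_parameters):
                param.requires_grad = False

    # e.g. freeze_parameters(detector, ['backbone', 'neck', 'rpn_head'])
    ```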

    opened by wwwbq 2
  • The coco_benchmark ann_file path in FewShotCocoDefaultDataset cannot be customized

    The coco_benchmark ann_file path in FewShotCocoDefaultDataset cannot be customized

    In mmfewshot/detection/datasets/coco.py, the coco_benchmark in FewShotCocoDefaultDataset hard-codes the dataset path as f'data/few_shot_ann/coco/benchmark_{shot}shot/full_box_{shot}shot_{class_name}_trainval.json'. However, my few_shot_ann path differs from this, and FewShotCocoDefaultDataset has no way to accept a dataset-path argument. I hope such a parameter can be added.
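    A hedged sketch of the requested change; `ann_root` is a hypothetical new argument, while today the 'data/few_shot_ann' prefix is fixed inside the f-string quoted above:

    ```python
    shot, class_name = 10, 'person'   # illustrative values
    ann_root = 'data/few_shot_ann'    # would become a constructor argument
    ann_file = (f'{ann_root}/coco/benchmark_{shot}shot/'
                f'full_box_{shot}shot_{class_name}_trainval.json')
    print(ann_file)
    ```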

    opened by wwwbq 2
  • Error when running the first stage of MPSR

    Error when running the first stage of MPSR

    ```
    Traceback (most recent call last):
      File "/root/mmfewshot/./tools/detection/train.py", line 236, in <module>
        main()
      File "/root/mmfewshot/./tools/detection/train.py", line 225, in main
        train_detector(
      File "/root/mmfewshot/mmfewshot/detection/apis/train.py", line 48, in train_detector
        data_loaders = [build_dataloader(ds, **train_loader_cfg) for ds in dataset]
      File "/root/mmfewshot/mmfewshot/detection/apis/train.py", line 48, in <listcomp>
        data_loaders = [build_dataloader(ds, **train_loader_cfg) for ds in dataset]
      File "/root/mmfewshot/mmfewshot/detection/datasets/builder.py", line 311, in build_dataloader
        data_loader = TwoBranchDataloader(
    TypeError: __init__() got an unexpected keyword argument 'persistent_workers'
    Killing subprocess 9272
    Traceback (most recent call last):
      File "/opt/conda/envs/pytorch1.8/lib/python3.9/runpy.py", line 197, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/opt/conda/envs/pytorch1.8/lib/python3.9/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/opt/conda/envs/pytorch1.8/lib/python3.9/site-packages/torch/distributed/launch.py", line 340, in <module>
        main()
      File "/opt/conda/envs/pytorch1.8/lib/python3.9/site-packages/torch/distributed/launch.py", line 326, in main
        sigkill_handler(signal.SIGTERM, None)  # not coming back
      File "/opt/conda/envs/pytorch1.8/lib/python3.9/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
        raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
    subprocess.CalledProcessError: Command '['/opt/conda/envs/pytorch1.8/bin/python', '-u', './tools/detection/train.py', '--local_rank=0', 'configs/detection/mpsr/voc/split1/mpsr_r101_fpn_2xb2_voc-split1_base-training.py', '--launcher', 'pytorch']' returned non-zero exit status 1.
    ```

    opened by DaDogs 1
  • Where should I put my few shot dataset?

    Where should I put my few shot dataset?

    Since the few-shot dataset is just for fine-tuning the model and test.py won't save the changes to the model, where should I put my few-shot dataset: in the training set or the validation set? That way, could I use the .pth file to predict my images in demo.py?

    opened by winnie9802 0
  • The initialization is blocked on building the models in FSClassification

    The initialization is blocked on building the models in FSClassification

    We hit a problem when training classification models. We tested several times; the code blocks on this line of code in classification.api.train: [screenshot 2022-10-15 12:31:58]

    opened by jwfanDL 0
  • Request to add the ability to read tiff datasets

    Request to add the ability to read tiff datasets

    While studying few-shot learning, I came across TIFF images in the dataset, and dataset loading fails on them. I would like to ask whether a method for reading the TIFF format could be added.
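    A hedged sketch of one way this could look: a custom pipeline step that reads TIFF files with Pillow. `LoadTiffImage` is a hypothetical name, not an existing mmfewshot transform, and the `results` keys follow the usual mmdet-style pipeline convention.

    ```python
    import numpy as np
    from PIL import Image

    class LoadTiffImage:
        """Load a .tiff file into a float32 HWC array for later transforms."""

        def __call__(self, results):
            img = np.asarray(Image.open(results['img_info']['filename']),
                             dtype=np.float32)
            if img.ndim == 2:  # single-band TIFF -> add a channel axis
                img = img[..., None]
            results['img'] = img
            results['img_shape'] = img.shape
            return results
    ```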

    opened by Djn-swjtu 0
Releases(v0.1.0)
  • v0.1.0 (Nov 24, 2021)

    Main Features

    • Support few shot classification and few shot detection.
    • For few shot classification, support fine-tune based methods (Baseline, Baseline++, NegMargin), metric-based methods (MatchingNet, ProtoNet, RelationNet, MetaBaseline), and a meta-learning based method (MAML).
    • For few shot detection, support fine-tune based methods (TFA, FSCE, MPSR) and meta-learning based methods (MetaRCNN, FsDetView, AttentionRPN).
    • Provide checkpoints and log files for all of the methods above.