mmfewshot is an open source few shot learning toolbox based on PyTorch

Overview

Introduction

English | 简体中文


mmfewshot is an open source few shot learning toolbox based on PyTorch. It is a part of the OpenMMLab project.

The master branch works with PyTorch 1.5+. Compatibility with earlier versions of PyTorch is not fully tested.

Documentation: https://mmfewshot.readthedocs.io/en/latest/.

Major features

  • Support multiple tasks in Few Shot Learning

    MMFewShot provides unified implementation and evaluation of few shot classification and detection.

  • Modular Design

    We decompose the few shot learning framework into different components, which makes it much easier and more flexible to build a new model by combining different modules (see the config sketch after this list).

  • Strong baselines and state of the art

    The toolbox provides strong baselines and state-of-the-art methods in few shot classification and detection.
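
    As a rough illustration of the modular design, OpenMMLab-style configs assemble a model from interchangeable component dicts. The sketch below is hypothetical: it reuses the NegMargin component names that appear in the comments further down this page, with illustrative hyperparameters, and is not an exact mmfewshot config.

        # Minimal config sketch: each dict names a registered module, and
        # swapping a dict swaps the corresponding component of the model.
        model = dict(
            type='NegMargin',             # few shot classifier
            backbone=dict(type='Conv4'),  # feature extractor, replaceable
            head=dict(                    # classification head, replaceable
                type='NegMarginHead',
                num_classes=5,
                in_channels=1600,
                metric_type='cosine',
                margin=-0.01,
                temperature=10.0))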

License

This project is released under the Apache 2.0 license.

Model Zoo

Supported algorithms:

  • Classification
  • Detection

Changelog

Installation

Please refer to install.md for installation of mmfewshot.

Getting Started

Please see getting_started.md for the basic usage of mmfewshot.
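
For a quick taste of the Python API, here is a hedged sketch of few shot classification inference. It assumes the high-level helpers referenced by the demo scripts discussed in the comments below (init_classifier, process_support_images, inference_classifier); treat the exact names, signatures, and paths as illustrative and defer to getting_started.md.

    # Hypothetical inference sketch; config, checkpoint, and image paths
    # are placeholders.
    from mmfewshot.classification.apis import (inference_classifier,
                                               init_classifier,
                                               process_support_images)

    model = init_classifier('configs/classification/some_config.py',
                            'checkpoints/some_checkpoint.pth',
                            device='cuda:0')
    support_imgs = ['demo/dog.png', 'demo/cat.png']  # placeholder paths
    support_labels = ['dog', 'cat']
    # Forward the support set once, then classify query images against it.
    process_support_images(model, support_imgs, support_labels)
    result = inference_classifier(model, 'demo/query.png')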

Citation

If you find this project useful in your research, please consider citing:

@misc{mmfewshot2021,
    title={OpenMMLab Few Shot Learning Toolbox and Benchmark},
    author={mmfewshot Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmfewshot}},
    year={2021}
}

Contributing

We appreciate all contributions to improve mmfewshot. Please refer to CONTRIBUTING.md in MMFewShot for the contributing guidelines.

Acknowledgement

mmfewshot is an open source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback. We hope the toolbox and benchmark serve the growing research community by providing a flexible toolkit for reimplementing existing methods and developing new ones.

Projects in OpenMMLab

  • MMCV: OpenMMLab foundational library for computer vision.
  • MIM: MIM Installs OpenMMLab Packages.
  • MMClassification: OpenMMLab image classification toolbox and benchmark.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
  • MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
  • MMTracking: OpenMMLab video perception toolbox and benchmark.
  • MMPose: OpenMMLab pose estimation toolbox and benchmark.
  • MMEditing: OpenMMLab image and video editing toolbox.
  • MMOCR: A Comprehensive Toolbox for Text Detection, Recognition and Understanding.
  • MMGeneration: OpenMMLab image and video generative models toolbox.
  • MMFlow: OpenMMLab optical flow toolbox and benchmark.
  • MMFewShot: OpenMMLab FewShot Learning Toolbox and Benchmark.
Comments
  • about result reimplementation of meta-rcnn

    When trying to reproduce the results of Meta R-CNN and TFA under the 1-shot setting of split 1, I find that the reproduced results for Meta R-CNN are much higher, which is confusing. In the Meta R-CNN paper (this 19.9 is the result I want to get): [screenshot]

    In the TFA paper: [screenshot]

    The papers report 19.9 for split 1 under the 1-shot setting, but my results are much higher: base training mAP is 76.2; after fine-tuning, all classes is 47.40, novel classes 38.80, base classes 50.53. Besides, in the README.md of Meta R-CNN the results are even higher: [screenshot]

    Under the split 1 1-shot setting, the TFA result I get is 40.4, which is basically the same as the paper reports.

    Could you please kindly answer my questions?

    opened by JulioZhao97 8
  • confused about `samples_per_gpu` of meta_dataloader

    https://github.com/open-mmlab/mmfewshot/blob/486c8c2fd7929880eab0dfcd73a3dd3a512ddfbe/configs/detection/base/datasets/nway_kshot/base_voc.py#L106

    Hi, thanks for your great work on few shot object detection. I want to know why the value of samples_per_gpu is 16 rather than 15 for VOC base training. Hope you can help me.

    opened by Wei-i 8
  • coco dataset?

    My COCO data directory looks like this:

        data
        ├── coco
        │   ├── annotations
        │   ├── train2014
        │   └── val2014
        └── few_shot_ann
            └── coco
                └── benchmark_10shot
                    └── ...

    When I run the FSCE coco pre-training config, it errors with: no such file or directory: 'data/few_shot_ann/coco/annotaions/train.json'. Where does this train.json come from? Shouldn't pre-training read the annotations under the coco folder? Also, I found a trainvalno5k.json and a 5k.json in the data preparation docs; are these the two json files it wants? Looking forward to your answer!

    opened by kike-0304 6
  • RuntimeError: The expanded size of the tensor (21) must match the existing size (54) at non-singleton dimension 0.  Target sizes: [21, 1024].  Tensor sizes: [54, 1024]

    Traceback (most recent call last):
      File "/home/lbc/miniconda3/envs/mmfewshot/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/lbc/miniconda3/envs/mmfewshot/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/lbc/mmfewshot-main/tools/detection/misc/initialize_bbox_head.py", line 289, in <module>
        main()
      File "/home/lbc/mmfewshot-main/tools/detection/misc/initialize_bbox_head.py", line 278, in main
        args)
      File "/home/lbc/mmfewshot-main/tools/detection/misc/initialize_bbox_head.py", line 169, in random_init_checkpoint
        new_weight[:prev_cls] = pretrained_weight[:prev_cls]
    RuntimeError: The expanded size of the tensor (21) must match the existing size (54) at non-singleton dimension 0. Target sizes: [21, 1024]. Tensor sizes: [54, 1024]

    The process of fsce on my own coco format datasets is:

    1. Base training: ckpt (step 1)
    2. Step two: use the best val .pth from step 1 for training? python3.7 -m tools.detection.misc.initialize_bbox_head --src1 ./work_dirs/fsce_r101_fpn_coco_base-training/best_bbox_mAP_iter_105000.pth --method random_init --save-dir ./work_dirs/fsce_r101_fpn_coco-split1_base-training
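
    For context on the RuntimeError above, the failing line copies the first prev_cls rows of the pretrained classifier weight into a freshly initialized tensor, so the copy only succeeds when the new head has at least as many class rows as the checkpoint. A hedged sketch of that invariant (shapes taken from the traceback; the code is illustrative rather than the actual script):

        import torch

        # The pretrained head has 54 class rows, the new head only 21.
        pretrained_weight = torch.randn(54, 1024)
        new_weight = torch.randn(21, 1024)
        prev_cls = 54
        # Failing assignment: 54 rows cannot be written into a 21-row
        # tensor, which raises the RuntimeError above. prev_cls must not
        # exceed new_weight.size(0), i.e. the fine-tune num_classes must
        # cover all base classes of the checkpoint.
        # new_weight[:prev_cls] = pretrained_weight[:prev_cls]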
    opened by Williamlizl 6
  • Fix tabular printing of dataset information

    Motivation

    When the length of the last row_data chunk is greater than 0 but less than 10, that row_data is never printed.

    Modification

    When the last row_data is not empty, add it to table_data.
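
    A hedged sketch of the pattern being fixed (row_data/table_data and the chunk size of 10 come from the description above; everything else is illustrative):

        # Rows are flushed to table_data in chunks of 10; before the fix,
        # a trailing chunk shorter than 10 was silently dropped.
        items = list(range(25))  # e.g. 25 cells of dataset information
        table_data = []
        row_data = []
        for item in items:
            row_data.append(item)
            if len(row_data) == 10:
                table_data.append(row_data)
                row_data = []
        # The fix: also flush a non-empty remainder.
        if len(row_data) > 0:
            table_data.append(row_data)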

    opened by LiangYang666 4
  • Few-shot instead of one-shot in demo inference

    Currently, the demo script (classification) takes only one sample in the support set. It uses the process_support_images() method to forward the support set. How to modify this in order to allow for more than one sample in the support set?

    One idea could be to place another set of support images in a different folder and then forward that as well. The model.before_forward_support() method could then be modified so that it does not reset the features. For example, meta_baseline_head resets the saved features there.

    Then (again for meta_baseline), meta_baseline_head.before_forward_query would also have to be modified, since it replaces self.mean_support_feats with the mean of the new support set.

    Would these two changes in this case be enough to adapt for a few-shot instead of a one-shot inference?
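
    As a rough sketch of the first idea, the snippet below builds a multi-shot support set from one subfolder per class and forwards it through the demo's process_support_images helper. The directory layout, the label handling, and the assumption that before_forward_support() has been patched not to reset saved features are all hypothetical.

        import os

        from mmfewshot.classification.apis import process_support_images

        # model: a classifier built beforehand, e.g. via init_classifier().
        support_root = 'demo/support_images'  # assumed layout: <class>/<img>
        support_imgs, support_labels = [], []
        for class_name in sorted(os.listdir(support_root)):
            class_dir = os.path.join(support_root, class_name)
            for fname in sorted(os.listdir(class_dir)):
                support_imgs.append(os.path.join(class_dir, fname))
                support_labels.append(class_name)
        # With the reset disabled, the whole set can be forwarded in one
        # call (or folder by folder).
        process_support_images(model, support_imgs, support_labels)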

    opened by rlleshi 4
  • How does it work

    Following the documentation, the error below occurs during training, and I don't know how to solve it. Has anyone encountered it? TypeError: __init__() got an unexpected keyword argument 'persistent_workers'
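
    For reference, persistent_workers is a torch.utils.data.DataLoader argument introduced in PyTorch 1.7, so passing it on an older PyTorch raises exactly this TypeError. A minimal version-guard sketch (illustrative, not the toolbox's own handling):

        import torch
        from packaging import version
        from torch.utils.data import DataLoader, TensorDataset

        dataset = TensorDataset(torch.zeros(4, 3))
        kwargs = dict(batch_size=2, num_workers=2)
        # DataLoader only accepts persistent_workers on PyTorch >= 1.7.
        if version.parse(torch.__version__.split('+')[0]) >= version.parse('1.7.0'):
            kwargs['persistent_workers'] = True
        loader = DataLoader(dataset, **kwargs)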

    opened by isJunCheng 3
  • Question about the training of MatchingNetwork

    Hi, Great Job.

    I have some questions about the training process of the matching network (classification).

    • In this line, https://github.com/open-mmlab/mmfewshot/blob/31583cccb8ef870c9e688b1dc259263b73e58884/configs/classification/matching_net/mini_imagenet/matching-net_conv4_1xb105_mini-imagenet_5way-1shot.py#L28 you use num_shots=5 for training 5-way 1-shot; is this a bug?
    • The batch size shown in the result table is 64; is this the training batch size or the test batch size?
    • How large is the gap between the meta-val and meta-test splits in your experiment?
      • In the log of matching_net 5-way 1-shot, the max accuracy is about 51%, while the test result is 53%; does this mean there is a ~2 point gap between the two sets?

    Thanks, Best

    opened by tonysy 3
  • meta_test_head is None on demo

    The error occurs when running demo_metric_classifier_1shot_inference with a custom-trained NegMargin model: the meta_test_head is None. Testing the model with dist_test works as expected, though. I am not sure why the meta test head was not saved. A comment here says that it is only built and run during testing; I am not sure what that means, though.

    The model config is the same as the standard in other config files:

    model = dict(
        type='NegMargin',
        backbone=dict(type='Conv4'),
        head=dict(
            type='NegMarginHead',
            num_classes=6,
            in_channels=1600,
            metric_type='cosine',
            margin=-0.01,
            temperature=10.0),
        meta_test_head=dict(
            type='NegMarginHead',
            num_classes=6,
            in_channels=1600,
            metric_type='cosine',
            margin=0.0,
            temperature=5.0))
    

    Otherwise, the config file itself is similar to other neg_margin config files for the cube dataset.

    opened by rlleshi 3
  • Don't find the “frozen_parameters” parameter in the relevant source code

    I found that the frozen_parameters parameter is used in many detection configs, but I have not found where it is consumed in the source code. Which part of the source code should I look at?
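
    For illustration, OpenMMLab-style training code usually treats frozen_parameters as a list of parameter-name prefixes and freezes every matching parameter before training starts. A hedged sketch of that convention (not a copy of mmfewshot's actual implementation):

        import torch.nn as nn

        def freeze_parameters(model: nn.Module, frozen_parameters) -> None:
            # Freeze every parameter whose dotted name starts with one of
            # the configured prefixes, e.g. ['backbone', 'rpn_head'].
            for name, param in model.named_parameters():
                if any(name.startswith(p) for p in frozen_parameters):
                    param.requires_grad = False

        # Usage sketch: freeze_parameters(model, cfg.frozen_parameters)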

    opened by wwwbq 2
  • The ann_file path of coco_benchmark in FewShotCocoDefaultDataset cannot be customized

    In mmfewshot/detection/datasets/coco.py, FewShotCocoDefaultDataset hard-codes the coco_benchmark dataset path as f'data/few_shot_ann/coco/benchmark_{shot}shot/full_box_{shot}shot_{class_name}_trainval.json'. My few_shot_ann path is different, and FewShotCocoDefaultDataset cannot accept a dataset path argument. I hope such a parameter can be added.

    opened by wwwbq 2
  • Error when running the first stage of MPSR

    Traceback (most recent call last):
      File "/root/mmfewshot/./tools/detection/train.py", line 236, in <module>
        main()
      File "/root/mmfewshot/./tools/detection/train.py", line 225, in main
        train_detector(
      File "/root/mmfewshot/mmfewshot/detection/apis/train.py", line 48, in train_detector
        data_loaders = [build_dataloader(ds, **train_loader_cfg) for ds in dataset]
      File "/root/mmfewshot/mmfewshot/detection/apis/train.py", line 48, in <listcomp>
        data_loaders = [build_dataloader(ds, **train_loader_cfg) for ds in dataset]
      File "/root/mmfewshot/mmfewshot/detection/datasets/builder.py", line 311, in build_dataloader
        data_loader = TwoBranchDataloader(
    TypeError: __init__() got an unexpected keyword argument 'persistent_workers'
    Killing subprocess 9272
    Traceback (most recent call last):
      File "/opt/conda/envs/pytorch1.8/lib/python3.9/runpy.py", line 197, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/opt/conda/envs/pytorch1.8/lib/python3.9/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/opt/conda/envs/pytorch1.8/lib/python3.9/site-packages/torch/distributed/launch.py", line 340, in <module>
        main()
      File "/opt/conda/envs/pytorch1.8/lib/python3.9/site-packages/torch/distributed/launch.py", line 326, in main
        sigkill_handler(signal.SIGTERM, None)  # not coming back
      File "/opt/conda/envs/pytorch1.8/lib/python3.9/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
        raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
    subprocess.CalledProcessError: Command '['/opt/conda/envs/pytorch1.8/bin/python', '-u', './tools/detection/train.py', '--local_rank=0', 'configs/detection/mpsr/voc/split1/mpsr_r101_fpn_2xb2_voc-split1_base-training.py', '--launcher', 'pytorch']' returned non-zero exit status 1.

    opened by DaDogs 1
  • Where should I put my few shot dataset?

    Since the few shot dataset is just for fine-tuning the model and test.py won't save the changes to the model, where should I put my few shot dataset: the training set or the validation set? That way, could I use the .pth file to predict my images in demo.py?

    opened by winnie9802 0
  • The initialization is blocked on building the models in FSClassification

    We met a problem when training classification models. We tested several times; the code is blocked on this line in classification.api.train: [screenshot]

    opened by jwfanDL 0
  • Request to add the ability to read tiff datasets

    While studying few shot learning, I came across tiff images in a dataset, and dataset loading fails on them. I would like to ask whether a tiff format read method could be added.

    opened by Djn-swjtu 0
Releases (v0.1.0)
  • v0.1.0 (Nov 24, 2021)

    Main Features

    • Support few shot classification and few shot detection.
    • For few shot classification, support fine-tune based methods (Baseline, Baseline++, NegMargin), metric-based methods (MatchingNet, ProtoNet, RelationNet, MetaBaseline), and a meta-learning based method (MAML).
    • For few shot detection, support fine-tune based methods (TFA, FSCE, MPSR) and meta-learning based methods (MetaRCNN, FsDetView, AttentionRPN).
    • Provide checkpoints and log files for all of the methods above.