Multi-Task Meta-Learning Modification with Stochastic Approximation

This repository contains the code for the paper
"Multi-Task Meta-Learning Modification with Stochastic Approximation".

Method pipeline (figure)

Dependencies

This code has been tested on Ubuntu 16.04 with Python 3.8 and PyTorch 1.8.

To install the required dependencies:

pip install -r requirements.txt
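
If you want to reproduce the tested Python version in an isolated environment first, one possible setup is the following (a minimal sketch, assuming conda is available; the environment name mtm-env is hypothetical):

conda create -n mtm-env python=3.8
conda activate mtm-env
pip install -r requirements.txt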

Usage

To reproduce the results on the benchmarks described in our article, use the following scripts. To vary the type of experiment, change the script parameters that control the benchmark dataset, the number of shots, and the number of ways (e.g. miniImageNet 1-shot 5-way or CIFAR-FS 5-shot 2-way), as illustrated in the sketch below.
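
For example, a CIFAR-FS 5-shot 2-way variant of the MAML baseline command differs only in the dataset, shot and way arguments (a sketch assuming the remaining flags are kept as in the commands below; the run name is hypothetical):

python maml/train.py ./datasets/ \
    --run-name reproduced-cifar-5shot-2way \
    --dataset cifarfs \
    --num-ways 2 \
    --num-shots 5 \
    --num-steps 5 \
    --num-epochs 600 \
    --use-cuda \
    --output-folder ./results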

MAML

Multi-task modification (MTM) for Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017).

Multi-task modifications for MAML are trained on top of a baseline MAML model, which has to be trained beforehand.

To train MAML (reproduced) on the miniImageNet 1-shot 2-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name reproduced-miniimagenet \
    --dataset miniimagenet \
    --num-ways 2 \
    --num-shots 1 \
    --num-steps 5 \
    --num-epochs 300 \
    --use-cuda \
    --output-folder ./results

To train MAML MTM SPSA-Track on the miniImageNet 1-shot 2-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name mini-imagenet-mtm-spsa-track \
    --load "./results/reproduced-miniimagenet/model.th" \
    --dataset miniimagenet \
    --num-ways 2 \
    --num-shots 1 \
    --num-steps 5 \
    --task-weighting spsa-track \
    --normalize-spsa-weights-after 100 \
    --num-epochs 40 \
    --use-cuda \
    --output-folder ./results

To train MAML (reproduced) on the tieredImageNet 1-shot 2-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name reproduced-tieredimagenet \
    --dataset tieredimagenet \
    --num-ways 2 \
    --num-shots 1 \
    --num-steps 5 \
    --num-epochs 300 \
    --use-cuda \
    --output-folder ./results

To train MAML MTM SPSA on the tieredImageNet 1-shot 2-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name tiered-imagenet-mtm-spsa \
    --load "./results/reproduced-tieredimagenet/model.th" \
    --dataset tieredimagenet \
    --num-ways 2 \
    --num-shots 1 \
    --num-steps 5 \
    --task-weighting spsa-delta \
    --normalize-spsa-weights-after 100 \
    --num-epochs 40 \
    --use-cuda \
    --output-folder ./results

To train MAML (reproduced) on the FC100 5-shot 5-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name reproduced-fc100 \
    --dataset fc100 \
    --num-ways 5 \
    --num-shots 5 \
    --num-steps 5 \
    --num-epochs 300 \
    --use-cuda \
    --output-folder ./results

To train MAML MTM SPSA-Coarse on the FC100 5-shot 5-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name fc100-mtm-spsa-coarse \
    --load "./results/reproduced-fc100/model.th" \
    --dataset fc100 \
    --num-ways 5 \
    --num-shots 5 \
    --num-steps 5 \
    --task-weighting spsa-per-coarse-class \
    --num-epochs 40 \
    --use-cuda \
    --output-folder ./results

To train MAML (reproduced) on the CIFAR-FS 1-shot 5-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name reproduced-cifar \
    --dataset cifarfs \
    --num-ways 5 \
    --num-shots 1 \
    --num-steps 5 \
    --num-epochs 600 \
    --use-cuda \
    --output-folder ./results

To train MAML MTM Inner First-Order on the CIFAR-FS 1-shot 5-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name cifar-mtm-inner-first-order \
    --load "./results/reproduced-cifar/model.th" \
    --dataset cifarfs \
    --num-ways 5 \
    --num-shots 1 \
    --num-steps 5 \
    --task-weighting gradient-novel-loss \
    --use-inner-optimizer \
    --num-epochs 40 \
    --use-cuda \
    --output-folder ./results

To train MAML MTM Backprop on the CIFAR-FS 1-shot 5-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name cifar-mtm-backprop \
    --load "./results/reproduced-cifar-5shot-5way/model.th" \
    --dataset cifarfs \
    --num-ways 5 \
    --num-shots 1 \
    --num-steps 5 \
    --task-weighting gradient-novel-loss \
    --num-epochs 40 \
    --use-cuda \
    --output-folder ./results

To test any of the models trained above, run:

python maml/test.py ./results/path-to-config/config.json --num-steps 10 --use-cuda

For instance, to test MAML MTM SPSA-Track on the miniImageNet 1-shot 2-way benchmark, run:

python maml/test.py ./results/mini-imagenet-mtm-spsa-track/config.json --num-steps 10 --use-cuda

Prototypical Networks

Multi-task modification (MTM) for Prototypical Networks (ProtoNet) (Snell et al., 2017).

To train ProtoNet MTM SPSA-Track with a ResNet-12 backbone on the miniImageNet 1-shot 5-way benchmark, run:

python protonet/train.py \
    --dataset miniImageNet \
    --network ResNet12 \
    --tracking \
    --train-shot 1 \
    --train-way 5 \
    --val-shot 1 \
    --val-way 5

To test ProtoNet MTM SPSA-Track with a ResNet-12 backbone on the miniImageNet 1-shot 5-way benchmark, run:

python protonet/test.py --dataset miniImageNet --network ResNet12 --shot 1 --way 5

To train ProtoNet MTM Backprop with a 64-64-64-64 backbone on the CIFAR-FS 1-shot 2-way benchmark, run:

python protonet/train.py \
    --dataset CIFAR_FS \
    --train-weights \
    --train-weights-layer \
    --train-shot 1 \
    --train-way 2 \
    --val-shot 1 \
    --val-way 2

To test ProtoNet MTM Backprop with a 64-64-64-64 backbone on the CIFAR-FS 1-shot 2-way benchmark, run:

python protonet/test.py --dataset CIFAR_FS --shot 1 --way 2

To train ProtoNet MTM Inner First-Order with a 64-64-64-64 backbone on the FC100 10-shot 5-way benchmark, run:

python protonet/train.py \
    --dataset FC100 \
    --train-weights \
    --train-weights-opt \
    --train-shot 10 \
    --train-way 5 \
    --val-shot 10 \
    --val-way 5

To test ProtoNet MTM Inner First-Order with a 64-64-64-64 backbone on the FC100 10-shot 5-way benchmark, run:

python protonet/test.py --dataset FC100 --shot 10 --way 5

To train ProtoNet MTM SPSA with a 64-64-64-64 backbone on the tieredImageNet 5-shot 2-way benchmark, run:

python protonet/train.py \
    --dataset tieredImageNet \
    --train-shot 5 \
    --train-way 2 \
    --val-shot 5 \
    --val-way 2

To test ProtoNet MTM SPSA with a 64-64-64-64 backbone on the tieredImageNet 5-shot 2-way benchmark, run:

python protonet/test.py --dataset tieredImageNet --shot 5 --way 2

Acknowledgments

Our code uses some dataloaders from Torchmeta.

The code in the maml folder is based on the extended implementation from Torchmeta and pytorch-maml. It has been updated so that the baseline scores follow those of the original MAML paper more closely.

The code in the protonet folder is based on the implementation from MetaOptNet. All .py files in this folder except dataloaders.py and optimize.py were adapted from that implementation and subsequently modified. A copy of the Apache License, Version 2.0 is available in the protonet folder.
