Code for Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights

Overview

Piggyback: https://arxiv.org/abs/1801.06519

Pretrained masks and backbones are available here: https://uofi.box.com/s/c5kixsvtrghu9yj51yb1oe853ltdfz4q

Datasets in PyTorch format are available here: https://uofi.box.com/s/ixncr3d85guosajywhf7yridszzg5zsq
All rights belong to the respective publishers. The datasets are provided only to aid reproducibility.

The PyTorch-friendly Places365 dataset can be downloaded from http://places2.csail.mit.edu/download.html

Place masks in checkpoints/ and unzipped datasets in data/
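
An illustrative layout (directory names from this README; the example file and dataset folder names are assumptions):

piggyback/
  checkpoints/   # downloaded masks, e.g. vgg16_binary.pt
  data/          # unzipped datasets, e.g. flowers/
  src/           # run all commands and scripts from here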

Errors (%) obtained with the provided masks:

Dataset         VGG-16   ResNet-50   DenseNet-121
CUBS             20.75       18.23          19.24
Stanford Cars    11.78       10.19          10.62
Flowers           6.93        4.77           4.91
WikiArt          29.80       28.57          29.33
Sketch           22.30       19.75          20.05

Note that the numbers in the paper are averaged over multiple runs for each ordering of datasets. The numbers above were obtained by evaluating the models on a Titan X (Pascal); numbers on other GPUs might differ slightly (~0.1%) owing to cuDNN algorithm selection. See https://discuss.pytorch.org/t/slightly-different-results-on-k-40-v-s-titan-x/10064

Requirements:

Python 2.7 or 3.x
torch==0.2.0.post3
torchvision==0.1.9
torchnet (pip install git+https://github.com/pytorch/[email protected])
tqdm (pip install tqdm)

Run all code from the src/ directory, e.g. ./scripts/run_piggyback_training.sh

Training:

Check out src/scripts/run_piggyback_training.sh.

This script uses the default hyperparameters and trains a model as described in the paper. The best-performing model on the val set is saved to disk. This saved model includes the real-valued mask weights.
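
For reference, what is being trained is an elementwise binary mask over frozen backbone weights: real-valued mask weights are thresholded in the forward pass, and gradients flow to them via a straight-through estimator. The following is a minimal sketch of that mechanism, not the repo's actual implementation; the init and threshold values are illustrative defaults, and only the conv case is shown.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Binarize(torch.autograd.Function):
    """Threshold real-valued mask weights; pass gradients straight through."""
    @staticmethod
    def forward(ctx, mask_real, threshold):
        return (mask_real > threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: the gradient w.r.t. the binary mask
        # is used unchanged as the gradient w.r.t. the real-valued mask.
        return grad_output, None

class MaskedConv2d(nn.Module):
    """A frozen pretrained conv whose weights are gated by a learned mask."""
    def __init__(self, conv, mask_init=1e-2, threshold=5e-3):
        super().__init__()
        self.conv = conv
        for p in self.conv.parameters():
            p.requires_grad = False  # the backbone stays fixed
        self.threshold = threshold
        # Real-valued mask weights: the only parameters trained per task.
        self.mask_real = nn.Parameter(mask_init * torch.ones_like(conv.weight))

    def forward(self, x):
        mask = Binarize.apply(self.mask_real, self.threshold)
        return F.conv2d(x, self.conv.weight * mask, self.conv.bias,
                        self.conv.stride, self.conv.padding,
                        self.conv.dilation, self.conv.groups)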

By default, we use the models provided by torchvision as our backbone networks. If you intend to evaluate with the masks provided by us, please use the torch and torchvision versions listed above. If you want to use a different version but still use our masks, download the pytorch_backbone networks from the Box link above and make the appropriate changes to your PyTorch code to load those backbone models.
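
Swapping in the downloaded backbone weights might look like the sketch below; the checkpoint filename is hypothetical, and the exact loading code depends on how the Box checkpoints are stored.

import torch
import torchvision.models as models

# Default behaviour: torchvision's pretrained weights
# (these can change across torchvision releases).
backbone = models.vgg16(pretrained=True)

# To pin the weights instead, load the pytorch_backbone checkpoint
# downloaded from the Box link above (filename is illustrative).
backbone.load_state_dict(torch.load('../checkpoints/vgg16_backbone.pt'))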

Saving trained masks only:

Check out src/scripts/run_packing.sh.

This script extracts the binary/ternary masks from the trained models above and saves them separately.
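
Conceptually, packing amounts to thresholding each real-valued mask into a compact low-precision tensor. A rough sketch for the binary case, assuming a checkpoint layout that maps layer names to real-valued mask tensors (the paths, keys, and threshold are assumptions):

import torch

THRESHOLD = 5e-3  # illustrative; must match the threshold used at training time

# Hypothetical checkpoint layout: layer name -> real-valued mask tensor.
ckpt = torch.load('../checkpoints/flowers_vgg16.pt')
binary_masks = {name: (mask_real > THRESHOLD).to(torch.uint8)
                for name, mask_real in ckpt['mask_real'].items()}

# Stored as uint8 (1 bit per weight after further bit-packing), each mask
# adds only a small per-task overhead on top of the shared backbone.
torch.save(binary_masks, '../checkpoints/flowers_vgg16_binary.pt')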

Eval:

Apply the saved masks to a backbone network and run evaluation.

By default, our backbone models are those provided with torchvision.
Note that to replicate our results, you have to use the package versions specified above.
Newer package versions might ship different backbone weights, and the provided masks won't work with them.

cd src  # Run everything from src/

CUDA_VISIBLE_DEVICES=0 python pack.py --mode eval --dataset flowers \
  --arch vgg16 \
  --maskloc ../checkpoints/vgg16_binary.pt
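
Under the hood, applying a mask is just an elementwise product with the frozen backbone weights. A hedged sketch of that step (the mask file layout and key names are assumptions, not the repo's actual format):

import torch
import torchvision.models as models

backbone = models.vgg16(pretrained=True)
masks = torch.load('../checkpoints/vgg16_binary.pt')

# Assumed layout: module name -> binary mask with the same shape
# as that module's weight tensor.
for name, module in backbone.named_modules():
    if name in masks:
        module.weight.data.mul_(masks[name].float())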