Hierarchical Few-Shot Generative Models

Overview


Giorgio Giannone, Ole Winther

This repo contains code and experiments for the paper Hierarchical Few-Shot Generative Models.


Settings

Clone the repo:

git clone https://github.com/georgosgeorgos/hierarchical-few-shot-generative-models
cd hierarchical-few-shot-generative-models

Create and activate the conda env:

conda env create -f environment.yml
conda activate hfsgm

The code has been tested on Ubuntu 18.04, Python 3.6, and CUDA 11.3.

We use wandb for visualization. The first time you run the code you will need to log in (for example by running wandb login).
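
To check that the GPU is visible and that wandb is authenticated before a long run, you can use a small sanity check like the following; this is only a convenience sketch, the training code sets up wandb itself:

import torch
import wandb

# Confirm the GPU is visible to PyTorch.
print("CUDA available:", torch.cuda.is_available())

# Prompts for (or reuses) your wandb API key, equivalent to running `wandb login` in the shell.
wandb.login()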

Data

We provide a preprocessed Omniglot dataset.

From the main folder, download the data and place it in data/omniglot_ns/:

wget https://github.com/georgosgeorgos/hierarchical-few-shot-generative-models/releases/download/Omniglot/omni_train_val_test.pkl
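
If you prefer to fetch the file from Python instead of wget, a minimal sketch that creates the target folder and downloads the pickle into data/omniglot_ns/ (same URL as above):

import os
import urllib.request

# Download the preprocessed Omniglot pickle into data/omniglot_ns/.
url = ("https://github.com/georgosgeorgos/hierarchical-few-shot-generative-models"
       "/releases/download/Omniglot/omni_train_val_test.pkl")
out_dir = "data/omniglot_ns"
os.makedirs(out_dir, exist_ok=True)
urllib.request.urlretrieve(url, os.path.join(out_dir, "omni_train_val_test.pkl"))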

For CelebA, you need to download the dataset from here.

Dataset

In dataset we provide utilities to process and augment datasets in the few-shot setting. Each dataset is a large collection of small sets, and sets can be created dynamically. The dataset/base.py file collects basic info about the datasets. For binary datasets (omniglot_ns.py) we augment with flips and rotations; for RGB datasets (celeba.py) we use only flips.
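
As an illustration of the "collection of small sets" format, here is a minimal, hypothetical sketch of a set-structured dataset with flip/rotation augmentation; the class and argument names are illustrative and not the ones used in dataset/ (see omniglot_ns.py and celeba.py for the actual implementations):

import numpy as np
import torch
from torch.utils.data import Dataset

class FewShotSetDataset(Dataset):  # illustrative name, not a repo class
    def __init__(self, images_per_class, sample_size=5, augment=True, binary=True):
        # images_per_class: list of arrays, each of shape [n_images, H, W]
        self.images_per_class = images_per_class
        self.sample_size = sample_size
        self.augment = augment
        self.binary = binary

    def __len__(self):
        return len(self.images_per_class)

    def __getitem__(self, idx):
        # Every item is a small set of images drawn from a single class.
        imgs = self.images_per_class[idx]
        pick = np.random.choice(len(imgs), self.sample_size, replace=False)
        x = imgs[pick].astype(np.float32)                 # [sample_size, H, W]
        if self.augment:
            if np.random.rand() < 0.5:
                x = x[:, :, ::-1]                         # flip the whole set
            if self.binary:                               # rotations only for binary data
                x = np.rot90(x, k=np.random.randint(4), axes=(1, 2))
        return torch.from_numpy(x.copy()).unsqueeze(1)    # [sample_size, 1, H, W]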

Experiment

In experiment we implement scripts for model evaluation, experiments, and visualization.

  • attention.py - visualize attention weights and heads for models with learnable aggregations (LAG).
  • cardinality.py - compute ELBOs for different input set sizes: [1, 2, 5, 10, 20].
  • classifier_mnist.py - few-shot classifiers on MNIST.
  • kl_layer.py - compute KL over z and c for each layer in latent space.
  • marginal.py - compute approximate log-marginal likelihood with 1K importance samples (see the sketch after this list).
  • refine_vis.py - visualize refined samples.
  • sampling_rgb.py - reconstruction, conditional, refined, unconditional sampling for RGB datasets.
  • sampling_transfer.py - reconstruction, conditional, refined, unconditional sampling on transfer datasets.
  • sampling.py - reconstruction, conditional, refined, unconditional sampling for binary datasets.
  • transfer.py - compute ELBOs on MNIST, DoubleMNIST, TripleMNIST.
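
The estimate in marginal.py is the standard importance-weighted bound on the log-marginal likelihood. A minimal, model-agnostic sketch of that computation (the log_weight interface in the usage comment is a hypothetical stand-in, not the repo's API):

import math
import torch

def log_marginal_estimate(log_weights):
    # log_weights: tensor of shape [K], one importance weight per sample,
    # i.e. log p(X, latents) - log q(latents | X) for each draw from q.
    k = log_weights.shape[0]
    return torch.logsumexp(log_weights, dim=0) - math.log(k)

# Usage sketch with K = 1000 importance samples:
# log_w = torch.stack([log_weight(model, X) for _ in range(1000)])
# print(log_marginal_estimate(log_w))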

Model

In model we implement baselines and model variants; a minimal sketch of the structure they share follows the list.

  • base.py - base class for all the models.
  • vae.py - Variational Autoencoder (VAE).
  • ns.py - Neural Statistician (NS).
  • tns.py - NS with learnable aggregation (NS-LAG).
  • cns.py - NS with convolutional latent space (CNS).
  • ctns.py - CNS with learnable aggregation (CNS-LAG).
  • hfsgm.py - Hierarchical Few-Shot Generative Model (HFSGM).
  • thfsgm.py - HFSGM with learnable aggregation (HFSGM-LAG).
  • chfsgm.py - HFSGM with convolutional latent space (CHFSGM).
  • cthfsgm.py - CHFSGM with learnable aggregation (CHFSGM-LAG).
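
All of these models share the same two-level structure: a set-level context c is inferred by aggregating over the input set, and per-sample latents z are conditioned on c. A minimal, hypothetical sketch of that forward pass with a single stochastic layer per level (module names and shapes are illustrative, not the repo's):

import torch
import torch.nn as nn

class ToySetGenerativeModel(nn.Module):  # illustrative, not a repo class
    def __init__(self, x_dim=784, h_dim=128, c_dim=32, z_dim=32):
        super().__init__()
        self.enc_x = nn.Linear(x_dim, h_dim)
        self.to_c = nn.Linear(h_dim, 2 * c_dim)           # q(c | X) from a pooled set embedding
        self.to_z = nn.Linear(h_dim + c_dim, 2 * z_dim)   # q(z | x, c)
        self.dec = nn.Linear(z_dim + c_dim, x_dim)        # p(x | z, c)

    @staticmethod
    def reparameterize(mu_logvar):
        mu, logvar = mu_logvar.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, X):                                  # X: [sample_size, x_dim], one set
        h = torch.relu(self.enc_x(X))
        c = self.reparameterize(self.to_c(h.mean(dim=0, keepdim=True)))  # [1, c_dim]
        c = c.expand(X.shape[0], -1)
        z = self.reparameterize(self.to_z(torch.cat([h, c], dim=-1)))    # [sample_size, z_dim]
        return torch.sigmoid(self.dec(torch.cat([z, c], dim=-1)))        # Bernoulli means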

Script

In script we provide the shell scripts used to train the models in the paper.

To run a CNS on Omniglot:

sh script/main_cns.sh GPU_NUMBER omniglot_ns

Train a model

To train a generic model run:

python main.py --name {VAE, NS, CNS, CTNS, CHFSGM, CTHFSGM} \
               --model {vae, ns, cns, ctns, chfsgm, cthfsgm} \
               --augment \
               --dataset omniglot_ns \
               --likelihood binary \
               --hidden-dim 128 \
               --c-dim 32 \
               --z-dim 32 \
               --output-dir /output \
               --alpha-step 0.98 \
               --alpha 2 \
               --adjust-lr \
               --scheduler plateau \
               --sample-size {2, 5, 10} \
               --sample-size-test {2, 5, 10} \
               --num-classes 1 \
               --learning-rate 1e-4 \
               --epochs 400 \
               --batch-size 100 \
               --tag (optional string)
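
For reference, --sample-size controls how many images form each input set, so under the settings above a training batch can be thought of as a batch of sets. A minimal sketch of the corresponding tensor layout (the exact shapes used internally may differ):

import torch

# Illustrative layout for a batch of sets on binarized Omniglot:
# batch_size sets, each with sample_size images of shape 1 x 28 x 28.
batch_size, sample_size = 100, 5
X = torch.rand(batch_size, sample_size, 1, 28, 28)        # a batch of sets
X = (X > 0.5).float()                                      # stand-in for binarized images
flat = X.view(batch_size * sample_size, 1, 28, 28)         # flatten sets for a per-image encoder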

If you do not want to save logs, use the --dry_run flag. This flag calls utils/trainer_dry.py instead of trainer.py.


Acknowledgments

A lot of code and ideas in this repo are borrowed from other open-source projects.
