Adversarially Learned Inference

Code for the Adversarially Learned Inference paper.

Compiling the paper locally

From the repo's root directory,

$ cd papers
$ latexmk --pdf adversarially_learned_inference

Requirements

  • Blocks, development version
  • Fuel, development version
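
Both dependencies need to be the bleeding-edge development versions rather than the PyPI releases. An optional sanity check before launching anything (a minimal sketch; it only assumes both packages expose __version__):

import blocks
import fuel

# Print the installed versions to confirm the development builds are the
# ones Python actually picks up.
print('blocks', blocks.__version__)
print('fuel', fuel.__version__)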

Setup

Clone the repository, then install with

$ pip install -e ALI
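
If the editable install succeeded, the package is importable from anywhere. A quick check (the helper names below are the ones the experiment scripts, e.g. experiments/ali_cifar10.py, import themselves):

import ali
# Utilities used by the experiment scripts.
from ali.utils import get_log_odds, conv_brick, conv_transpose_brick, bn_brick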

Downloading and converting the datasets

Set up your ~/.fuelrc file:

$ echo "data_path: \"<MY_DATA_PATH>\"" > ~/.fuelrc
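
Fuel reads ~/.fuelrc when it is imported; a small check (plain Fuel API, nothing repo-specific) that the path was picked up:

from fuel import config

# Should point at (or include) <MY_DATA_PATH>.
print(config.data_path)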

Go to <MY_DATA_PATH>:

$ cd <MY_DATA_PATH>

Download the CIFAR-10 dataset:

$ fuel-download cifar10
$ fuel-convert cifar10
$ fuel-download cifar10 --clear

Download the SVHN format 2 dataset:

$ fuel-download svhn 2
$ fuel-convert svhn 2
$ fuel-download svhn 2 --clear

Download the CelebA dataset:

$ fuel-download celeba 64
$ fuel-convert celeba 64
$ fuel-download celeba 64 --clear
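
To verify that the conversions produced usable HDF5 files, you can load them back through Fuel's standard dataset classes. A sketch (the constructor arguments are plain Fuel API, not something specific to this repo):

from fuel.datasets import CIFAR10, SVHN
from fuel.datasets.celeba import CelebA

# Each class reads the HDF5 file written by the corresponding fuel-convert call.
cifar10 = CIFAR10(which_sets=('train',))
svhn = SVHN(which_format=2, which_sets=('train',))
celeba = CelebA('64', which_sets=('train',))

print(cifar10.num_examples, svhn.num_examples, celeba.num_examples)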

Training the models

Make sure you're in the repo's root directory; the THEANORC=theanorc prefix used below points Theano at the theanorc configuration file located there.

CIFAR-10

$ THEANORC=theanorc python experiments/ali_cifar10.py

SVHN

$ THEANORC=theanorc python experiments/ali_svhn.py

CelebA

$ THEANORC=theanorc python experiments/ali_celeba.py

Toy task

$ THEANORC=theanorc python experiments/ali_mixture.py
$ THEANORC=theanorc python experiments/gan_mixture.py
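
Each experiment saves its state as a Blocks main loop archive; these are the .tar files (e.g. ali_cifar10.tar) that the evaluation scripts below expect. A minimal sketch for inspecting such an archive, assuming training finished and wrote it to the working directory:

from blocks.serialization import load

# Load the serialized main loop and report how far training got.
with open('ali_cifar10.tar', 'rb') as src:
    main_loop = load(src)

print(main_loop.log.status['epochs_done'])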

Evaluating the models

Samples

$ THEANORC=theanorc scripts/sample [main_loop.tar]

e.g.

$ THEANORC=theanorc scripts/sample ali_cifar10.tar

Interpolations

$ THEANORC=theanorc scripts/interpolate [which_dataset] [main_loop.tar]

e.g.

$ THEANORC=theanorc scripts/interpolate celeba ali_celeba.tar

Reconstructions

$ THEANORC=theanorc scripts/reconstruct [which_dataset] [main_loop.tar]

e.g.

$ THEANORC=theanorc scripts/reconstruct cifar10 ali_cifar10.tar

Semi-supervised learning on SVHN

First, preprocess the SVHN dataset with the learned ALI features:

$ THEANORC=theanorc scripts/preprocess_representations [main_loop.tar] [save_path.hdf5]

e.g.

$ THEANORC=theanorc scripts/preprocess_representations ali_svhn.tar ali_svhn_preprocessed.hdf5

Then, launch the semi-supervised script:

$ python experiments/semi_supervised_svhn.py [save_path.hdf5]

e.g.

$ python experiments/semi_supervised_svhn.py ali_svhn_preprocessed.hdf5

[...]
Validation error rate = ... +- ...
Test error rate = ... +- ...
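
The preprocessing step writes the learned ALI features to an HDF5 file; if it follows Fuel's H5PYDataset layout (an assumption, not something stated in this README), it can be inspected directly:

from fuel.datasets import H5PYDataset

# Assumes a 'train' split exists in the preprocessed file; adjust if it differs.
features = H5PYDataset('ali_svhn_preprocessed.hdf5', which_sets=('train',))
print(features.num_examples, features.sources)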

Toy task

$ THEANORC=theanorc scripts/generate_mixture_plots [ali_main_loop.tar] [gan_main_loop.tar]

e.g.

$ THEANORC=theanorc scripts/generate_mixture_plots ali_mixture.tar gan_mixture.tar

Comments
  • Conditional Generation

    I'm interested in getting the update to this codebase that includes conditional generation, as covered in the more recent version of the paper (related image: celeba_conditional_sequence). Can you let me know if that will be added to the repo?

    opened by dribnet 8
  • mistake in D(x,z) input size

    In Table 5 of the paper you state that the input size for D(x,z) is 1024x1x1, which I think is wrong after looking at the preceding output sizes of D(x) and D(z). I think it should be 1536x1x1.

    Is that assumption correct?

    opened by edgarriba 5
  • deserialization of models hangs

    Training goes well for me using the scripts in experiments with the latest version of blocks, but then when I run any subsequent command that uses the generated model like scripts/sample or scripts/reconstruct, the command hangs indefinitely. My guess is that the deserialization is getting jammed up.

    I can look into it more - not yet familiar with the new tar format - but curious if this might be a known issue.

    opened by dribnet 3
  • Fuel version problem

    I installed the current development version of Fuel, but ran into some issues when downloading the data:

    $ fuel-download celeba 64
    $ fuel-convert celeba 64
    $ fuel-download celeba 64 --clear

    The error message I got is:

    fuel-download: error: unrecognized arguments: 64

    If I remove the 64, I get:

    TypeError: __init__() got an unexpected keyword argument 'max_value'

    Could someone please specify which versions or commits of fuel and progressbar I should use? Thanks.

    opened by hope-yao 1
  • Where to use the reparametrization trick

    In the decoder module, I found that z is sampled from N(0, 1), so where did you use the reparametrization trick described in formulas (2) and (3) of the paper?

    opened by wuhaozhe 0
  • semi-supervised learning

    Hello, I read the paper and the source code. Section 4.3 of the paper mentions that "The last three hidden layers of the encoder as well as its output are concatenated to form a 8960-dimensional feature vector." Could you please tell me how to compute that dimension? Thanks very much.

    opened by C-xiaomeng 1
  • ImportError: No module named ali.utils

    I followed the same steps in the readme file, but when I run this line

    $ THEANORC=theanorc python experiments/ali_cifar10.py

    I get:

    Traceback (most recent call last):
      File "experiments/ali_cifar10.py", line 3, in <module>
        from ali.utils import get_log_odds, conv_brick, conv_transpose_brick, bn_brick
    ImportError: No module named ali.utils
    
    opened by xtarx 0
  • Preprocess_representation has a bug for me

    Hi, I was trying to reproduce the representation learning results of the paper. Everything works fine except the "preprocess_representations" script, which fails with this error:

      File "scripts/preprocess_representations", line 32, in preprocess_svhn
        bricks=[ali.encoder.layers[-9], ali.encoder.layers[-6],
    AttributeError: 'GaussianConditional' object has no attribute 'layers'

    Any help would be appreciated.

    opened by MarziEd 1
  • Semi-supervised learning

    I've been trying to reproduce your figures for semi-supervised learning on CIFAR-10 (19.98% with 1000 labels). This result is based on the technique proposed in Salimans et al. (2016), not SVMs. Is there any way you can include your code, or at least any changes to the hyperparameters in ali_cifar10.py?

    Thanks in advance for your help.

    opened by christiancosgrove 7
Releases

  • v1

Owner

Mohamed Ishmael Belghazi