
DU-VAE

This is the PyTorch implementation of the paper "Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness" (IJCAI 2021).

Acknowledgements

Our code is mainly based on this public code. Many thanks to its authors.

Requirements

  • Python >= 3.6
  • PyTorch >= 1.5.0

Data

The datasets used in this paper can be downloaded from this link; each dataset retains its own specific license where it is not MIT-licensed.

Usage

Example script to train DU-VAE on text data:

python text.py --dataset yelp \
    --device cuda:0 \
    --gamma 0.5 \
    --p_drop 0.2 \
    --delta_rate 1 \
    --kl_start 0 \
    --warm_up 10

Example script to train DU-VAE on image data:

python3.6 image.py --dataset omniglot \
    --device cuda:3 \
    --kl_start 0 \
    --warm_up 10 \
    --gamma 0.5 \
    --p_drop 0.1 \
    --delta_rate 1

Example script to train DU-IAF, a variant of DU-VAE, on text data:

python3.6 text_IAF.py --device cuda:2 \
    --dataset yelp \
    --gamma 0.6 \
    --p_drop 0.3 \
    --delta_rate 1 \
    --kl_start 0 \
    --warm_up 10 \
    --flow_depth 2 \
    --flow_width 60

Example script to train DU-IAF on image data:

python3.6 image_IAF.py --dataset omniglot \
    --device cuda:3 \
    --kl_start 0 \
    --warm_up 10 \
    --gamma 0.5 \
    --p_drop 0.15 \
    --delta_rate 1 \
    --flow_depth 2 \
    --flow_width 60

Here,

  • --dataset specifies the dataset name; synthetic, yahoo, and yelp are currently supported for text.py, and omniglot for image.py.
  • --kl_start is the starting KL weight (set it to 1.0 to disable KL annealing).
  • --warm_up is the number of annealing epochs; the KL weight increases linearly from kl_start to 1.0 over the first warm_up epochs (see the schedule sketch after this list).
  • --gamma is the parameter $\gamma$ in our Batch-Normalization approach; it must be greater than 0 to enable our model (see the sketch after this list).
  • --p_drop is the parameter $1-p$ in our Dropout approach, i.e., the fraction of units to drop; it should lie in (0, 1).
  • --delta_rate is the hyper-parameter $\alpha$ that controls the minimum value of the variance $\delta^2$.
  • --flow_depth is the number of MADE layers used to implement DU-IAF.
  • --flow_width controls the hidden size of each IAF block: the hidden size is the product of this value and the dimension of $z$. For example, --flow_width 60 with a 32-dimensional $z$ gives a hidden size of 1920.
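
For concreteness, here is a minimal sketch of the linear KL annealing schedule described above; the helper name kl_weight is ours, not the repository's:

# Minimal sketch of the annealing behind --kl_start and --warm_up (our code, not the repo's).
def kl_weight(epoch, kl_start=0.0, warm_up=10):
    """KL weight rises linearly from kl_start to 1.0 over the first warm_up epochs."""
    if warm_up <= 0:
        return 1.0
    return min(1.0, kl_start + (1.0 - kl_start) * epoch / warm_up)

# With --kl_start 0 --warm_up 10, epochs 0..10 give weights 0.0, 0.1, ..., 1.0.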
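
The two regularizers behind --gamma and --p_drop can be pictured with a rough PyTorch sketch. This is our own hedged reading of the paper (Batch-Normalization on the posterior means for diversity, Dropout on the variance path for uncertainty), not the repository's implementation; the module name RegularizedGaussianHead is hypothetical, and the variance floor controlled by --delta_rate is omitted:

import torch.nn as nn

class RegularizedGaussianHead(nn.Module):  # hypothetical name, illustration only
    def __init__(self, hidden_size, latent_size, gamma=0.5, p_drop=0.2):
        super().__init__()
        self.mu = nn.Linear(hidden_size, latent_size)
        self.logvar = nn.Linear(hidden_size, latent_size)
        # Diversity: BatchNorm over the posterior means, scale frozen at gamma > 0.
        self.bn = nn.BatchNorm1d(latent_size)
        nn.init.constant_(self.bn.weight, gamma)
        self.bn.weight.requires_grad = False
        # Uncertainty: Dropout (rate p_drop) on the path predicting the variances.
        self.drop = nn.Dropout(p_drop)

    def forward(self, h):
        mu = self.bn(self.mu(h))            # keeps posterior means spread out
        logvar = self.logvar(self.drop(h))  # keeps variances from collapsing
        return mu, logvar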

Reference

If you find our methods or code helpful, please kindly cite the paper:

@inproceedings{shen2021regularizing,
  title={Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness},
  author={Shen, Dazhong and Qin, Chuan and Wang, Chao and Zhu, Hengshu and Chen, Enhong and Xiong, Hui},
  booktitle={Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI-21)},
  year={2021}
}