Pytorch implementations of Bayes By Backprop, MC Dropout, SGLD, the Local Reparametrization Trick, KF-Laplace, SG-HMC and more

Overview

Bayesian Neural Networks

License: MIT | Python 2.7+ | PyTorch 1.0

Pytorch implementations for the following approximate inference methods:

  • Bayes by Backprop
  • Bayes by Backprop + Local Reparametrisation Trick
  • MC Dropout
  • Stochastic Gradient Langevin Dynamics (SGLD)
  • Preconditioned SGLD
  • Kronecker-Factorised Laplace Approximation
  • Stochastic Gradient Hamiltonian Monte Carlo with Scale Adaption

We also provide code for:

  • Bootstrap MAP Ensemble

Prerequisites

  • PyTorch
  • Numpy
  • Matplotlib

The project is written in Python 2.7 and PyTorch 1.0.1. If CUDA is available, it will be used automatically. The models can also run on CPU, as they are not excessively large.

Usage

Structure

Regression experiments

We carried out homoscedastic and heteroscedastic regression experiments on toy datasets, generated with a Gaussian Process ground truth, as well as on real data (six UCI datasets).

Notebooks/regression/(ModelName)_(ExperimentType).ipynb: Contains experiments using (ModelName) on (ExperimentType), i.e. homoscedastic/heteroscedastic. The heteroscedastic notebooks contain both toy and UCI dataset experiments for a given (ModelName).

We also provide Google Colab notebooks, so you can run the experiments on a GPU for free. No modifications are required other than selecting Runtime -> Change runtime type -> Hardware accelerator -> GPU; all dependencies and datasets are set up from within the notebooks.

MNIST classification experiments

train_(ModelName)_(Dataset).py: Trains (ModelName) on (Dataset). Training metrics and model weights will be saved to the specified directories.

src/: General utilities and model definitions.

Notebooks/classification: An assortment of notebooks which allow for model training, evaluation and running of digit rotation uncertainty experiments. They also allow for weight distribution plotting and weight pruning, and for loading pre-trained models for experimentation.

Bayes by Backprop (BBP)

(https://arxiv.org/abs/1505.05424)

Colab notebooks with regression models: BBP homoscedastic / heteroscedastic

Train a model on MNIST:

python train_BayesByBackprop_MNIST.py [--model [MODEL]] [--prior_sig [PRIOR_SIG]] [--epochs [EPOCHS]] [--lr [LR]] [--n_samples [N_SAMPLES]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

For an explanation of the script's arguments:

python train_BayesByBackprop_MNIST.py -h

Best results are obtained with a Laplace prior.

Local Reparametrisation Trick

(https://arxiv.org/abs/1506.02557)

Bayes By Backprop inference where the mean and variance of activations are calculated in closed form and activations are sampled instead of weights. This makes the variance of the Monte Carlo ELBO estimator scale as 1/M, where M is the minibatch size, whereas sampling weights gives a variance that scales as (M-1)/M. The KL divergence between Gaussians can also be computed in closed form, further reducing variance. Computation of each epoch is faster, and so is convergence.
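A minimal sketch of what such a layer can look like, assuming a mean-field Gaussian posterior over the weights (the class name and initialisation values are illustrative, not the repository's src/ implementation):

```python
import torch
import torch.nn as nn

class LocalReparamLinear(nn.Module):
    """Mean-field Gaussian linear layer using the local reparametrisation trick."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.W_mu = nn.Parameter(torch.randn(n_in, n_out) * 0.1)
        self.W_logvar = nn.Parameter(torch.full((n_in, n_out), -6.0))  # small initial variance

    def forward(self, x):
        # Closed-form mean and variance of the pre-activations, then sample
        # the activations instead of the weights (bias omitted for brevity).
        act_mu = x @ self.W_mu
        act_var = (x ** 2) @ self.W_logvar.exp()
        return act_mu + act_var.sqrt() * torch.randn_like(act_mu)
```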

Train a model on MNIST:

python train_BayesByBackprop_MNIST.py --model Local_Reparam [--prior_sig [PRIOR_SIG]] [--epochs [EPOCHS]] [--lr [LR]] [--n_samples [N_SAMPLES]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

MC Dropout

(https://arxiv.org/abs/1506.02142)

A fixed dropout rate of 0.5 is set.
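At test time, dropout is left active and predictions are averaged over several stochastic forward passes. A minimal sketch of the predictive step (the model and sample count are placeholders):

```python
import torch

def mc_dropout_predict(model, x, n_samples=100):
    """Average softmax outputs over stochastic forward passes with dropout kept on."""
    model.train()  # keeps dropout active; fine for the fully connected models used here
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_samples)])
    return probs.mean(dim=0)
```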

Colab notebooks with regression models: MC Dropout homoscedastic / heteroscedastic

Train a model on MNIST:

python train_MCDropout_MNIST.py [--weight_decay [WEIGHT_DECAY]] [--epochs [EPOCHS]] [--lr [LR]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

For an explanation of the script's arguments:

python train_MCDropout_MNIST.py -h

Stochastic Gradient Langevin Dynamics (SGLD)

(https://www.ics.uci.edu/~welling/publications/papers/stoclangevin_v6.pdf)

In order to converge to the true posterior over w, the learning rate should be annealed according to the Robbins-Monro conditions. In practice, we use a fixed learning rate.
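Each SGLD step is a stochastic gradient step on the log posterior plus Gaussian noise whose scale is tied to the learning rate. A minimal sketch of one update, assuming loss.backward() has already been called on a loss that includes the prior term and minibatch rescaling:

```python
import torch

def sgld_step(params, lr):
    """One SGLD update: gradient step plus N(0, 2*lr) injected noise per parameter."""
    with torch.no_grad():
        for p in params:
            if p.grad is None:
                continue
            p.add_(p.grad, alpha=-lr)                        # gradient descent step
            p.add_(torch.randn_like(p) * (2.0 * lr) ** 0.5)  # Langevin noise
```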

Colab notebooks with regression models: SGLD homoscedastic / heteroscedastic

Train a model on MNIST:

python train_SGLD_MNIST.py [--use_preconditioning [USE_PRECONDITIONING]] [--prior_sig [PRIOR_SIG]] [--epochs [EPOCHS]] [--lr [LR]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

For an explanation of the script's arguments:

python train_SGLD_MNIST.py -h

pSGLD

(https://arxiv.org/abs/1512.07666)

SGLD with RMSprop preconditioning. A higher learning rate should be used than for vanilla SGLD.
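A sketch of the preconditioned update for a single parameter tensor: both the gradient step and the injected noise are scaled by an RMSprop-style diagonal preconditioner (the helper and its state handling are illustrative, not the repository's optimiser):

```python
import torch

def psgld_step(p, sq_avg, lr, alpha=0.99, eps=1e-5):
    """One pSGLD update; sq_avg is the running average of squared gradients."""
    with torch.no_grad():
        sq_avg.mul_(alpha).addcmul_(p.grad, p.grad, value=1 - alpha)
        precond = 1.0 / (sq_avg.sqrt() + eps)                       # diagonal preconditioner
        p.add_(precond * p.grad, alpha=-lr)                         # preconditioned gradient step
        p.add_(torch.randn_like(p) * (2.0 * lr * precond).sqrt())  # preconditioned noise
```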

Train a model on MNIST:

python train_SGLD_MNIST.py --use_preconditioning True [--prior_sig [PRIOR_SIG]] [--epochs [EPOCHS]] [--lr [LR]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

Bootstrap MAP Ensemble

Multiple networks are trained on subsamples of the dataset.
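A minimal sketch of the two ingredients, using hypothetical helpers: each ensemble member gets its own bootstrap subsample of the training data, and predictions are averaged over members at test time:

```python
import numpy as np
import torch
from torch.utils.data import DataLoader, Subset

def bootstrap_loaders(dataset, n_nets, subsample=1.0, batch_size=128):
    """One DataLoader per ensemble member, each over indices drawn with replacement."""
    n_draw = int(subsample * len(dataset))
    return [DataLoader(Subset(dataset, np.random.choice(len(dataset), n_draw).tolist()),
                       batch_size=batch_size, shuffle=True)
            for _ in range(n_nets)]

def ensemble_predict(models, x):
    """Average softmax outputs over all trained ensemble members."""
    with torch.no_grad():
        return torch.stack([torch.softmax(m(x), dim=1) for m in models]).mean(dim=0)
```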

Colab notebooks with regression models: MAP Ensemble homoscedastic / heteroscedastic

Train an ensemble on MNIST:

python train_Bootrap_Ensemble_MNIST.py [--weight_decay [WEIGHT_DECAY]] [--subsample [SUBSAMPLE]] [--n_nets [N_NETS]] [--epochs [EPOCHS]] [--lr [LR]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

For an explanation of the script's arguments:

python train_Bootrap_Ensemble_MNIST.py -h

Kronecker-Factorised Laplace

(https://openreview.net/pdf?id=Skdvd2xAZ)

Train a MAP network and then calculate a second-order Taylor series approximation to the curvature around a mode of the posterior. A block-diagonal Hessian approximation is used, where only intra-layer dependencies are accounted for. The Hessian is further approximated as the Kronecker product of the expectations of a single datapoint's Hessian factors. Approximating the Hessian can take a while; fortunately it only needs to be done once.

Train a MAP network on MNIST and approximate Hessian:

python train_KFLaplace_MNIST.py [--weight_decay [WEIGHT_DECAY]] [--hessian_diag_sig [HESSIAN_DIAG_SIG]] [--epochs [EPOCHS]] [--lr [LR]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

For an explanation of the script's arguments:

python train_KFLaplace_MNIST.py -h

Note that we save the unscaled and uninverted Hessian factors. This allows for computationally cheap changes to the prior at inference time, as the Hessian will not need to be re-computed. Inference will require inverting the approximated Hessian factors and sampling from a matrix normal distribution. This is shown in notebooks/KFAC_Laplace_MNIST.ipynb.
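A sketch of the sampling step for one layer, assuming its scaled and regularised Kronecker factors A (input second moments) and G (output curvature) have already been assembled; a posterior draw is the MAP weight plus the factors' inverse Cholesky factors applied to white noise (names are illustrative):

```python
import torch

def sample_kf_laplace_weights(W_map, A, G):
    """Draw W ~ MN(W_map, G^-1, A^-1) for one layer.

    A: (n_in, n_in) and G: (n_out, n_out) Kronecker factors, assumed symmetric
    positive definite (prior precision already added in).
    """
    La = torch.linalg.cholesky(A)    # A = La @ La.T
    Lg = torch.linalg.cholesky(G)    # G = Lg @ Lg.T
    E = torch.randn_like(W_map)      # white noise, shape (n_out, n_in)
    # Lg^-T @ E @ La^-1 has row covariance prop. to G^-1 and column covariance prop. to A^-1.
    return W_map + torch.linalg.inv(Lg).T @ E @ torch.linalg.inv(La)
```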

Stochastic Gradient Hamiltonian Monte Carlo

(https://arxiv.org/abs/1402.4102)

We implement the scale-adapted version of this algorithm, proposed here to find hyperparameters automatically during burn-in. We place a Gaussian prior over network weights and a Gamma hyperprior over the Gaussian's precision.
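With a Gamma hyperprior, the conditional posterior of the Gaussian prior's precision is again a Gamma distribution, so it can be resampled with a Gibbs step during burn-in. A minimal sketch, with the hyperprior parameters alpha and beta as placeholders:

```python
import torch

def resample_prior_precision(weights, alpha=1.0, beta=1.0):
    """Gibbs step: lambda | w ~ Gamma(alpha + n/2, beta + ||w||^2 / 2)."""
    n = sum(w.numel() for w in weights)
    sq_norm = sum((w ** 2).sum() for w in weights)
    return torch.distributions.Gamma(alpha + 0.5 * n, beta + 0.5 * sq_norm).sample()
```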

Run the SG-HMC-SA burn-in and sampler, saving weights to the specified file:

python train_SGHMC_MNIST.py [--epochs [EPOCHS]] [--sample_freq [SAMPLE_FREQ]] [--burn_in [BURN_IN]] [--lr [LR]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

For an explanation of the script's arguments:

python train_SGHMC_MNIST.py -h

Approximate Inference in Neural Networks

MAP inference provides a point estimate of parameter values. When provided with out-of-distribution inputs, such as rotated digits, these models tend to make wrong predictions with high confidence.

Uncertainty Decomposition

We can measure uncertainty in our models' predictions through predictive entropy. We can decompose this term in order to distinguish between two types of uncertainty. Uncertainty caused by noise in the data, or aleatoric uncertainty, can be quantified as the expected entropy of model predictions. Model uncertainty, or epistemic uncertainty, can be measured as the difference between total entropy and aleatoric entropy.
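Given Monte Carlo samples of the predictive probabilities, the decomposition is straightforward: total uncertainty is the entropy of the averaged prediction, aleatoric uncertainty is the average of the per-sample entropies, and their difference is the epistemic part. A sketch:

```python
import torch

def decompose_uncertainty(probs, eps=1e-12):
    """probs: (n_samples, batch, n_classes) MC samples of softmax outputs."""
    mean_probs = probs.mean(dim=0)
    total = -(mean_probs * (mean_probs + eps).log()).sum(dim=1)        # predictive entropy
    aleatoric = -(probs * (probs + eps).log()).sum(dim=2).mean(dim=0)  # expected entropy
    return total, aleatoric, total - aleatoric                         # epistemic = difference
```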

Results

Homoscedastic Regression

Toy homoscedastic regression task. Data is generated by a GP with an RBF kernel (l = 1, σn = 0.3). We use a single-output FC network with one hidden layer of 200 ReLU units to predict the regression mean μ(x). A fixed log σ is learnt separately.

Heteroscedastic Regression

Same scenario as previous section but log σ(x) is predicted from the input.

Toy heteroscedastic regression task. Data is generated by a GP with an RBF kernel (l = 1, σn = 0.3 · |x + 2|). We use a two-head network with 200 ReLU units to predict the regression mean μ(x) and log-standard deviation log σ(x).
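Both regression setups minimise the Gaussian negative log-likelihood; in the heteroscedastic case the second head predicts log σ(x), which weights the squared error per point. A minimal sketch of the loss (constant terms dropped):

```python
import torch

def heteroscedastic_nll(mu, log_sigma, y):
    """Gaussian NLL with input-dependent noise: 0.5 * ((y - mu) / sigma)^2 + log_sigma."""
    return (0.5 * ((y - mu) / log_sigma.exp()) ** 2 + log_sigma).mean()
```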

Regression on UCI datasets

We performed heteroscedastic regression on the six UCI datasets (housing, concrete, energy efficiency, power plant, red wine and yacht datasets), using 10-fold cross validation. All these experiments are contained in the heteroscedastic notebooks. Note that results depend heavily on hyperparameter selection. Plots below show log-likelihoods and RMSEs on the train (semi-transparent colour) and test (solid colour) sets. Circles and error bars correspond to the 10-fold cross validation mean and standard deviations respectively.

MNIST Classification

W is marginalised with 100 samples of the weights for all models except MAP, where only one set of weights is used.

MNIST Test          Log Like    Error %
MAP                 -572.9      1.58
MAP Ensemble        -496.54     1.53
BBP Gaussian        -1100.29    2.60
BBP GMM             -1008.28    2.38
BBP Laplace         -892.85     2.28
BBP Local Reparam   -1086.43    2.61
MC Dropout          -435.458    1.37
SGLD                -828.29     1.76
pSGLD               -661.25     1.76

MNIST test results for methods under consideration. Extensive hyperparameter tuning has not been performed. We approximate the posterior predictive distribution with 100 MC samples. We use an FC network with two 1200-unit ReLU layers. If unspecified, the prior is Gaussian with std=0.1. pSGLD uses RMSprop preconditioning.

The original paper for Bayes By Backprop reports around 1% error on MNIST. We find that this result is attainable only if approximate posterior variances are initialised to be very small (BBP Gauss 2). In this scenario, the distributions over weights resemble deltas, giving good predictive performance but bad uncertainty estimates. However, when initialising the variances to match the prior (BBP Gauss 1), we obtain the above results. The training curves for both of these hyperparameter configuration schemes are shown below:

MNIST Uncertainty

Total, aleatoric and epistemic uncertainties obtained when creating OOD samples by augmenting the MNIST test set with rotations:

Total and epistemic uncertainties obtained by testing our models, which have been trained on MNIST, on the KMNIST dataset:

Adversarial robustness

Total, aleatoric and epistemic uncertainties obtained when feeding our models adversarial samples generated with FGSM:
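FGSM perturbs each input in the direction of the sign of the loss gradient with respect to that input; a minimal sketch (epsilon and the cross-entropy loss are illustrative choices):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=0.1):
    """Fast Gradient Sign Method: x_adv = x + epsilon * sign(dL/dx)."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()  # also populates model grads; zero them if training
    return (x + epsilon * x.grad.sign()).detach()
```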

Weight Distributions

Histograms of weights sampled from each model trained on MNIST. We draw 10 samples of w for each model.

Weight Pruning

#TODO
