Uncertainty Toolbox

A python toolbox for predictive uncertainty quantification, calibration, metrics, and visualization.
Also: a glossary of useful terms and a collection of relevant papers and references.

Website, Tutorials, and Docs

Overview

Many machine learning methods return predictions along with uncertainties of some form, such as distributions or confidence intervals. This raises the questions: How do we determine which predictive uncertainties are best? What does it mean to produce a best or ideal uncertainty? Are our uncertainties accurate and well calibrated?

Uncertainty Toolbox provides standard metrics to quantify and compare predictive uncertainty estimates, gives intuition for these metrics, produces visualizations of these metrics/uncertainties, and implements simple "re-calibration" procedures to improve these uncertainties. This toolbox currently focuses on regression tasks.

Toolbox Contents

Uncertainty Toolbox contains:

  • Glossary of terms related to predictive uncertainty quantification.
  • Metrics for assessing quality of predictive uncertainty estimates.
  • Visualizations for predictive uncertainty estimates and metrics.
  • Recalibration methods for improving the calibration of a predictor.
  • Paper list: publications and references on relevant methods and metrics.

Installation

Uncertainty Toolbox requires Python 3.6+. For a lightweight installation of the package only, run:

pip install git+https://github.com/uncertainty-toolbox/uncertainty-toolbox

For a full installation with examples and tests, run:

git clone https://github.com/uncertainty-toolbox/uncertainty-toolbox.git
cd uncertainty-toolbox
pip install -e .

To verify correct installation, you can run the test suite via:

source shell/run_all_tests.sh

Quick Start

import uncertainty_toolbox as uct

# Load an example dataset of 100 predictions, uncertainties, and ground truth values
predictions, predictions_std, y, x = uct.data.synthetic_sine_heteroscedastic(100)

# Compute all uncertainty metrics
metrics = uct.metrics.get_all_metrics(predictions, predictions_std, y)

This example computes metrics for a vector of predicted values (predictions) and associated uncertainties (predictions_std, a vector of standard deviations), taken with respect to a corresponding set of ground truth values y.

Colab notebook: You can also take a look at this Colab notebook, which walks through a use case of Uncertainty Toolbox.

Metrics

Uncertainty Toolbox provides a number of metrics to quantify and compare predictive uncertainty estimates. For example, the get_all_metrics function will return:

  1. average calibration: mean absolute calibration error, root mean squared calibration error, miscalibration area.
  2. adversarial group calibration: mean absolute adversarial group calibration error, root mean squared adversarial group calibration error.
  3. sharpness: expected standard deviation.
  4. proper scoring rules: negative log-likelihood, continuous ranked probability score, check score, interval score.
  5. accuracy: mean absolute error, root mean squared error, median absolute error, coefficient of determination, correlation.
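
Each of these is also available as an individual function. Below is a rough sketch of calling a few of them directly; the module paths and function names (e.g. uct.metrics_calibration.mean_absolute_calibration_error) are assumptions based on the metric list above and the module names that appear elsewhere on this page, so check the API reference for the exact signatures.

import uncertainty_toolbox as uct

# Synthetic data, as in the Quick Start
predictions, predictions_std, y, x = uct.data.synthetic_sine_heteroscedastic(100)

# Average calibration metrics (assumed names)
mace = uct.metrics_calibration.mean_absolute_calibration_error(predictions, predictions_std, y)
rmsce = uct.metrics_calibration.root_mean_squared_calibration_error(predictions, predictions_std, y)
miscal_area = uct.metrics_calibration.miscalibration_area(predictions, predictions_std, y)

# A proper scoring rule (assumed name)
nll = uct.metrics_scoring_rule.nll_gaussian(predictions, predictions_std, y)

print(f"MACE: {mace:.3f}, RMSCE: {rmsce:.3f}, MA: {miscal_area:.3f}, NLL: {nll:.3f}")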

Visualizations

The following plots are a few of the visualizations provided by Uncertainty Toolbox. See this example for code to reproduce these plots.
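
As a rough sketch of how the plotting functions are typically called (plot_xy and plot_calibration are referenced elsewhere on this page, but keyword arguments such as ax are assumptions; consult the linked example for canonical usage):

import matplotlib.pyplot as plt
import uncertainty_toolbox as uct

predictions, predictions_std, y, x = uct.data.synthetic_sine_heteroscedastic(100)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
# Predictions and uncertainty bands against the observed data
uct.viz.plot_xy(predictions, predictions_std, y, x, ax=axes[0])
# Average calibration curve: observed vs. expected proportions in each interval
uct.viz.plot_calibration(predictions, predictions_std, y, ax=axes[1])
plt.show()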

Overconfident (too little uncertainty)

Underconfident (too much uncertainty)

Well calibrated

And here are a few of the calibration metrics for the above three cases:

                     MACE       RMSCE      MA
  Overconfident      0.19429    0.21753    0.19625
  Underconfident     0.20692    0.23003    0.20901
  Well calibrated    0.00862    0.01040    0.00865

(MACE = mean absolute calibration error, RMSCE = root mean squared calibration error, MA = miscalibration area)

Recalibration

The following plots show the results of a recalibration procedure provided by Uncertainty Toolbox, which transforms a set of predictive uncertainties to improve average calibration. The algorithm is based on isotonic regression, as proposed by Kuleshov et al.

See this example for code to reproduce these plots.
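
For intuition, the core of the Kuleshov-style procedure can be sketched in a few lines with scikit-learn's IsotonicRegression. This is a minimal illustration of the idea under a Gaussian assumption, not the toolbox's own recalibration API:

import numpy as np
from scipy import stats
from sklearn.isotonic import IsotonicRegression

def fit_recalibrator(pred_mean, pred_std, y):
    """Fit an isotonic map from predicted to empirically observed probabilities."""
    # Predicted CDF value of each observation under the model's Gaussian
    pred_props = stats.norm.cdf(y, loc=pred_mean, scale=pred_std)
    # Empirical fraction of observations whose predicted CDF value is <= p
    exp_props = np.sort(pred_props)
    obs_props = np.arange(1, len(exp_props) + 1) / len(exp_props)
    return IsotonicRegression(out_of_bounds="clip").fit(exp_props, obs_props)

# Usage sketch: compose the fitted map with the model's CDF on new predictions,
# e.g. calibrated_props = recalibrator.predict(stats.norm.cdf(y_new, mu_new, std_new))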

Recalibrating overconfident predictions

                          MACE       RMSCE      MA
  Before recalibration    0.19429    0.21753    0.19625
  After recalibration     0.01124    0.02591    0.01117

Recalibrating underconfident predictions

                          MACE       RMSCE      MA
  Before recalibration    0.20692    0.23003    0.20901
  After recalibration     0.00157    0.00205    0.00132

Contributing

We welcome and greatly appreciate contributions from the community! Please see our contributing guidelines for details on how to help out.

Citation

If you found this toolbox helpful, please cite the following paper:

@article{chung2021uncertainty,
  title={Uncertainty Toolbox: an Open-Source Library for Assessing, Visualizing, and Improving Uncertainty Quantification},
  author={Chung, Youngseog and Char, Ian and Guo, Han and Schneider, Jeff and Neiswanger, Willie},
  journal={arXiv preprint arXiv:2109.10254},
  year={2021}
}

Additionally, here are papers that led to the development of the toolbox:

@article{chung2020beyond,
  title={Beyond Pinball Loss: Quantile Methods for Calibrated Uncertainty Quantification},
  author={Chung, Youngseog and Neiswanger, Willie and Char, Ian and Schneider, Jeff},
  journal={arXiv preprint arXiv:2011.09588},
  year={2020}
}

@article{tran2020methods,
  title={Methods for comparing uncertainty quantifications for material property predictions},
  author={Tran, Kevin and Neiswanger, Willie and Yoon, Junwoong and Zhang, Qingyang and Xing, Eric and Ulissi, Zachary W},
  journal={Machine Learning: Science and Technology},
  volume={1},
  number={2},
  pages={025006},
  year={2020},
  publisher={IOP Publishing}
}

Acknowledgments

Development of Uncertainty Toolbox is supported by the following organizations.

Comments
  • Use flit, conda-souschef, and grayskull to make PyPI/Anaconda uploads straightforward.

    #59 #58 Right now, I have a version of uncertainty_toolbox uploaded to PyPI and Anaconda.

    pip install uncertainty_toolbox
    
    conda install -c sgbaird uncertainty_toolbox
    

    The basic instructions (after some one-time setup) are to install flit (e.g. conda install flit), update the version number in uncertainty_toolbox/__init__.py and run:

    flit publish
    

    to upload a new version to PyPI. I probably need to add other people's usernames to PyPI so I'm not the only one that can upload new versions.

    For the Anaconda upload, install conda-souschef and grayskull, run a slightly customized script, and build it within the scratch folder.

    conda install conda-souschef grayskull
    python run_grayskull.py
    cd scratch
    conda build .
    

    For this, you probably have to set a few things with conda first, such as automatic uploads when building, credentials, and configuring it to look in certain channels. These are all one-time setup instructions. I also have some GitHub workflow code in mat_discover that can take care of the uploads (and testing) automatically when you make a new release. Just need to change a couple lines and add credentials to GitHub secrets.

    opened by sgbaird 7
  • Added more UTs for util.py for dev branch

    New added UTs consider:

    • function calls with no arguments for both assert_is_flat_same_shape() and assert_is_positive()

    • new UTs for assert_is_positive() function

    • based on assert_is_positive() description, a UT with 2D ndarray was included too.

    @willieneis @YoungseogChung @IanChar @HanGuo97

    Test execution output: [image]

    opened by marcemq 5
  • Using this package on machine learning results

    Hi,

    Thanks for making this package available to us! I have a simple question:

    I have a data set split into train/validation sets. A regressor (or classifier) is trained, and then by using the validation set I get an estimate Y_prediction. From the same validation set, I have the Y_true. So, how would you suggest computing the standard deviation on predictions from a regressor (or classifier)?

    Many thanks,

    Ivan

    opened by ivan-marroquin 4
  • Installation on google colab

    Hello, I have tried to install the package in Google Colab according to your instructions. It does not work. Would you help me?

    Thanks for the great package

    opened by dara1400 4
  • uncertainty quantification of a neural network

    Hi, I have a trained neural network. How can I use the Uncertainty Toolbox to quantify the uncertainty of the neural network? How should I calculate predictions_std (a vector of standard deviations)?

    opened by admodadmod 3
  • Added more UTs for util.py

    New added UTs consider:

    • function calls with no arguments for both assert_is_flat_same_shape() and assert_is_positive()
    • new UTs for assert_is_positive() function
    • based on assert_is_positive() description, a UT with 2D ndarray was included too.

    @willieneis @YoungseogChung @IanChar @HanGuo97

    Test execution output: [image]

    opened by marcemq 2
  • Convert from symmetric confidence intervals to standard deviation

    I'm incorporating some code into another repository and wanted to check with some people who (very likely) have a more thorough statistics background than I do.

    Conversion from standard deviation to confidence intervals takes place in: https://github.com/uncertainty-toolbox/uncertainty-toolbox/blob/b2f342f6606d1d667bf9583919a663adf8643efe/uncertainty_toolbox/metrics_scoring_rule.py#L187

    What is the inverse conversion from symmetric confidence intervals back to standard deviation? (e.g. using scipy.stats.norm but doesn't have to be)

    question 
    opened by sgbaird 2
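
    For what it's worth, under a Gaussian assumption the inverse mapping just divides the interval half-width by the same z-score used in the forward direction. A minimal sketch (assuming a symmetric, central interval at confidence level C):

    from scipy import stats

    def std_from_symmetric_interval(lower, upper, confidence=0.95):
        """Recover a Gaussian standard deviation from a symmetric central interval."""
        z = stats.norm.ppf(0.5 + confidence / 2.0)  # e.g. about 1.96 for a 95% interval
        half_width = (upper - lower) / 2.0
        return half_width / z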
  • "layman" definition of interval score using width and coverage

    I'm trying to figure out how to describe the math behind the interval score in a simple way. Based on reading the paper and related literature, I get that it incorporates width and coverage. Is the idea that it penalizes the following two scenarios:

    • wide intervals
    • when many predictions fall outside of the intervals

    In other words, the interval should be as narrow as possible while "crossing the parity line" as frequently as possible, correct? How does this play out in the math?

    opened by sgbaird 2
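
    For reference, the standard Gneiting-Raftery interval score for a central (1 - alpha) interval [l, u] and an observation y makes exactly those two penalties explicit: score = (u - l) + (2/alpha)(l - y) if y < l, plus (2/alpha)(y - u) if y > u. A minimal numpy sketch of that textbook formula (the toolbox's implementation may differ in details, e.g. by averaging over several coverage levels):

    import numpy as np

    def interval_score(lower, upper, y, alpha):
        """Gneiting-Raftery interval score for a central (1 - alpha) prediction interval."""
        width = upper - lower                               # penalizes wide intervals
        below = (2.0 / alpha) * np.maximum(lower - y, 0.0)  # penalizes observations below the interval
        above = (2.0 / alpha) * np.maximum(y - upper, 0.0)  # penalizes observations above the interval
        return np.mean(width + below + above)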
  • MACE vs ECE

    Hello, thanks for sharing your code!

    I noticed that you implement the metric "MACE", while I was more familiar with the term "ECE". Is there a difference between the two?

    opened by tfjgeorge 2
  • Quantile Regression

    Hello!

    Thanks for this convenient tool! I was wondering how you would suggest using it for quantile regression. I see that a lot of functions take in a predicted mean and std. However, I thought it would also be nice if there were additional support for quantiles, not only mean + std. Is there a way to work with what exists now so that I could use the quantile intervals I get instead?

    opened by velezbeltran 1
  • [Feature Request] Option for bivariate distribution plots in lieu of scatter plots

    When dealing with "large" datasets of over 100 data points, scatter plots can become cluttered. Additionally, it's difficult to really distinguish points that are on top of each other. So a cluster of 1000 data points could look the same as a cluster of 100, but they mean very different things.

    Seaborn's bivariate displots can get around this issue by showing a density of points rather than each individual point. It'd be nice to have the option to make such plots for, say, plot_xy or plot_residuals_vs_stdevs.

    I acknowledge this would be a messy request, but it'd be a feature I'd use.

    opened by KevinTran-TRI 1
  • Make requirements even more lightweight.

    Some packages currently listed in the regular requirements are not actually needed there.

    1. Shapely is no longer needed. The imports in several files first need to be removed, and plot_calibration (in viz.py) needs to use the new calibration calculation code.
    2. Black and pytest can be moved to the dev requirements instead of the regular requirements.
    opened by IanChar 0
  • Pytorch GPU Acceleration

    @tailintalent found that many of the evaluation metrics can be greatly sped up with GPU support via PyTorch. This code is currently in an experimental branch called "torch". We will keep it separate from the main package for now. In order for it to be merged into main, we will need the following:

    • The code relying on torch needs to be separated out because we do not want to make torch a mandatory install. This needs to be done in the most robust way possible, ideally separating by pinpointing the exact operations where torch provides a speedup rather than making a torch version of the pre-existing code.
    • Set up an optional installation that allows users to select the torch-enabled version of the toolbox.
    • Add options to the functions that allow users to select the torch version of a metric rather than the numpy version (a rough sketch of one possible pattern follows this thread).
    enhancement 
    opened by IanChar 1
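
    A rough sketch of one possible pattern for an opt-in torch code path; the function and flag names here are hypothetical and purely illustrative of the bullet points above:

    import numpy as np

    try:
        import torch
        HAS_TORCH = True
    except ImportError:
        HAS_TORCH = False

    def mean_squared_error(y_pred, y_true, use_torch=False):
        """Toy metric with an optional torch backend; numpy stays the default."""
        if use_torch:
            if not HAS_TORCH:
                raise ImportError("The torch-enabled path requires an optional install of torch.")
            pred = torch.as_tensor(y_pred, dtype=torch.float64)
            true = torch.as_tensor(y_true, dtype=torch.float64)
            return torch.mean((pred - true) ** 2).item()
        return float(np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2))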
  • Should the axes on the calibration curves be switched?

    I've had at least two different folks come to me very confused about the shapes of calibration curves. After a few minutes of discussion, it turned out the confusion arose from how we plot them: observed values are on the y-axis and predicted values are on the x-axis. This is the opposite of conventional parity plots, where observations are on the x-axis and predictions are on the y-axis.

    What do folks think about flipping the axes? I know this would be different from the original Kuleshov paper's graphs (and also the paper that @willieneis and I worked on together), but part of me would rather plot what folks expect, to minimize future confusion.

    opened by KevinTran-TRI 2
  • `Could not find lib geos_c.dll or load any of its variants []` when trying to import uncertainty_toolbox (shapely issue)

    After running:

    pip install git+https://github.com/uncertainty-toolbox/uncertainty-toolbox
    

    on Windows 10, VS Code, inside of conda environment, I get:

    Could not find lib geos_c.dll or load any of its variants [].
      File "C:\Users\sterg\miniconda3\envs\vickers-hardness\Lib\site-packages\shapely\geos.py", line 54, in load_dll
        raise OSError(
      File "C:\Users\sterg\miniconda3\envs\vickers-hardness\Lib\site-packages\shapely\geos.py", line 178, in <module>
        _lgeos = load_dll("geos_c.dll")
      File "C:\Users\sterg\miniconda3\envs\vickers-hardness\Lib\site-packages\shapely\coords.py", line 10, in <module>
        from shapely.geos import lgeos
      File "C:\Users\sterg\miniconda3\envs\vickers-hardness\Lib\site-packages\shapely\geometry\base.py", line 20, in <module>
        from shapely.coords import CoordinateSequence
      File "C:\Users\sterg\miniconda3\envs\vickers-hardness\Lib\site-packages\shapely\geometry\__init__.py", line 4, in <module>
        from .base import CAP_STYLE, JOIN_STYLE
      File "C:\Users\sterg\miniconda3\envs\vickers-hardness\Lib\site-packages\uncertainty_toolbox\metrics_calibration.py", line 10, in <module>
        from shapely.geometry import Polygon, LineString
      File "C:\Users\sterg\miniconda3\envs\vickers-hardness\Lib\site-packages\uncertainty_toolbox\metrics.py", line 9, in <module>
        from uncertainty_toolbox.metrics_calibration import (
      File "C:\Users\sterg\miniconda3\envs\vickers-hardness\Lib\site-packages\uncertainty_toolbox\__init__.py", line 9, in <module>
        from .metrics import (
      File "C:\Users\sterg\Documents\GitHub\sparks-baird\VickersHardnessPrediction\hv_prediction.py", line 21, in <module>
        import uncertainty_toolbox as uct
    

    I worked around it by installing shapely from conda (https://stackoverflow.com/questions/56813083/oserror-could-not-find-geos-c-dll-or-load-any-of-its-variants):

    conda install shapely
    

    For some reason (not sure if this would be the same for everyone), I needed to uninstall and reinstall it one more time:

    conda uninstall shapely
    conda install shapely
    

    and it seems to work OK now.

    opened by sgbaird 0
Releases (v0.1.0)
  • v0.1.0 (Sep 22, 2021)

    Initial release v0.1.0 for Uncertainty Toolbox.

    Highlights

    • Metrics for assessing quality of predictive uncertainty estimates.
    • Visualizations for predictive uncertainty estimates and metrics.
    • Recalibration methods for improving the calibration of a predictor.
    • Website with a tutorial on how to use Uncertainty Toolbox.
    • Documentation and API reference for Uncertainty Toolbox.
    • Glossary of terms related to predictive uncertainty quantification.
    • Publications and references on relevant methods and metrics.