This is the repository for the Uncertainty Quantification 360 (UQ360) toolkit.

Overview

UQ360


The Uncertainty Quantification 360 (UQ360) toolkit is an open-source Python package that provides a diverse set of algorithms to quantify uncertainty, as well as capabilities to measure and improve UQ to streamline the development process. We provide a taxonomy and guidance for choosing these capabilities based on the user's needs. Further, UQ360 makes the communication method of UQ an integral part of development choices in an AI lifecycle. Developers can make a user-centered choice by following the psychology-based guidance on communicating UQ estimates, from concise descriptions to detailed visualizations.

The UQ360 interactive experience provides a gentle introduction to the concepts and capabilities by walking through an example use case. The tutorials and example notebooks offer a deeper, data scientist-oriented introduction. The complete API is also available.

We have developed the package with extensibility in mind. This library is still in development. We encourage you to contribute your uncertainty estimation algorithms, metrics, and applications. To get started as a contributor, please join the #uq360-users or #uq360-developers channel of the AIF360 Community on Slack by requesting an invitation here.

Supported Uncertainty Evaluation Metrics

The toolbox provides several standard calibration metrics for classification and regression tasks. For classification models, these include the Expected Calibration Error (Naeini et al., 2015) and the Brier score (Brier, 1950), among others. Regression metrics include the Prediction Interval Coverage Probability (PICP) (Chatfield, 1993) and the Mean Prediction Interval Width (MPIW) (Shrestha and Solomatine, 2006), among others. The toolbox also provides a novel operating-point-agnostic approach to assessing prediction uncertainty estimates called the Uncertainty Characteristic Curve (UCC) (Navratil et al., 2021). Several metrics and diagnostic tools, such as the reliability diagram (DeGroot and Fienberg, 1983) and risk-vs-rejection-rate curves (El-Yaniv et al., 2010), are provided; these also support analysis by sub-groups in the population to study the fairness implications of acting on given uncertainty estimates.
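
As a concrete illustration of two of the regression metrics above, the short sketch below computes PICP and MPIW directly with NumPy from a set of prediction intervals. It is a conceptual example only and does not use the toolkit's own metric functions; the array values are made up.

import numpy as np

def picp(y_true, y_lower, y_upper):
    # Prediction Interval Coverage Probability: fraction of targets that fall
    # inside their predicted interval [y_lower, y_upper].
    return np.mean((y_true >= y_lower) & (y_true <= y_upper))

def mpiw(y_lower, y_upper):
    # Mean Prediction Interval Width: average width of the predicted intervals.
    return np.mean(y_upper - y_lower)

# Toy data: true targets and prediction intervals from some hypothetical model.
y_true  = np.array([2.1, 0.4, 3.3, 1.8])
y_lower = np.array([1.5, 0.0, 2.0, 2.0])
y_upper = np.array([2.5, 1.0, 3.0, 2.6])

print(picp(y_true, y_lower, y_upper))  # 0.5 -> half of the targets are covered
print(mpiw(y_lower, y_upper))          # 0.9 -> average interval width

A well-calibrated interval predictor achieves a PICP close to its nominal coverage level (e.g., 0.95 for 95% intervals) while keeping the MPIW as small as possible.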

Supported Uncertainty Estimation Algorithms

UQ algorithms can be broadly classified as intrinsic or extrinsic, depending on how the uncertainties are obtained from the AI models. Intrinsic methods encompass models that inherently provide an uncertainty estimate along with their predictions. The toolkit includes algorithms in this category such as variational Bayesian neural networks (BNNs) (Blundell et al., 2015), Gaussian processes (Rasmussen and Williams, 2006), quantile regression (Koenker and Bassett, 1978), and heteroscedastic/homoscedastic neural networks (Wakefield, 2013). The toolkit also includes Horseshoe BNNs (Ghosh et al., 2019), which use sparsity-promoting priors and can lead to better-calibrated uncertainties, especially in the small-data regime. An Infinitesimal Jackknife (IJ) based algorithm (Ghosh et al., 2020), provided in the toolkit, is a perturbation-based approach that performs uncertainty quantification by estimating model parameters under different perturbations of the original data. Crucially, here the estimation only requires the model to be trained once, on the unperturbed dataset. For models that do not have an inherent notion of uncertainty built into them, extrinsic methods are employed to extract uncertainties post hoc. The toolkit provides meta-models (Chen et al., 2019) that can be used to generate reliable confidence measures (in classification) and prediction intervals (in regression), and to predict performance metrics such as accuracy on unseen and unlabeled data. For pre-trained models that capture uncertainties to some degree, the toolbox provides extrinsic algorithms that can improve the quality of the uncertainty estimates. These include isotonic regression (Zadrozny and Elkan, 2001), Platt scaling (Platt, 1999), auxiliary interval predictors (Thiagarajan et al., 2020), and UCC-Recalibration (Navratil et al., 2021).
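
To make the extrinsic, post-hoc idea concrete, the sketch below recalibrates an already-trained classifier with Platt scaling and isotonic regression using scikit-learn. It only illustrates the recalibration concept on assumed toy data; it is not the toolkit's own API.

# Conceptual sketch of post-hoc (extrinsic) calibration with scikit-learn,
# not with UQ360's own classes.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base = LogisticRegression(max_iter=1000)

# method="sigmoid" is Platt scaling; method="isotonic" is isotonic regression.
platt = CalibratedClassifierCV(base, method="sigmoid", cv=5).fit(X_train, y_train)
iso   = CalibratedClassifierCV(base, method="isotonic", cv=5).fit(X_train, y_train)

# Recalibrated class probabilities to use as confidence measures downstream.
p_platt = platt.predict_proba(X_test)[:, 1]
p_iso   = iso.predict_proba(X_test)[:, 1]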

Setup

Supported Configurations:

OS       Python version
macOS    3.7
Ubuntu   3.7
Windows  3.7

(Optional) Create a virtual environment

A virtual environment manager is strongly recommended to ensure dependencies can be installed safely. If you have trouble installing the toolkit, try this first.

Conda

Conda is recommended for all configurations though Virtualenv is generally interchangeable for our purposes. Miniconda is sufficient (see the difference between Anaconda and Miniconda if you are curious) and can be installed from here if you do not already have it.

Then, to create a new Python 3.7 environment, run:

conda create --name uq360 python=3.7
conda activate uq360

The shell should now look like (uq360) $. To deactivate the environment, run:

(uq360)$ conda deactivate

The prompt will return to $ or (base)$.

Note: Older versions of conda may use source activate uq360 and source deactivate (activate uq360 and deactivate on Windows).

Installation

Clone the latest version of this repository:

(uq360)$ git clone https://github.com/IBM/UQ360

If you'd like to run the examples and tutorial notebooks, download the datasets now and place them in their respective folders as described in uq360/datasets/data/README.md.

Then, navigate to the root directory of the project, which contains the setup.py file, and run:

(uq360)$ pip install -e .
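
To confirm that the install succeeded, you can check that the package imports cleanly:

(uq360)$ python -c "import uq360"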

PIP Installation of Uncertainty Quantification 360

If you would like to quickly start using the UQ360 toolkit without cloning this repository, you can install the uq360 PyPI package as follows:

(your environment)$ pip install uq360

If you follow this approach, you may need to download the notebooks in the examples folder separately.

Using UQ360

The examples directory contains a diverse collection of Jupyter notebooks that use UQ360 in various ways. Both the example and tutorial notebooks illustrate working code using the toolkit. The tutorials provide additional discussion that walks the user through the various steps of the notebook. See the details about tutorials and examples here.
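
Most estimators in the toolkit follow a familiar fit/predict pattern in which predict returns a mean prediction together with lower and upper prediction-interval bounds. The sketch below shows that pattern for the quantile-regression estimator; the module path, class name, and config keys are assumptions based on the example notebooks, so please consult the API documentation for the exact signatures.

# A rough sketch of the common fit/predict pattern (names assumed; see the API docs).
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from uq360.algorithms.quantile_regression import QuantileRegression  # assumed module path

X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

config = {"n_estimators": 100, "max_depth": 3, "learning_rate": 0.05}  # assumed keys
model = QuantileRegression(model_type="gbr", config=config)
model.fit(X_train, y_train)

# predict is expected to return the mean prediction and an interval per test point.
y_mean, y_lower, y_upper = model.predict(X_test)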

Citing UQ360

A technical description of UQ360 is available in this paper. Below is the BibTeX entry for the paper.

@misc{uq360-june-2021,
      title={Uncertainty Quantification 360: A Holistic Toolkit for Quantifying 
      and Communicating the Uncertainty of AI}, 
      author={Soumya Ghosh and Q. Vera Liao and Karthikeyan Natesan Ramamurthy 
      and Jiri Navratil and Prasanna Sattigeri 
      and Kush R. Varshney and Yunfeng Zhang},
      year={2021},
      eprint={2106.01410},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}

Acknowledgements

UQ360 is built with the help of several open-source packages, all of which are listed in setup.py.

License Information

Please view the LICENSE file in the root directory for license information.

Owner
International Business Machines