PyCEbox: Python Individual Conditional Expectation Plot Toolbox

Overview

[Figure: an individual conditional expectation plot]

A Python implementation of individual conditional expectation plots inspired by R's ICEbox. Individual conditional expectation plots were introduced in Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation (arXiv:1309.6392).

Quickstart

pycebox is available on PyPI and can be installed with pip install pycebox.

The tutorial recreates the first example in the above paper using pycebox.
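
In outline, the workflow is: fit a model, compute the ICE curves for one feature with ice(), and plot them with ice_plot(). The sketch below illustrates this on synthetic data; the column names, the RandomForestRegressor, and the exact argument order of ice() are assumptions for illustration rather than a definitive reference, so consult the documentation for the authoritative signatures.

  import numpy as np
  import pandas as pd
  import matplotlib.pyplot as plt
  from sklearn.ensemble import RandomForestRegressor
  from pycebox.ice import ice, ice_plot

  # Toy regression data; the column names are illustrative only.
  rng = np.random.RandomState(0)
  X = pd.DataFrame({'x1': rng.normal(size=200), 'x2': rng.normal(size=200)})
  y = X['x1'] * X['x2'] + rng.normal(scale=0.1, size=200)

  clf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

  # One ICE curve per row of X, varying x2 while holding the other columns fixed.
  ice_df = ice(X, 'x2', clf.predict)

  fig, ice_ax = plt.subplots()
  ice_plot(ice_df, frac_to_plot=1., plot_pdp=True, ax=ice_ax)
  ice_ax.set_xlabel('$x_2$')
  ice_ax.set_ylabel('Predicted value')
  plt.show()

Here frac_to_plot=1. draws every curve; fractional values subsample the curves (see the frac_to_plot discussion in the comments below).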

Development

For easy development and prototyping using IPython notebooks, a Docker environment is included. To run an IPython notebook with access to your development version of pycebox, run PORT=8889 sh ./start_container.sh. A Jupyter notebook server with access to your development version of pycebox should be available at http://localhost:8889/tree.

To run pycebox's tests in your development container:

  1. Access a bash shell on the container with docker exec -it pycebox bash.
  2. Change to the pycebox directory with cd ../pycebox
  3. Run the tests with pytest test/test.py

Documentation

For details of pycebox's API, consult the documentation.

License

This library is distributed under the MIT License.

Comments
  • Typo in ice_plot() regarding _get_quantiles()

    There is a typo in the ice_plot() function when calling the _get_quantiles() function. In lines 124 and 137, ice_plot() calls __get_quantiles() (which is undefined) instead of _get_quantiles(), which raises an error when trying to use quantiles or center the ICE curves.

    opened by savvastj 6
  • Using predicted probabilities for binary classification

    Is there any way to give some form of predict_proba function to the ice() function in order to see the probability as opposed to the prediction?

    Thanks! Nema

    opened by nemasobhani 1
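
    One possible answer to the question in this thread, sketched under the assumption that ice() accepts any callable mapping the predictor data to a one-dimensional array of outputs (the signatures visible in the tracebacks further down suggest this, but it is an assumption rather than documented behaviour): wrap predict_proba so that only the positive-class probability is returned. The data, column names, and LogisticRegression model below are purely illustrative.

      import numpy as np
      import pandas as pd
      from sklearn.linear_model import LogisticRegression
      from pycebox.ice import ice

      # Toy binary-classification data; names are illustrative only.
      rng = np.random.RandomState(0)
      X = pd.DataFrame({'x1': rng.normal(size=200), 'x2': rng.normal(size=200)})
      y = (X['x1'] + X['x2'] + rng.normal(scale=0.5, size=200) > 0).astype(int)

      clf = LogisticRegression().fit(X, y)

      # Wrap predict_proba so that ice() receives an ordinary prediction
      # function returning P(y = 1) per row; the ICE curves then show
      # predicted probabilities instead of hard class labels.
      ice_probs = ice(X, 'x2', lambda df: clf.predict_proba(df)[:, 1])
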
  • Plot mistake

    There is a problem in the visualization part. When I try to plot the graph from the example, I get the following error:


    TypeError                                 Traceback (most recent call last)
    <ipython-input> in <module>()
         12 ice_plot(ice_df, frac_to_plot=0.1,
         13          color_by='x3', cmap=PuOr,
    ---> 14          ax=ice_ax);
         15
         16 ice_ax.set_xlabel('$X_2$');

    C:\ProgramData\Anaconda3\lib\site-packages\pycebox\ice.py in ice_plot(ice_data, frac_to_plot, plot_points, point_kwargs, x_quantile, plot_pdp, centered, centered_quantile, color_by, cmap, ax, pdp_kwargs, **kwargs)
        128     if frac_to_plot < 1.:
        129         n_cols = ice_data.shape[1]
    --> 130         icols = np.random.choice(n_cols, size=frac_to_plot * n_cols, replace=False)
        131         plot_ice_data = ice_data.iloc[:, icols]
        132     else:

    mtrand.pyx in mtrand.RandomState.choice()

    TypeError: 'float' object cannot be interpreted as an integer

    opened by karakol15 4
  • "frac_to_plot" parameter in ice_plot

    Hey Austin,

    This package rocks, thanks for publishing it!

    I have a question and a potential small bug in the ice_plot method, specifically on the "frac_to_plot" parameter.

    It is my understanding that you simply take the fraction, multiply it by the number of columns, and pass the result to the "size" parameter of np.random.choice(). I think we should make sure that the number being passed is an integer, not a float, since np.random.choice() will not accept a float for "size".

    Current: icols = np.random.choice(n_cols, size=frac_to_plot * n_cols, replace=False)

    Fix: icols = np.random.choice(n_cols, size=int(frac_to_plot * n_cols), replace=False)

    Best, Andrew

    opened by andrew-cho 1
  • Extended use to classification models, fixed typecast bug

    • Extended use to classification models by allowing predict_proba to be passed to the ice_plot function.
    • Fixed the TypeError raised by np.random.choice when size is not an integer

    opened by sanjifr3 0
  • Averaging ICE plots across multiple runs/folds of a model

    Hi Austin,

    I was wondering if it is possible to average across multiple runs/folds of the same model.

    I am trying at the moment, but the resulting ICE plots do not make sense. The per-run plots look sensible, but when I average them across both runs and folds the result comes out garbled.

    Cheers,

    Dan

    opened by danieltudosiu 0
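
    One reading of this issue: elementwise averaging of ICE curves across runs or folds only makes sense when every call to ice() is made with the same predictor DataFrame, so that all of the resulting frames share one grid (the index) and one set of curves (the columns, one per observation); averaging frames built on different grids or different rows will produce nonsense. A minimal sketch under that assumption, with an illustrative model and synthetic data:

      import numpy as np
      import pandas as pd
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import KFold
      from pycebox.ice import ice

      # Toy data; the column names and the model are illustrative only.
      rng = np.random.RandomState(0)
      X = pd.DataFrame({'x1': rng.normal(size=300), 'x2': rng.normal(size=300)})
      y = X['x1'] * X['x2'] + rng.normal(scale=0.1, size=300)

      # Fit one model per fold, but evaluate every model's ICE curves on the
      # SAME DataFrame X, so each frame has an identical index and columns.
      ice_dfs = []
      for train_idx, _ in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
          model = RandomForestRegressor(n_estimators=50, random_state=0)
          model.fit(X.iloc[train_idx], y.iloc[train_idx])
          ice_dfs.append(ice(X, 'x2', model.predict))

      # Elementwise mean across folds; valid only because the frames are
      # perfectly aligned. Misaligned indexes would silently introduce NaNs.
      ice_mean = sum(ice_dfs) / len(ice_dfs)
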
Releases

0.0.1

Owner

Austin Rochford, Chief Data Scientist @ Kibo Commerce, recovering mathematician, enthusiastic Bayesian
A library for debugging/inspecting machine learning classifiers and explaining their predictions

ELI5 ELI5 is a Python package which helps to debug machine learning classifiers and explain their predictions. It provides support for the following m

2.6k Dec 30, 2022
Python Library for Model Interpretation/Explanations

Skater Skater is a unified framework to enable Model Interpretation for all forms of model to help one build an Interpretable machine learning system

Oracle 1k Dec 27, 2022
An intuitive library to add plotting functionality to scikit-learn objects.

Welcome to Scikit-plot Single line functions for detailed visualizations The quickest and easiest way to go from analysis... ...to this. Scikit-plot i

Reiichiro Nakano 2.3k Dec 31, 2022
Python implementation of R package breakDown

pyBreakDown Python implementation of breakDown package (https://github.com/pbiecek/breakDown). Docs: https://pybreakdown.readthedocs.io. Requirements

MI^2 DataLab 41 Mar 17, 2022
Summary Explorer is a tool to visually explore the state-of-the-art in text summarization.

Webis 42 Aug 14, 2022
Code for visualizing the loss landscape of neural nets

Visualizing the Loss Landscape of Neural Nets This repository contains the PyTorch code for the paper Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer

Tom Goldstein 2.2k Dec 30, 2022
Lime: Explaining the predictions of any machine learning classifier

lime This project is about explaining what machine learning classifiers (or models) are doing. At the moment, we support explaining individual predict

Marco Tulio Correia Ribeiro 10.3k Jan 01, 2023
Bias and Fairness Audit Toolkit

The Bias and Fairness Audit Toolkit Aequitas is an open-source bias audit toolkit for data scientists, machine learning researchers, and policymakers

Data Science for Social Good 513 Jan 06, 2023
Lucid library adapted for PyTorch

Lucent PyTorch + Lucid = Lucent The wonderful Lucid library adapted for the wonderful PyTorch! Lucent is not affiliated with Lucid or OpenAI's Clarity

Lim Swee Kiat 520 Dec 26, 2022
Model analysis tools for TensorFlow

TensorFlow Model Analysis TensorFlow Model Analysis (TFMA) is a library for evaluating TensorFlow models. It allows users to evaluate their models on

1.2k Dec 26, 2022
Quickly and easily create / train a custom DeepDream model

Dream-Creator This project aims to simplify the process of creating a custom DeepDream model by using pretrained GoogleNet models and custom image dat

56 Jan 03, 2023
A data-driven approach to quantify the value of classifiers in a machine learning ensemble.

Documentation | External Resources | Research Paper Shapley is a Python library for evaluating binary classifiers in a machine learning ensemble. The

Benedek Rozemberczki 187 Dec 27, 2022
Interpretability and explainability of data and machine learning models

AI Explainability 360 (v0.2.1) The AI Explainability 360 toolkit is an open-source library that supports interpretability and explainability of datase

1.2k Dec 29, 2022
GNNLens2 is an interactive visualization tool for graph neural networks (GNN).

Distributed (Deep) Machine Learning Community 143 Jan 07, 2023
Pytorch implementation of convolutional neural network visualization techniques

Convolutional Neural Network Visualizations This repository contains a number of convolutional neural network visualization techniques implemented in

Utku Ozbulak 7k Jan 03, 2023
A collection of infrastructure and tools for research in neural network interpretability.

Lucid Lucid is a collection of infrastructure and tools for research in neural network interpretability. We're not currently supporting tensorflow 2!

4.5k Jan 07, 2023
Visualizer for neural network, deep learning, and machine learning models

Netron is a viewer for neural network, deep learning and machine learning models. Netron supports ONNX (.onnx, .pb, .pbtxt), Keras (.h5, .keras), Tens

Lutz Roeder 20.9k Dec 28, 2022
Visualization Toolbox for Long Short Term Memory networks (LSTMs)

Hendrik Strobelt 1.1k Jan 04, 2023
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)

Hierarchical neural-net interpretations (ACD) 🧠 Produces hierarchical interpretations for a single prediction made by a pytorch neural network. Offic

Chandan Singh 111 Jan 03, 2023
Convolutional neural network visualization techniques implemented in PyTorch.

This repository contains a number of convolutional neural network visualization techniques implemented in PyTorch.

1 Nov 06, 2021