Confidence intervals for scikit-learn forest algorithms

Overview

forest-confidence-interval: Confidence intervals for Forest algorithms

[Badges: Travis build status, Coveralls coverage status, CircleCI status, JOSS paper status]

Forest algorithms are powerful ensemble methods for classification and regression. However, predictions from these algorithms do contain some amount of error. Prediction variability can illustrate how influential the training set is for producing the observed random forest predictions.

forest-confidence-interval is a Python module that adds estimates of variance and confidence intervals to the random forest regression and classification objects implemented in scikit-learn. The core functions calculate the in-bag matrix and error bars for random forest objects.

Compatible with Python 2.7 and Python 3.6

This module is based on R code from Stefan Wager (see Important Links below) and is licensed under the MIT open source license (see LICENSE).

Important Links

scikit-learn - http://scikit-learn.org/

Stefan Wager's randomForestCI - https://github.com/swager/randomForestCI (deprecated in favor of grf: https://github.com/swager/grf)

Installation and Usage

Before installing the module, you will need numpy, scipy, and scikit-learn. Dependencies associated with these modules may require root privileges to install. Consult the API Reference for documentation on core functionality:

pip install numpy scipy scikit-learn

You can also install the dependencies with:

pip install -r requirements.txt

To install forest-confidence-interval execute:

pip install forestci

or, if you are installing from the source code:

python setup.py install

If you would like to install the development version of the software, use:

pip install git+git://github.com/scikit-learn-contrib/forest-confidence-interval.git

Why use forest-confidence-interval?

Our software is designed for individuals using scikit-learn random forest objects who want to add estimates of uncertainty to random forest predictors. Prediction variability demonstrates how much the training set influences results and is important for estimating standard errors. forest-confidence-interval is a Python module that calculates variance and adds confidence intervals on top of the popular Python library scikit-learn. The software is compatible with both scikit-learn random forest regression and classification objects.
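
The snippet below is a minimal sketch of typical usage on a regression task. It uses a built-in scikit-learn dataset purely for illustration; the split and forest settings are arbitrary choices, not package requirements:

import numpy as np
import forestci as fci
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# any regression data will do; the diabetes set is just a stand-in
X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

forest = RandomForestRegressor(n_estimators=1000)
forest.fit(X_train, y_train)
y_hat = forest.predict(X_test)

# unbiased variance estimate for each test-set prediction
V_IJ_unbiased = fci.random_forest_error(forest, X_train, X_test)

# error bars at roughly one standard deviation
y_err = np.sqrt(V_IJ_unbiased)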

Examples

The examples (gallery below) demonstrate the package functionality with random forest classifiers and regression models. The regression example uses a popular UCI Machine Learning data set on cars, while the classifier example simulates how to add measurements of uncertainty to tasks like predicting spam emails.

Examples gallery

Contributing

Contributions are very welcome, but we ask that contributors abide by the contributor covenant.

To report issues with the software, please post to the issue log after verifying that the issue does not already exist. Comments on existing issues are also welcome.

Please submit improvements as pull requests against the repo after verifying that the existing tests pass and any new code is well covered by unit tests. Please write code that complies with the Python style guide, PEP8.

E-mail Ariel Rokem, Kivan Polimis, or Bryna Hazelton if you have any questions, suggestions or feedback.

Testing

Testing requires the nose package. Tests are located in the forestci/tests folder and can be run with the nosetests command in the main directory.
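
For example (from the repository root):

pip install nose
nosetests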

Citation

Click on the JOSS status badge for the Journal of Open Source Software article on this project. The BibTeX citation for the JOSS article is below:

@article{polimisconfidence,
  title={Confidence Intervals for Random Forests in Python},
  author={Polimis, Kivan and Rokem, Ariel and Hazelton, Bryna},
  journal={Journal of Open Source Software},
  volume={2},
  number={1},
  year={2017}
}
Comments
  • ENH: Allow forestci to work on general Bagging estimators

    Resolves #99

    This PR adds functionality to forestci.py to inspect the "forest" estimator to see if it is a random forest (i.e. inherits from BaseForest) or a bagging estimator (i.e. inherits from BaseBagging). There are some differences in the private attributes of these classes so the distinction is necessary. When the estimator is a random forest, all of the existing code applies. When it inherits from BaseBagging, we use the .estimators_samples_ attribute for the calc_inbag function. And when calibrating inside random_forest_error, it is also necessary to randomly permute the _seeds array attribute of new_forest. I've also added some tests for these new features.
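    As a rough illustration of the dispatch described above, here is a sketch (it relies on assumptions about scikit-learn's private class layout, and is not the PR's actual diff):

    import numpy as np
    from sklearn.ensemble._bagging import BaseBagging
    from sklearn.ensemble._forest import BaseForest, _generate_sample_indices

    def calc_inbag_sketch(n_samples, forest):
        # Count how often each training sample appears in each estimator's bag.
        inbag = np.zeros((n_samples, len(forest.estimators_)))
        if isinstance(forest, BaseBagging):
            # Bagging estimators expose the drawn indices directly.
            for t, idx in enumerate(forest.estimators_samples_):
                inbag[:, t] = np.bincount(idx, minlength=n_samples)
        elif isinstance(forest, BaseForest):
            # Random forests regenerate the indices from each tree's seed.
            for t, tree in enumerate(forest.estimators_):
                idx = _generate_sample_indices(tree.random_state,
                                               n_samples, n_samples)
                inbag[:, t] = np.bincount(idx, minlength=n_samples)
        return inbag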

    I believe this PR makes forestci work well with general bagging estimators. However, I would greatly appreciate it if @arokem, @kpolimis, @bhazelton could check my work here. Most importantly, is this sensible? I think I've made the APIs compatible but am I making a mistake in applying Wager's method to general bagging methods (and not exclusively to random forests)?

    opened by richford 7
  • Bug memory kws

    Just tried out this package, looks like a great implementation.

    I ran this on a large dataset (much bigger than memory) and ran into the following problem that the keywords were not being passed along. Was there a reason for this?

    If not, small fix is in this PR.

    opened by owlas 7
  • negative V_IJ_unbiased

    Hi,

    first of all, great work, this is a great tool! I have a couple of questions based on issues I've encountered when playing with the package. Apologies if these reveal my misunderstanding rather than an actual issue with the coding.

    1. When running the confidence interval calculation on a forest I trained, I encounter negative values of the unbiased variances. Additionally, the more trees my forest has, the more of these negative values appear. Could there be some kind of bias overcorrection?

    2. The _bias_correction function in the module calculates an n_var parameter that it then applies to the bias correction vector. However, no such expression appears in Eqn. (7) of Wager et al. (2014) (reproduced after this list), according to which the bias correction should be n_train_samples * boot_var / n_trees (using the variable names from the package code). Where does n_var come from?

    3. I don't see any parameter regulating the number of bootstrap draws. Even though O(n) draws should be enough to take care of the Monte Carlo noise, it should still be possible to control this somehow. If I change the n_samples parameter, it clashes with the pred matrix, which is fixed to the number of trees in the forest. How do I regulate the number of draws?

    4. In fact, if I'm reading the paper right, the idea is to look at how the predictions from the individual trees change when using different bootstrap samples of the original data. That doesn't seem to be what the package is doing: it uses predictions from a single forest on a set of test data instead of predictions from multiple forests on a single new sample. Where is my understanding wrong?
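
    For reference, my reading of Eqn. (7) in Wager et al. (2014) is

    \hat{V}^{B}_{IJ-U}(x) = \hat{V}^{B}_{IJ}(x) - \frac{n}{B^{2}} \sum_{b=1}^{B} \left( t^{*}_{b}(x) - \bar{t}^{*}(x) \right)^{2}

    where n is the number of training samples and B the number of trees, so the subtracted term equals n_train_samples * boot_var / n_trees when boot_var is the empirical variance of the per-tree predictions.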

    Thanks and again, let me know if what I'm asking is off-topic for here.

    Ondrej

    opened by ondrejiayc 7
  • MRG: Calibration with empirical Bayes.

    This is the work of hzhao16 from #48, but without some large data files that got added into the history along the way. It also adds several PEP8 fixes and more comprehensive testing.

    This extends and supersedes #48

    opened by arokem 6
  • Not compatible with SKLearn version 0.22.1

    A newer version of scikit-learn modified _generate_sample_indices() to require an additional n_samples_bootstrap argument, so the current version of the code raises TypeError: _generate_sample_indices() missing 1 required positional argument: 'n_samples_bootstrap' when running fci.random_forest_error(mpg_forest, mpg_X_train, mpg_X_test).
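
    A version-tolerant wrapper along these lines may work around this (a sketch only; _generate_sample_indices is a private scikit-learn helper whose module path and signature have both changed across releases):

    try:
        # scikit-learn >= 0.22: private module, extra n_samples_bootstrap argument
        from sklearn.ensemble._forest import _generate_sample_indices

        def sample_indices(random_state, n_samples):
            return _generate_sample_indices(random_state, n_samples, n_samples)
    except ImportError:
        # older releases expose a two-argument version under the public path
        from sklearn.ensemble.forest import _generate_sample_indices as _gsi

        def sample_indices(random_state, n_samples):
            return _gsi(random_state, n_samples)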

    opened by csanadpoda 4
  • Usage in practical application

    Hi,

    Firstly, thanks for the amazing work! I just have a question about how we are supposed to use the error bars, specifically for RandomForestClassifier. The example only uses the result for plotting ...

    Thanks, and I look forward to hearing from you.

    opened by JIAZHEN 4
  • Error running plot_mpg notebook

    I ran the plot_mpg notebook code:

    
    # Regression Forest Example
    import numpy as np
    from matplotlib import pyplot as plt
    from sklearn.ensemble import RandomForestRegressor
    import sklearn.cross_validation as xval
    from sklearn.datasets.mldata import fetch_mldata
    import forestci as fci
    
    # retrieve mpg data from machine learning library
    mpg_data = fetch_mldata('mpg')
    
    # separate mpg data into predictors and outcome variable
    mpg_X = mpg_data["data"]
    mpg_y = mpg_data["target"]
    
    # split mpg data into training and test set
    mpg_X_train, mpg_X_test, mpg_y_train, mpg_y_test = xval.train_test_split(
                                                       mpg_X, mpg_y,
                                                       test_size=0.25,
                                                       random_state=42
                                                       )
    
    # create RandomForestRegressor
    n_trees = 2000
    mpg_forest = RandomForestRegressor(n_estimators=n_trees, random_state=42)
    mpg_forest.fit(mpg_X_train, mpg_y_train)
    mpg_y_hat = mpg_forest.predict(mpg_X_test)
    
    # calculate inbag and unbiased variance
    mpg_inbag = fci.calc_inbag(mpg_X_train.shape[0], mpg_forest)
    mpg_V_IJ_unbiased = fci.random_forest_error(mpg_forest, mpg_X_train,
                                                mpg_X_test)
    
    # Plot error bars for predicted MPG using unbiased variance
    plt.errorbar(mpg_y_test, mpg_y_hat, yerr=np.sqrt(mpg_V_IJ_unbiased), fmt='o')
    plt.plot([5, 45], [5, 45], '--')
    plt.xlabel('Reported MPG')
    plt.ylabel('Predicted MPG')
    plt.show()
    

    and got the following error:

    TypeError                                 Traceback (most recent call last)
    <ipython-input-2-a0d96d55b892> in <module>()
         30 mpg_inbag = fci.calc_inbag(mpg_X_train.shape[0], mpg_forest)
         31 mpg_V_IJ_unbiased = fci.random_forest_error(mpg_forest, mpg_X_train,
    ---> 32                                             mpg_X_test)
         33 
         34 # Plot error bars for predicted MPG using unbiased variance
    
    TypeError: random_forest_error() missing 1 required positional argument: 'X_test'
    
    My environment is Anaconda python 4.3.1.
    
    Charles
    
    opened by CBrauer 4
  • Receiving strange TypeError

    I have the following code:

    df = pd.read_csv('data.csv', header=0, engine='c')
    mat = df.as_matrix()
    X = mat[:, 1:]
    X_train, X_test = train_test_split(X, test_size = 0.2)
    variance = forestci.random_forest_error(model, X_train, X_test)
    

    When I run it, it throws the error TypeError: random_forest_error() takes exactly 4 arguments (3 given).

    However, there are only three non-optional arguments listed in the documentation. If I add a fourth argument for inbag, I then get an error saying that inbag is defined twice. Any ideas of what's causing this? I'm happy to write a PR if you point me towards the cause.

    opened by finbarrtimbers 4
  • Handle MultiOutput model

    Hi, I suggest this modification to handle with multi-output estimators. This will solve Issue https://github.com/scikit-learn-contrib/forest-confidence-interval/issues/54, i.e., the oldest open issue on this repo!

    Scikit-learn's RandomForestRegressor can automatically switch to a MultiOutput model if y_train contains multiple targets. However, forest-confidence-interval cannot handle these.

    One solution would be to compute and return a 2-dim array with the variance for each target, for each sample. However, this would break some backward compatibility (because it would make sense to return a 2-d (1, N) array even with one target) and, more importantly, it would require an extensive check of all the tensor operations. What I propose here is an extra input, y_output (int), telling the program which output to use. This may not be the most efficient solution, as there is some redundancy in running random_forest_error() once per output... but it is very intuitive to understand, fully backward-compatible, and a simple modification.
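
    Hypothetical usage under this proposal (the y_output keyword is what this PR adds; it is not part of any released version):

    import forestci as fci

    # one variance array per target, computed one output at a time
    variances = [
        fci.random_forest_error(forest, X_train, X_test, y_output=k)
        for k in range(y_train.shape[1])
    ]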

    Thanks again for this nice project, to which I'm happy to contribute for the second time. I hope this gets merged soon.

    Daniele

    opened by danieleongari 3
  • Warning: sklearn.ensemble.forest module is deprecated in version 0.22

    Hi, when I use forestci, which is great, I get the following warning, which is harmless for now:

    The sklearn.ensemble.forest module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.ensemble. Anything that cannot be imported from sklearn.ensemble is now part of the private API.

    It might hit us in the future

    opened by sq5rix 3
  • Error with `random_forest_error`

    Submitting an error report here, just for record purposes.

    With the following line:

    pred_error = fci.random_forest_error(clf, X_train=X_train, X_test=X_test, inbag=None)
    

    I get the following error:

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <ipython-input-60-79c18cb1c841> in <module>()
    ----> 1 pred_error = fci.random_forest_error(clf, X_train=X_train, X_test=X_test, inbag=None)
    
    ~/anaconda/envs/targetpred/lib/python3.6/site-packages/forestci/forestci.py in random_forest_error(forest, inbag, X_train, X_test)
        115     pred_centered = pred - pred_mean
        116     n_trees = forest.n_estimators
    --> 117     V_IJ = _core_computation(X_train, X_test, inbag, pred_centered, n_trees)
        118     V_IJ_unbiased = _bias_correction(V_IJ, inbag, pred_centered, n_trees)
        119     return V_IJ_unbiased
    
    ~/anaconda/envs/targetpred/lib/python3.6/site-packages/forestci/forestci.py in _core_computation(X_train, X_test, inbag, pred_centered, n_trees)
         57 
         58     for t_idx in range(n_trees):
    ---> 59         inbag_r = (inbag[:, t_idx] - 1).reshape(-1, 1)
         60         pred_c_r = pred_centered.T[t_idx].reshape(1, -1)
         61         cov_hat += np.dot(inbag_r, pred_c_r) / n_trees
    
    TypeError: 'NoneType' object is not subscriptable
    

    I am using version 0.1.0, installed from pip.

    I think a new release is required; after inspecting the source code, I see that inbag is no longer a required keyword argument (contrary to what my installed version says), and that inbag=None is handled correctly in the GitHub version (contrary to how my installed version behaves).
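
    In the meantime, installing the development version as described in the README above should pick up the corrected behavior:

    pip install git+git://github.com/scikit-learn-contrib/forest-confidence-interval.git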

    opened by ericmjl 3
  • New Release

    Hello, would it be possible to create a new release to include #111 in a release version? :)
    I'm not a fan of having to pull git versions of packages.
    Thank you!

    opened by DasCapschen 1
  • Array dimensions incorrect for confidence intervals

    Hi,

    I'm trying to create error estimates and am using RandomForestRegressor with bootstrapping enabled. I am using data with dimensions:

    x_test: [10, 13], x_train: [90, 13], y_test: [10, 2], y_train: [90, 2]

    I then generate errors using:

    y_error = fci.random_forest_error(self.model, self.x_train, self.x_test)
    
    

    However I get the error:

    Generating point estimates...
    [Parallel(n_jobs=4)]: Using backend ThreadingBackend with 4 concurrent workers.
    [Parallel(n_jobs=4)]: Done  33 tasks      | elapsed:    0.0s
    [Parallel(n_jobs=4)]: Done 100 out of 100 | elapsed:    0.0s finished
    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    /tmp/ipykernel_2626600/1096083143.py in <module>
    ----> 1 point_estimates = model.point_estimate(save_estimates=True, make_plots=False)
          2 print(point_estimates)
    
    /scratch/wiay/lara/galpro/galpro/model.py in point_estimate(self, save_estimates, make_plots)
        158         # Use the model to make predictions on new objects
        159         y_pred = self.model.predict(self.x_test)
    --> 160         y_error = fci.random_forest_error(self.model, self.x_train, self.x_test)
        161 
        162         # Update class variables
    
    ~/.local/lib/python3.7/site-packages/forestci/forestci.py in random_forest_error(forest, X_train, X_test, inbag, calibrate, memory_constrained, memory_limit)
        279     n_trees = forest.n_estimators
        280     V_IJ = _core_computation(
    --> 281         X_train, X_test, inbag, pred_centered, n_trees, memory_constrained, memory_limit
        282     )
        283     V_IJ_unbiased = _bias_correction(V_IJ, inbag, pred_centered, n_trees)
    
    ~/.local/lib/python3.7/site-packages/forestci/forestci.py in _core_computation(X_train, X_test, inbag, pred_centered, n_trees, memory_constrained, memory_limit, test_mode)
        135     """
        136     if not memory_constrained:
    --> 137         return np.sum((np.dot(inbag - 1, pred_centered.T) / n_trees) ** 2, 0)
        138 
        139     if not memory_limit:
    
    <__array_function__ internals> in dot(*args, **kwargs)
    
    ValueError: shapes (90,100) and (100,10,2) not aligned: 100 (dim 1) != 10 (dim 1)
    

    Does anyone have any idea what is going wrong here? Thanks!

    opened by ljaniurek 1
  • Benchmarking confidence intervals

    For my dataset, I tried correlating the CIs to absolute error on the test set, and didn't find a relationship. I do get a relationship if I use the standard deviation of the predictions from individual decision trees. Do you see this with other datasets?
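
    For concreteness, the per-tree alternative I am describing is a sketch like this (forest is assumed to be a fitted RandomForestRegressor):

    import numpy as np

    # spread of the individual trees' predictions, one value per test sample
    per_tree = np.stack([tree.predict(X_test) for tree in forest.estimators_])
    tree_std = per_tree.std(axis=0)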

    opened by cyrusmaher 1
  • Can this package be adapted to perform Thompson sampling?

    I’m looking at using random forest regressors to perform hyperparameter tuning in a Bayesian optimization setup. While you can use the upper confidence bound to explore your state space, Thompson sampling performs better and eliminates the need for tuning the hyper-hyperparameter of the confidence interval used for selection. One solution is to obtain an empirical Bayesian posterior by training many random forest regressors on bootstrapped data, but this seems like overkill (ensembles of ensembles!). I would appreciate any input on the subject, thank you! (For more discussion, see this review of using CART decision trees to pull off the goal: https://arxiv.org/pdf/1706.04687.pdf)
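
    For concreteness, here is a sketch of what I have in mind, treating each prediction and forestci's variance estimate as a rough Gaussian posterior (an assumption on my part, not an established use of the package):

    import numpy as np
    import forestci as fci

    mu = forest.predict(X_candidates)
    var = fci.random_forest_error(forest, X_train, X_candidates)

    # one posterior draw per candidate; pick the most promising point
    draws = np.random.normal(mu, np.sqrt(np.maximum(var, 0)))
    next_point = X_candidates[np.argmax(draws)]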

    opened by douglasmason 0
  • Sum taken over wrong axis

    Hi there,

    I believe the centered predictions are being computed incorrectly. Line 278 in forestci.py takes the average over the predictions, as opposed to the trees. The resulting shape of pred_mean is (forest.n_estimators,) when it should be (X_test.shape[0],). See below:

    https://github.com/scikit-learn-contrib/forest-confidence-interval/blob/6d2a9c285b96bd415ad5ed03f37e517740a47fa2/forestci/forestci.py#L278
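
    A sketch of the fix I would expect (assuming pred has shape (n_test_samples, n_estimators)):

    import numpy as np

    # average across trees (axis 1), leaving one mean per test sample
    pred_mean = np.mean(pred, axis=1, keepdims=True)
    pred_centered = pred - pred_mean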

    Thanks for the great package otherwise! :)

    opened by bchugg 2
  • ValueError on multiple output problems

    Training set is of the form (n_training_samples, n_features) = (14175, 34). Testing set is of the form (n_testing_samples, n_features) = (4725, 34). Running forestci.random_forest_error(randomFor, X_train, X_test) yields the following error:

    ValueError                                Traceback (most recent call last)
    in <module>
         21 print(X_test.shape)
         22 mpg_V_IJ_unbiased = forestci.random_forest_error(randomFor, X_train,
    ---> 23                                                  X_test)
         24 hat = randomFor.predict(X_test)
         25 print(' The score for is {}'.format(score[-13::]))

    ~\Anaconda3\lib\site-packages\forestci\forestci.py in random_forest_error(forest, X_train, X_test, inbag, calibrate, memory_constrained, memory_limit)
        241 n_trees = forest.n_estimators
        242 V_IJ = _core_computation(X_train, X_test, inbag, pred_centered, n_trees,
    --> 243                          memory_constrained, memory_limit)
        244 V_IJ_unbiased = _bias_correction(V_IJ, inbag, pred_centered, n_trees)
        245

    ~\Anaconda3\lib\site-packages\forestci\forestci.py in _core_computation(X_train, X_test, inbag, pred_centered, n_trees, memory_constrained, memory_limit, test_mode)
        110 """
        111 if not memory_constrained:
    --> 112     return np.sum((np.dot(inbag - 1, pred_centered.T) / n_trees) ** 2, 0)
        113
        114 if not memory_limit:

    <__array_function__ internals> in dot(*args, **kwargs)

    ValueError: shapes (14175,700) and (700,4725,2) not aligned: 700 (dim 1) != 4725 (dim 1)

    opened by IguanasInPyjamas 1
Releases

Latest release: 0.6