hgboost - Hyperoptimized Gradient Boosting

Overview


Star it if you like it!

hgboost is short for Hyperoptimized Gradient Boosting and is a Python package for hyperparameter optimization of xgboost, catboost and lightboost using cross-validation, with evaluation of the results on an independent validation set. hgboost can be applied to both classification and regression tasks.

hgboost is fun because:

* 1. Hyperoptimization of the parameter space using a Bayesian approach.
* 2. Determines the best scoring model(s) using k-fold cross-validation.
* 3. Evaluates the best model on an independent evaluation set.
* 4. Fits the model on the entire input data using the best parameters.
* 5. Works for classification and regression.
* 6. Creates a super-hyperoptimized model by ensembling all individually optimized models.
* 7. Returns the model, search space, and test/evaluation results.
* 8. Makes insightful plots.

Documentation

Regression example: Open the regression example in Colab

Classification example: Open the classification example in Colab

Schematic overview of hgboost

Installation Environment

  • Install hgboost from PyPI (recommended). hgboost is compatible with Python 3.6+ and runs on Linux, macOS and Windows.
  • A new environment is recommended and can be created as follows:
conda create -n env_hgboost python=3.6
conda activate env_hgboost

Install the latest version of hgboost from PyPI

pip install hgboost

Force an update to the latest version

pip install -U hgboost

Install directly from the GitHub source

pip install git+https://github.com/erdogant/hgboost#egg=master

Import hgboost package

import hgboost as hgboost
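
To verify the installation, the package version can be printed; a minimal check that assumes hgboost defines a __version__ attribute, as most PyPI packages do:

# With hgboost imported as above, print the installed version
# (assumes a __version__ attribute is available)
print(hgboost.__version__)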

Classification example for xgboost, catboost and lightboost:

# Load library
from hgboost import hgboost

# Initialization
hgb = hgboost(max_eval=10, threshold=0.5, cv=5, test_size=0.2, val_size=0.2, top_cv_evals=10, random_state=42)
# Import data
df = hgb.import_example()
y = df['Survived'].values
y = y.astype(str)
y[y=='1']='survived'
y[y=='0']='dead'

# Preprocessing by encoding variables
del df['Survived']
X = hgb.preprocessing(df)
# Fit catboost by hyperoptimization and cross-validation
results = hgb.catboost(X, y, pos_label='survived')

# Fit lightboost by hyperoptimization and cross-validation
results = hgb.lightboost(X, y, pos_label='survived')

# Fit xgboost by hyperoptimization and cross-validation
results = hgb.xgboost(X, y, pos_label='survived')

# [hgboost] >Start hgboost classification..
# [hgboost] >Collecting xgb_clf parameters.
# [hgboost] >Number of variables in search space is [11], loss function: [auc].
# [hgboost] >method: xgb_clf
# [hgboost] >eval_metric: auc
# [hgboost] >greater_is_better: True
# [hgboost] >pos_label: True
# [hgboost] >Total dataset: (891, 204) 
# [hgboost] >Hyperparameter optimization..
#  100% |----| 500/500 [04:39<05:21,  1.33s/trial, best loss: -0.8800619834710744]
# [hgboost] >Best performing [xgb_clf] model: auc=0.881198
# [hgboost] >5-fold cross validation for the top 10 scoring models, Total nr. tests: 50
# 100%|██████████| 10/10 [00:42<00:00,  4.27s/it]
# [hgboost] >Evaluate best [xgb_clf] model on independent validation dataset (179 samples, 20.00%).
# [hgboost] >[auc] on independent validation dataset: -0.832
# [hgboost] >Retrain [xgb_clf] on the entire dataset with the optimal parameters settings.
# Plot searched parameter space 
hgb.plot_params()

# Plot summary results
hgb.plot()

# Plot the best tree
hgb.treeplot()

# Plot the validation results
hgb.plot_validation()

# Plot the cross-validation results
hgb.plot_cv()

# use the learned model to make new predictions.
y_pred, y_proba = hgb.predict(X)
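
The fit methods return a results dictionary. Below is a minimal sketch of inspecting it; the keys 'params' and 'model' follow the API documentation and the issue threads further down, but the exact keys may differ per hgboost version:

# Inspect the best hyperparameters and the retrained model (keys assumed per the docs)
print(results['params'])
print(results['model'])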

Create ensemble model for Classification

from hgboost import hgboost

hgb = hgboost(max_eval=100, threshold=0.5, cv=5, test_size=0.2, val_size=0.2, top_cv_evals=10, random_state=None, verbose=3)

# Import data
df = hgb.import_example()
y = df['Survived'].values
del df['Survived']
X = hgb.preprocessing(df, verbose=0)

results = hgb.ensemble(X, y, pos_label=1)

# use the predictor
y_pred, y_proba = hgb.predict(X)
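
The predictions can be compared against the known labels with standard scikit-learn metrics. A minimal sketch, assuming scikit-learn is available and that y_pred contains class labels comparable to y; note that this is computed on the training data and is therefore optimistic:

from sklearn.metrics import accuracy_score

# Accuracy of the ensemble predictions on the (training) data
print('Accuracy: %.3f' % accuracy_score(y, y_pred))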

Create ensemble model for Regression

import numpy as np
from hgboost import hgboost

hgb = hgboost(max_eval=100, threshold=0.5, cv=5, test_size=0.2, val_size=0.2, top_cv_evals=10, random_state=None, verbose=3)

# Import data
df = hgb.import_example()
y = df['Age'].values
del df['Age']
I = ~np.isnan(y)
X = hgb.preprocessing(df, verbose=0)
X = X.loc[I,:]
y = y[I]

results = hgb.ensemble(X, y, methods=['xgb_reg','ctb_reg','lgb_reg'])

# use the predictor
y_pred, y_proba = hgb.predict(X)
# Plot the ensemble regression validation results
hgb.plot_validation()
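
For regression, the quality of the predictions can be summarized with, for example, the RMSE. A minimal sketch using scikit-learn; again computed on the training data and therefore optimistic:

import numpy as np
from sklearn.metrics import mean_squared_error

# RMSE of the ensemble predictions on the (training) data
rmse = np.sqrt(mean_squared_error(y, y_pred))
print('RMSE: %.3f' % rmse)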

References

* http://hyperopt.github.io/hyperopt/
* https://github.com/dmlc/xgboost
* https://github.com/microsoft/LightGBM
* https://github.com/catboost/catboost

Maintainers

Contribute

  • Contributions are welcome.

License

See LICENSE for details.

Coffee

  • If you wish to buy me a coffee for this work, it is very much appreciated :)
Comments
  • import error during import hgboost

    When I finished the installation of hgboost and tried to import it, something went wrong. Could you please help me out? Details are as follows:

    ImportError                               Traceback (most recent call last)
    ----> 1 from hgboost import hgboost

    C:\ProgramData\Anaconda3\lib\site-packages\hgboost\__init__.py
    ----> 1 from hgboost.hgboost import hgboost
          3 from hgboost.hgboost import (
          4     import_example,
          5 )

    C:\ProgramData\Anaconda3\lib\site-packages\hgboost\hgboost.py
          9 import classeval as cle
         10 from df2onehot import df2onehot
    ---> 11 import treeplot as tree
         12 import colourmap

    C:\ProgramData\Anaconda3\lib\site-packages\treeplot\__init__.py
    ----> 1 from treeplot.treeplot import (
          2     plot,
          3     randomforest,
          4     xgboost,
          5     lgbm,

    C:\ProgramData\Anaconda3\lib\site-packages\treeplot\treeplot.py
         14 import numpy as np
         15 from sklearn.tree import export_graphviz
    ---> 16 from sklearn.tree.export import export_text
         17 from subprocess import call
         18 import matplotlib.image as mpimg

    ImportError: cannot import name 'export_text' from 'sklearn.tree.export'

    thanks a lot!

    opened by recherHE 3
  • Test:Validation:Train split

    Shouldn't the new train/test split be test_size=self.test_size/(1-self.val_size) in def _HPOpt(self)? The shape of X was already updated in _set_validation_set(self, X, y); see the worked example at the end of this issue list.

    I'm assuming that the test, train, and validation set ratios are defined on the original data.

    opened by SSLPP 3
  • Treeplot failure - missing graphviz dependency

    I'm running through the example classification notebook now, and the treeplot fails to render with a warning (screenshot omitted).

    It seems that graphviz, being a compiled C library, is not bundled with the pip package (it is included when installing treeplot/graphviz via conda, though).

    Since there is no way to add this to the pip requirements, maybe add a sentence to the Installation instructions warning that graphviz must already be available and/or installed separately.

    (Note that the suggested apt command for Linux is not strictly necessary, because pydot does get installed with treeplot via pip.)

    opened by ninjit 2
  • Getting the native model for compatibility with shap.TreeExplainer

    Hello, first of all, really nice project. I just found out about it today and started playing with it a bit. Is there any way to get the trained model as an XGBoost, LightGBM or CatBoost class in order to fit a shap.TreeExplainer instance to it?

    Thanks in advance! -Nicolás

    opened by nicolasaldecoa 2
  • Xgboost parameter

    After calling hgb.plot_params(), the learning-rate parameter is shown as 796, which does not seem reasonable. Is there a way to see the actual model parameters found by the hyperparameter optimization?

    opened by LAH19999 2
  • HP Tuning: best_model uses different parameters from those that were reported as best ones

    I used hgboost for optimizing the hyper-parameters of my XGBoost model as described in the API References with the following parameters:

    hgb = hgboost()
    results = hgb.xgboost(X_train, y_train, pos_label=1, method='xgb_clf', eval_metric='logloss')
    

    As noted in the documentation, results is a dictionary that, among other things, returns the best performing parameters (best_params) and the best performing model (model). However, the parameters that the best performing model uses are different from what the function returns as best_params:

    best_params

    'params': {'colsample_bytree': 0.47000000000000003,
      'gamma': 1,
      'learning_rate': 534,
      'max_depth': 49,
      'min_child_weight': 3.0,
      'n_estimators': 36,
      'subsample': 0.96}
    

    model

    'model': XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
                   colsample_bynode=1, colsample_bytree=0.47000000000000003,
                   enable_categorical=False, gamma=1, gpu_id=-1,
                   importance_type=None, interaction_constraints='',
                   learning_rate=0.058619090164329916, max_delta_step=0,
                   max_depth=54, min_child_weight=3.0, missing=nan,
                   monotone_constraints='()', n_estimators=200, n_jobs=-1,
                   num_parallel_tree=1, predictor='auto', random_state=0,
                   reg_alpha=0, reg_lambda=1, scale_pos_weight=0.5769800646551724,
                   subsample=0.96, tree_method='exact', validate_parameters=1,
                   verbosity=0),
    

    As you can see, for example, max_depth=49 in the best_params, but the model uses max_depth=54 etc.

    Is this a bug or the intended behavior? In case of the latter, I'd really appreciate an explanation!

    My setup:

    • OS: WSL (Ubuntu)
    • Python: 3.9.7
    • hgboost: 1.0.0
    opened by Mikki99 1
  • Running regression example error

    opened by recherHE 1
  • Error in RMSE calculation

    if self.eval_metric=='rmse':
        loss = mean_squared_error(y_test, y_pred)
    

    mean_squared_error in sklearn gives the MSE; use mean_squared_error(y_true, y_pred, squared=False) to obtain the RMSE.

    opened by SSLPP 1
  • numpy.AxisError: axis 1 is out of bounds for array of dimension 1

    When eval_metric is auc, it raises an error. The related line is hgboost.py:906 and the related issue is: https://stackoverflow.com/questions/61288972/axiserror-axis-1-is-out-of-bounds-for-array-of-dimension-1-when-calculating-auc

    opened by quancore 0
  • ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].

    There is an error when the f1 score is used for multi-class classification. It occurs at hgboost.py:904 while calculating the f1 score: the average parameter defaults to 'binary', which is not suitable for multi-class problems.

    opened by quancore 0
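
Worked example for the Test:Validation:Train split issue above. This is plain arithmetic with hypothetical numbers, not hgboost code; it illustrates why the test fraction should be rescaled by 1 - val_size once the validation set has been removed:

n_samples = 1000
val_size, test_size = 0.2, 0.2

n_val = int(n_samples * val_size)        # 200 samples held out for validation
n_remaining = n_samples - n_val          # 800 samples left for the train/test split

# Splitting the remaining data with test_size=0.2 yields only 160 test samples,
# i.e. 16% of the original data instead of the intended 20%.
print(int(n_remaining * test_size))                    # 160

# Rescaling as test_size / (1 - val_size) restores the intended 20% of the original.
print(int(n_remaining * test_size / (1 - val_size)))   # 200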