moDel Agnostic Language for Exploration and eXplanation

Overview

An unverified black box model is the path to failure. Opaqueness leads to distrust. Distrust leads to ignorance. Ignorance leads to rejection.

The DALEX package X-rays any model and helps to explore and explain its behaviour, making it easier to understand how complex models work. The main function explain() creates a wrapper around a predictive model. Wrapped models can then be explored and compared with a collection of local and global explainers that build on recent developments in Interpretable Machine Learning (IML) and eXplainable Artificial Intelligence (XAI).
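
A minimal sketch of this workflow in R, using the titanic_imputed dataset shipped with DALEX (the glm model below is only an illustrative choice, not prescribed by the package):

library(DALEX)

# fit any predictive model; here a simple logistic regression
model <- glm(survived ~ gender + age + fare,
             data = titanic_imputed, family = "binomial")

# explain() wraps the model in a unified interface for all explainers
explainer <- explain(model,
                     data  = titanic_imputed[, -8],
                     y     = titanic_imputed$survived,
                     label = "Titanic GLM")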

The philosophy behind DALEX explanations is described in the Explanatory Model Analysis e-book. The DALEX package is a part of the DrWhy.AI universe.

If you work with models created in scikit-learn, keras, H2O, tidymodels, xgboost, mlr, or mlr3, you may be interested in the DALEXtra R package, an extension of DALEX with easy-to-use explain_*() functions for models from these libraries.

An additional overview of the dalex Python package is available.

Installation

The DALEX R package can be installed from CRAN

install.packages("DALEX")

The dalex Python package is available on PyPI and conda-forge

pip install dalex -U

conda install -c conda-forge dalex

Learn more

Machine learning models are widely used and have various applications in classification and regression tasks. Due to increasing computational power, the availability of new data sources, and new methods, ML models are becoming more and more complex. Models created with techniques like boosting, bagging, or neural networks are true black boxes: it is hard to trace the link between input variables and model outcomes. They are used because of their high performance, but their lack of interpretability is one of their weakest sides.

In many applications we need to know, understand, or prove how input variables are used in the model and what impact they have on the final model prediction. DALEX is a set of tools that help to understand how complex models work.
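
Continuing the illustrative sketch from the Overview (the explainer object and variable names are assumptions carried over from that example), two global explainers address exactly this question:

# permutation-based variable importance: which variables matter most?
vip <- model_parts(explainer)
plot(vip)

# partial dependence: how does the average prediction depend on one variable?
pdp <- model_profile(explainer, variables = "age")
plot(pdp)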

Resources

R package

Python package

Talks about DALEX

Citation

If you use DALEX in R or dalex in Python, please cite our JMLR papers:

@article{JMLR:v19:18-416,
  author  = {Przemyslaw Biecek},
  title   = {DALEX: Explainers for Complex Predictive Models in R},
  journal = {Journal of Machine Learning Research},
  year    = {2018},
  volume  = {19},
  number  = {84},
  pages   = {1-5},
  url     = {http://jmlr.org/papers/v19/18-416.html}
}

@article{JMLR:v22:20-1473,
  author  = {Hubert Baniecki and
             Wojciech Kretowicz and
             Piotr Piatyszek and 
             Jakub Wisniewski and 
             Przemyslaw Biecek},
  title   = {dalex: Responsible Machine Learning 
             with Interactive Explainability and Fairness in Python},
  journal = {Journal of Machine Learning Research},
  year    = {2021},
  volume  = {22},
  number  = {214},
  pages   = {1-7},
  url     = {http://jmlr.org/papers/v22/20-1473.html}
}

Why

76 years ago Isaac Asimov devised the Three Laws of Robotics: 1) a robot may not injure a human being, 2) a robot must obey the orders given to it by human beings, and 3) a robot must protect its own existence. These laws shape the discussion around the ethics of AI. Today's robots, like cleaning robots, robotic pets, or autonomous cars, are far from being conscious enough to fall under Asimov's ethics.

Today we are surrounded by complex predictive algorithms used for decision making. Machine learning models are used in health care, politics, education, the judiciary, and many other areas. Black box predictive models have a far larger influence on our lives than physical robots. Yet applications of such models are left unregulated, despite many examples of their potential harmfulness. See Weapons of Math Destruction by Cathy O'Neil for an excellent overview of the potential problems.

It's clear that we need to control algorithms that may affect us. Such control is part of our civic rights. Here we propose three requirements that any predictive model should fulfill.

  • Prediction's justifications. For every prediction of a model, one should be able to understand which variables affect the prediction and how strongly. Variable attribution to the final prediction.
  • Prediction's speculations. For every prediction of a model, one should be able to understand how the model prediction would change if the input variables were changed. Hypothesizing about what-if scenarios.
  • Prediction's validations. For every prediction of a model, one should be able to verify how strong the evidence is that confirms this particular prediction.

There are two ways to comply with these requirements. One is to use only models that fulfill these conditions by design, such as white-box models like linear regression or decision trees. In many cases the price for transparency is lower performance. The other way is to use approximated explainers, techniques that find only approximate answers but work for any black box model. Here we present such techniques.
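
As a hedged illustration of the first two requirements (reusing the assumed explainer from the Overview sketch; the chosen passenger row is arbitrary):

# a single observation to explain
passenger <- titanic_imputed[1, ]

# justification: which variables drive this prediction, and how strongly?
bd <- predict_parts(explainer, new_observation = passenger, type = "break_down")
plot(bd)

# speculation: how would the prediction change if an input variable changed?
cp <- predict_profile(explainer, new_observation = passenger)
plot(cp)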

Acknowledgments

Work on this package was financially supported by the NCN Opus grant 2016/21/B/ST6/02176 and NCN Opus grant 2017/27/B/ST6/0130.

Comments
  • tensorflow.python.keras.engine.functional.Functional Exception: Data must be 1-dimensional

    Hi, I want to use dalex with a model created with AutoKeras; with a self-made Sequential model it worked. What is the problem?

    Model from Autokeras structured_data_classifier : <tensorflow.python.keras.engine.functional.Functional at 0x1bbab227340> https://autokeras.com/structured_data_classifier/

    explainer = dx.Explainer(
        model = model,
        data = X_test,
        y = y_test,
        model_type= "classification")
    
    #%%
    explainer.model_performance()
    
    explainer.model_parts().plot()
    
    Python 🐍 invalid ❕ 
    opened by blanpa 16
  • How to create an explainer on tuned models with mlr3

    I have the following code, for parameters tuning with mlr3

    df = readARFF("xerces.arff")
    task = TaskRegr$new("df", df, target = "price")
    learner = lrn("regr.rpart")
    resampling = rsmp("repeated_cv", folds = 3, repeats = 3)
    measure = msr("regr.mae")

    search_space = paradox::ParamSet$new(
      params = list(paradox::ParamDbl$new("cp", lower = 0.001, upper = 0.1)))
    terminator = trm("evals", n_evals = 15)

    tuner = tnr("grid_search")

    at = AutoTuner$new(
      learner = learner, resampling = resampling, measure = measure,
      search_space = search_space, terminator = terminator, tuner = tuner,
      store_tuning_instance = TRUE, store_benchmark_result = TRUE,
      store_models = TRUE)

    bmr = benchmark(grid, store_models = TRUE)

    My question is how can I create an explainer on this model?

    explainer = explain_mlr3( ??,...........)
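
    Not an authoritative answer, but a minimal sketch of one approach that is sometimes used: an AutoTuner is itself an mlr3 learner, so after training it on the task it can be passed to DALEXtra::explain_mlr3(). The column name "price" is taken from the snippet above; everything else is an assumption.

    library(DALEXtra)

    # train the tuner on the task (runs the inner tuning loop)
    at$train(task)

    # wrap the tuned learner in an explainer
    explainer <- explain_mlr3(at,
                              data  = df[, setdiff(colnames(df), "price")],
                              y     = df$price,
                              label = "tuned rpart")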

    question ❔ R 🐳 
    opened by Nehagupta90 15
  • Error in `predict_surrogate` when `new_observation` has a target value

    I have used the Break-down method for instance-level explanation and it works fine. I have never used the LIME method, and now when I am using it, it gives me the following error:

    Error in [.data.frame(explainer$data, , colnames(new_observation)) : undefined columns selected

    My code is:

    explainer5 = explain_mlr3(model5, data = test[,-21], y = as.numeric(test$report)-1, label="SVM")

    new_observation = test[6, ]
    # this works fine:
    plot(predict_parts(explainer5, new_observation = new_observation,
                       type = "break_down_interactions"))

    # the problem is in the following code:

    model_type.dalex_explainer <- DALEXtra::model_type.dalex_explainer
    predict_model.dalex_explainer <- DALEXtra::predict_model.dalex_explainer

    lime_tool <- predict_surrogate(explainer = explainer5,
                                   new_observation = new_observation,
                                   n_features = 3, n_permutations = 1000,
                                   type = "lime")

    Error in [.data.frame(explainer$data, , colnames(new_observation)) : undefined columns selected

    What could be the problem? I am following the example at https://ema.drwhy.ai/LIME.html.

    bug 💣 R 🐳 
    opened by Nehagupta90 13
  • How to interpret an instance-level explanation

    Hi, I am a bit confused about a plot of an instance-level explanation. I have to predict something with a value from 0 to 10, where 10 means a product is very expensive and 0 means it is not expensive at all. I get a prediction of 0.7 (which of course means not expensive), and the majority of input features have a negative impact (red bars instead of green bars). So my question is how to interpret this explanation: if the value of such an input feature decreases, will the prediction decrease (go from 0.7 towards 0.0) or increase (go from 0.7 towards higher values such as 10)?

    Thank you, and I hope you have understood my point.

    question ❔ 
    opened by Nehagupta90 13
  • Discussion: default for loss_function in model_parts

    I've noticed different results of feature_importance in DALEX and ingredients. It is caused by different default values for loss_function. I was wondering whether they shouldn't be the same, for consistency reasons.

    https://github.com/ModelOriented/DALEX/blob/8858926c4abb5b7cd7d1a16bfd9f6df2434003b0/R/model_parts.R#L43

    https://github.com/ModelOriented/ingredients/blob/9f8f82f05ddb87e4abf2fb20846f097113522a33/R/feature_importance.R#L152

    > library("DALEX")
    > model_titanic_glm <- glm(survived ~ gender + age + fare,
                             data = titanic_imputed, family = "binomial")
    > explain_titanic_glm <- explain(model_titanic_glm,
                                   data = titanic_imputed[,-8],
                                   y = titanic_imputed[,8])
    
    > DALEX::feature_importance(explain_titanic_glm, n_sample = NULL) 
          variable mean_dropout_loss label
    1 _full_model_          370.1496    lm
    2        class          370.1496    lm
    3     embarked          370.1496    lm
    4        sibsp          370.1496    lm
    5        parch          370.1496    lm
    6          age          371.6003    lm
    7         fare          384.4722    lm
    8       gender          548.2130    lm
    9   _baseline_          599.7132    lm
    
    > ingredients::feature_importance(explain_titanic_glm)
          variable mean_dropout_loss label
    1 _full_model_         0.4095316    lm
    2        class         0.4095316    lm
    3     embarked         0.4095316    lm
    4        sibsp         0.4095316    lm
    5        parch         0.4095316    lm
    6          age         0.4101522    lm
    7         fare         0.4173782    lm
    8       gender         0.4976695    lm
    9   _baseline_         0.5178448    lm
    
    > ingredients::feature_importance(explain_titanic_glm, loss_function = DALEX::loss_sum_of_squares)
          variable mean_dropout_loss label
    1 _full_model_          370.1496    lm
    2        class          370.1496    lm
    3     embarked          370.1496    lm
    4        sibsp          370.1496    lm
    5        parch          370.1496    lm
    6          age          371.0894    lm
    7         fare          384.0622    lm
    8       gender          546.5614    lm
    9   _baseline_          598.0323    lm
    

    (Because of https://github.com/ModelOriented/DALEX/issues/175, n_sample = NULL in DALEX::feature_importance)

    R 🐳 
    opened by kasiapekala 13
  • PDP with default and optimized settings

    Hello

    I want to evaluate the partial dependence profile of a variable with an RF model with default settings and an RF model with optimized settings. Can I do this?

    default_model = model_profile(explainer = my_explainer, variables = "var1")
    opt_model = model_profile(explainer = my_explainer, variables = "var1")
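
    A hedged sketch of one way to compare the two settings (the model objects, data, and variable name below are placeholders, not taken from the question): build one explainer per fitted model and pass both profiles to a single plot() call.

    library(DALEX)

    # rf_default and rf_tuned are hypothetical, separately fitted models
    exp_default <- explain(rf_default, data = X, y = y, label = "RF default")
    exp_tuned   <- explain(rf_tuned,   data = X, y = y, label = "RF tuned")

    pdp_default <- model_profile(exp_default, variables = "var1")
    pdp_tuned   <- model_profile(exp_tuned,   variables = "var1")

    # plot() overlays profiles from several explainers for direct comparison
    plot(pdp_default, pdp_tuned)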

    question ❔ 
    opened by Nehagupta90 10
  • iBreakDown::local_attributions() throws Error: "undefined columns selected"

    I trained 5 models with caret::train. The following code works for svmRadial, gbm and rf, BUT NOT for lm, knn:

    DALEX.explainer <- model %>%
      DALEX::explain(
        data = features,
        y = model$trainingData$.outcome >= UPPER.MIN,
        label = paste(model$method, " model"),
        colorize = TRUE
      )
    
    DALEX.attribution <- DALEX.explainer %>%
      iBreakDown::local_attributions(
        local.obs,
        keep_distributions = TRUE
      )
    

    throws error:

    Error in `[.data.frame`(newdata, , object$method$center, drop = FALSE) : 
      undefined columns selected
    

    In the documentation, I couldn't find any hint that the function is model-dependent. Any clue why this doesn't work for lm and knn??

    opened by agilebean 10
  • plot predict_parts in R gives scatterplot instead of break-down-plot

    I'm using DALEX in R, trying to create a break_down plot from a new observation and an XGBoost model.

    I successfully created the explain object and the predict_parts break_down data frame, which BTW took ages (more than a day to compute; does anyone have an intuitive idea why it takes so long or how to speed things up?).

    But plotting the predict_parts object only gives a scatterplot; I was expecting a nice waterfall-looking plot.

    I'm sorry for not being able to post a minimal reproducible example or to share the data or model, but here is the snippet:

    library(DALEX)
      
    explain <- DALEX::explain(trained_model, data = train, y = as.numeric(train$PRUEFERGEBNIS)-1, type = "classification", label = "xgboost", predict_function = 
                                  function(trained_model, test){
                                    previous_na_action <- options('na.action')
                                    options(na.action='na.pass')
                                    sparse_matrix_test <- sparse.model.matrix(PRUEFERGEBNIS ~.
                                                                              , data = test)
                                    options(na.action=previous_na_action$na.action)
                                    
                                    results_test <- predict(trained_model, sparse_matrix_test, type = "response")
                                    round(results_test,3)
                                  })
    
    parts_plot <- predict_parts(explain, new_observation = test[1, , drop = FALSE], type = "break_down")
    plot(parts_plot)
    

    [image: screenshot of the resulting scatterplot]

    R 🐳 invalid ❕ 
    opened by MJimitater 9
  • Working with case weights

    Wonderful package, thanks a lot, I am using it quite frequently.

    Here are some suggestions, especially having models with case weights in mind:

    1. The function model_performance allows passing a loss function of the residuals (as a single argument) only. This seems too restrictive, as many loss functions cannot be written as a function of the residuals but rather of the observed value and the predicted one. Most DALEX functions allow a general loss function with two arguments, not just one (see the sketch after this list). This inconsistency is e.g. a problem when we want to plot deviance residuals of a logistic regression.

    2. Ability to enhance plots with ggplot2 functions, i.e. allowing to add e.g. + geom_point(...).

    3. variable_importance: In our case, the custom predict function depends on a weight vector w (basically the raw predictions are multiplied by w). Since the DALEX interface requires this vector to be a column in the data frame passed to explain, w currently appears in the variable_importance output as well. Would it be an option to add an optional "dropVars = NULL" argument to variable_importance (or even in explain)?

    4. Like issue 3, but for the single_prediction part. Here, it is not only an aesthetic problem if a "special" column like case weight w appears.
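
    A minimal sketch of the two-argument convention mentioned in point 1 (the explainer object is a placeholder assumed to have been created earlier with DALEX::explain()): custom losses passed to model_parts() take the observed and predicted values, not the residuals.

    library(DALEX)

    # a custom loss written over (observed, predicted) rather than residuals
    loss_mae <- function(observed, predicted) mean(abs(observed - predicted))

    mp <- model_parts(explainer, loss_function = loss_mae)
    plot(mp)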

    feature 💡 long term 📆 R 🐳 
    opened by mayer79 9
  • devtools::install_github("pbiecek/DALEX") hangs and the install never completes

    Did the prescribed devtools::install_github("pbiecek/DALEX"), but RStudio (latest version under Linux) will not complete the DALEX install. After the above command, R just hangs and nothing happens.

    Waited for 5 minutes, then suspended the installation.

    What am I missing in order to install DALEX? (All other R packages have installed with no problems.)

    Thanks!

    opened by sfd99 9
  • Difference between PD profile and CP profile when we manually specify the instances

    Hi

    If I have 100 observations/instances and want to create the PD profile of a few important features, then what is the difference between the PD profile of these instances using

    variab = c("var1", "var2", "var3")
    pdp <- model_profile(explainer = explainer, variables = variab)
    plot(pdp)

    and the aggregated CP profiles when we manually specify the instances, like

    new_observation = data[c(2, 7, 8, 11, 15, 16, 21, 24, 25, 26, 30, 45, 46), ]
    cp <- ceteris_paribus(explainer, new_observation)
    cp_agg <- aggregate_profiles(cp, variables = variab)
    plot(cp_agg)

    opened by Nehagupta90 8
  • `loss_accuracy` returns 0 for `mean_dropout_loss`

    I would like to use loss_accuracy as my loss function in model_parts(), but whenever I use it, the mean_dropout_loss is always 0. I have tried loss_accuracy for regression, classification, and multiclass classification (see the reprex below). Am I using it correctly?

    library(DALEX)
    #> Welcome to DALEX (version: 2.4.2).
    #> Find examples and detailed introduction at: http://ema.drwhy.ai/
    library(ranger)
    df <- mtcars[, c('mpg', 'cyl', 'disp', 'hp', 'vs')]
    # Regression
    reg <- lm(mpg ~ ., data = df)
    explainer_reg <- explain(reg, data = df[,-1], y = df[,1])
    #> Preparation of a new explainer is initiated
    #>   -> model label       :  lm  (  default  )
    #>   -> data              :  32  rows  4  cols 
    #>   -> target variable   :  32  values 
    #>   -> predict function  :  yhat.lm  will be used (  default  )
    #>   -> predicted values  :  No value for predict function target column. (  default  )
    #>   -> model_info        :  package stats , ver. 4.2.2 , task regression (  default  ) 
    #>   -> predicted values  :  numerical, min =  12.56206 , mean =  20.09062 , max =  27.04625  
    #>   -> residual function :  difference between y and yhat (  default  )
    #>   -> residuals         :  numerical, min =  -4.019038 , mean =  1.010303e-14 , max =  6.976988  
    #>   A new explainer has been created!
    feature_importance(explainer_reg, loss_function = loss_accuracy)
    #>       variable mean_dropout_loss label
    #> 1 _full_model_                 0    lm
    #> 2          cyl                 0    lm
    #> 3         disp                 0    lm
    #> 4           hp                 0    lm
    #> 5           vs                 0    lm
    #> 6   _baseline_                 0    lm
    # Classification
    classif <- glm(vs ~ ., data = df, family = binomial)
    explainer_classif <- explain(classif, data = df[,-5], y = df[,5])
    #> Preparation of a new explainer is initiated
    #>   -> model label       :  lm  (  default  )
    #>   -> data              :  32  rows  4  cols 
    #>   -> target variable   :  32  values 
    #>   -> predict function  :  yhat.glm  will be used (  default  )
    #>   -> predicted values  :  No value for predict function target column. (  default  )
    #>   -> model_info        :  package stats , ver. 4.2.2 , task classification (  default  ) 
    #>   -> predicted values  :  numerical, min =  7.696047e-06 , mean =  0.4375 , max =  0.9920295  
    #>   -> residual function :  difference between y and yhat (  default  )
    #>   -> residuals         :  numerical, min =  -0.9474062 , mean =  -1.483608e-12 , max =  0.5318376  
    #>   A new explainer has been created!
    feature_importance(explainer_classif, loss_function = loss_accuracy)
    #>       variable mean_dropout_loss label
    #> 1 _full_model_                 0    lm
    #> 2          mpg                 0    lm
    #> 3          cyl                 0    lm
    #> 4         disp                 0    lm
    #> 5           hp                 0    lm
    #> 6   _baseline_                 0    lm
    # Multiclass classification
    multiclass <- ranger(cyl ~ ., data = df, probability = TRUE)
    explainer_multiclass <- explain(multiclass, data = df[,-2], y = df[,2])
    #> Preparation of a new explainer is initiated
    #>   -> model label       :  ranger  (  default  )
    #>   -> data              :  32  rows  4  cols 
    #>   -> target variable   :  32  values 
    #>   -> predict function  :  yhat.ranger  will be used (  default  )
    #>   -> predicted values  :  No value for predict function target column. (  default  )
    #>   -> model_info        :  package ranger , ver. 0.14.1 , task multiclass (  default  ) 
    #>   -> model_info        :  Model info detected multiclass task but 'y' is a numeric .  (  WARNING  )
    #>   -> model_info        :  By deafult multiclass tasks supports only factor 'y' parameter. 
    #>   -> model_info        :  Consider changing to a factor vector with true class names.
    #>   -> model_info        :  Otherwise I will not be able to calculate residuals or loss function.
    #>   -> predicted values  :  predict function returns multiple columns:  3  (  default  ) 
    #>   -> residual function :  difference between 1 and probability of true class (  default  )
    #>   -> residuals         :  the residual_function returns an error when executed (  WARNING  ) 
    #>   A new explainer has been created!
    feature_importance(explainer_multiclass, loss_function = loss_accuracy)
    #>       variable mean_dropout_loss  label
    #> 1 _full_model_                 0 ranger
    #> 2          mpg                 0 ranger
    #> 3         disp                 0 ranger
    #> 4           hp                 0 ranger
    #> 5           vs                 0 ranger
    #> 6   _baseline_                 0 ranger
    

    Created on 2022-12-29 with reprex v2.0.2

    When I try other loss functions (e.g., loss_root_mean_square for regression, loss_one_minus_auc for classification), they return non-zero values.

    feature_importance(explainer_reg, loss_function = loss_root_mean_square)
    #>       variable mean_dropout_loss label
    #> 1 _full_model_          2.844520    lm
    #> 2           vs          2.861546    lm
    #> 3           hp          3.328176    lm
    #> 4         disp          4.201312    lm
    #> 5          cyl          4.498485    lm
    #> 6   _baseline_          7.777811    lm
    feature_importance(explainer_classif, loss_function = loss_one_minus_auc)
    #>       variable mean_dropout_loss label
    #> 1 _full_model_        0.03571429    lm
    #> 2          mpg        0.04603175    lm
    #> 3         disp        0.04642857    lm
    #> 4           hp        0.31785714    lm
    #> 5          cyl        0.36031746    lm
    #> 6   _baseline_        0.51884921    lm
    

    Created on 2022-12-29 with reprex v2.0.2

    Is there something different about how loss_accuracy is used?

    I'm using DALEX v2.4.2, R v4.2.2, RStudio v2022.12.0+353, Ubuntu 22.04.1

    R 🐳 invalid ❕ 
    opened by JeffreyRStevens 8
  • Fairness module: why using those specific checks, while omitting others?

    Hi, I am using your fairness package for my project, and I wondered about your fairness checks:

    For the equal opportunity ratio test you use TPR, not FNR (the true ratio and not the complementary false ratio), but for the predictive equality ratio test you use FPR, not TNR (the false ratio, not the complementary true ratio).

    I wondered what is the justification for TPR and FPR specifically (and not TPR and TNR for example). Thanks!

    question ❔ 
    opened by ronnee19 1
  • collinearity issue in permutation based feature importance (technical question)

    I use DALEX in most of my ML research projects. I keep getting a criticism from reviewers regarding a drawback of permutation-based methods in the presence of multiple correlated predictors. They argue that if there is a group of highly correlated but important predictors, they may not show up at the top of the feature importance ranking. Can someone comment on this criticism? How is this addressed in DALEX?
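
    One option sometimes suggested for this situation is to permute correlated predictors jointly via the variable_groups argument; a minimal sketch, assuming a hypothetical explainer and hypothetical correlated columns x1, x2, x3:

    library(DALEX)

    # permuting a correlated group together removes (and measures) their
    # shared information jointly instead of splitting it across variables
    mp <- model_parts(explainer,
                      variable_groups = list(correlated = c("x1", "x2", "x3")))
    plot(mp)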

    question ❔ 
    opened by asheetal 7
  • [fastai] 'DataFrame' object has no attribute 'to_frame' with fastai

    I'm trying to wrap a fastai tabular learner with DALEX. I got 'DataFrame' object has no attribute 'to_frame' error with dx.Explainer(learn, xs, y, label = "Deep NN"). Any potential problems with this line of code? Thanks!

    feature 💡 question ❔ Python 🐍 
    opened by ming-cui 12
  • python model_parts: Is there show_boxplots parameter?

    In the R package, show_boxplots = TRUE would plot the permutations in feature importance. In the Python model_parts, this parameter does not seem to exist. Or is it called something else?

    feature 💡 Python 🐍 
    opened by asheetal 1