ThunderGBM: Fast GBDTs and Random Forests on GPUs


Documentation | Installation | Parameters | Python (scikit-learn) interface

What's new?

ThunderGBM won the 2019 Best Paper Award from IEEE Transactions on Parallel and Distributed Systems, presented by the IEEE Computer Society Publications Board (1 out of 987 submissions), for the work: Zeyi Wen, Jiashuai Shi, Bingsheng He, Jian Chen, Kotagiri Ramamohanarao, and Qinbin Li, "Exploiting GPUs for Efficient Gradient Boosting Decision Tree Training", IEEE Transactions on Parallel and Distributed Systems, vol. 30, no. 12, 2019, pp. 2706-2717. See more details: Best Paper Award Winners from IEEE, News from NUS School of Computing.

Overview

The mission of ThunderGBM is to help users easily and efficiently apply GBDTs and Random Forests to solve problems. ThunderGBM exploits GPUs to achieve high efficiency. Key features of ThunderGBM are as follows.

  • Fast: often 10x faster than other libraries.
  • Python (scikit-learn) interface.
  • Supported operating systems: Linux and Windows.
  • Supports classification, regression and ranking.

Why accelerate GBDTs and Random Forests? A survey conducted by Kaggle in 2017 shows that 50%, 46% and 24% of data mining and machine learning practitioners are users of Decision Trees, Random Forests and GBMs, respectively.

GBDTs and Random Forests are often used for creating state-of-the-art data science solutions. Please check out the XGBoost website for winning solutions and use cases of GBDTs and Random Forests.

Getting Started

Prerequisites

  • cmake 2.8 or above
  • For Linux: gcc 4.8 or above, CUDA 9 or above
  • For Windows: Visual C++, CUDA 10

Quick Install

  • For Linux with CUDA 9.0

    • pip install thundergbm

  • For Windows (64-bit)

    • Download the Python wheel file (for Python 3)

    • Install the Python wheel file

      • pip install thundergbm-0.3.4-py3-none-win_amd64.whl

  • Currently only Python 3 is supported.

  • After you have installed thundergbm, you can import and use the classifier (similarly for the regressor) by:

from thundergbm import TGBMClassifier
clf = TGBMClassifier()
clf.fit(x, y)
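
For a fuller end-to-end run, below is a minimal sketch on scikit-learn's digits dataset; the dataset and the accuracy check are illustrative choices rather than part of ThunderGBM, and the hyperparameter values are only examples:

from thundergbm import TGBMClassifier
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score

# any dense or sparse (n_samples, n_features) matrix can serve as input
x, y = load_digits(return_X_y=True)
clf = TGBMClassifier(depth=6, n_trees=40)  # example hyperparameters
clf.fit(x, y)
y_pred = clf.predict(x)
print(accuracy_score(y, y_pred))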

Build from source

git clone https://github.com/zeyiwen/thundergbm.git
cd thundergbm
#under the directory of thundergbm
git submodule init cub && git submodule update

Build on Linux (build instructions for Windows)

#under the directory of thundergbm
mkdir build && cd build && cmake .. && make -j

Quick Start

./bin/thundergbm-train ../dataset/machine.conf
./bin/thundergbm-predict ../dataset/machine.conf

You will see RMSE = 0.489562 after a successful run.
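
The same training-and-prediction flow is available from Python. Below is a minimal sketch using the regressor; the LibSVM-format dataset path is a hypothetical placeholder, and computing the RMSE through scikit-learn is an illustrative choice:

from thundergbm import TGBMRegressor
from sklearn.datasets import load_svmlight_file
from sklearn.metrics import mean_squared_error

# hypothetical path: point this at any LibSVM-format dataset
x, y = load_svmlight_file("../dataset/my_data.txt")
reg = TGBMRegressor()
reg.fit(x, y)
y_pred = reg.predict(x)
print("RMSE =", mean_squared_error(y, y_pred) ** 0.5)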

macOS is not supported, as Apple has suspended support for some NVIDIA GPUs. We will consider supporting macOS based on feedback from our user community. Please stay tuned.

How to cite ThunderGBM

If you use ThunderGBM in your paper, please cite our work (TPDS and JMLR).

@ARTICLE{8727750,
  author={Z. {Wen} and J. {Shi} and B. {He} and J. {Chen} and K. {Ramamohanarao} and Q. {Li}},
  journal={IEEE Transactions on Parallel and Distributed Systems}, 
  title={Exploiting GPUs for Efficient Gradient Boosting Decision Tree Training}, 
  year={2019},
  volume={30},
  number={12},
  pages={2706-2717},
  }

@article{wenthundergbm19,
 author = {Wen, Zeyi and Shi, Jiashuai and He, Bingsheng and Li, Qinbin and Chen, Jian},
 title = {{ThunderGBM}: Fast {GBDTs} and Random Forests on {GPUs}},
 journal = {Journal of Machine Learning Research},
 volume={21},
 year = {2020}
}

Related papers

  • Zeyi Wen, Jiashuai Shi, Bingsheng He, Jian Chen, Kotagiri Ramamohanarao and Qinbin Li. Exploiting GPUs for Efficient Gradient Boosting Decision Tree Training. IEEE Transactions on Parallel and Distributed Systems (TPDS), accepted in May 2019. pdf

  • Zeyi Wen, Hanfeng Liu, Jiashuai Shi, Qinbin Li, Bingsheng He, Jian Chen. ThunderGBM: Fast GBDTs and Random Forests on GPUs. Featured at JMLR MLOSS (Machine Learning Open Source Software). Year: 2020, Volume: 21, Issue: 108, Pages: 1−5. pdf

  • Zeyi Wen, Bingsheng He, Kotagiri Ramamohanarao, Shengliang Lu, and Jiashuai Shi. Efficient Gradient Boosted Decision Tree Training on GPUs. The 32nd IEEE International Parallel and Distributed Processing Symposium (IPDPS), pages 234-243, 2018. pdf

Key members of ThunderGBM

  • Zeyi Wen, NUS (now at The University of Western Australia)
  • Hanfeng Liu, GDUFS (a visiting student at NUS)
  • Jiashuai Shi, SCUT (a visiting student at NUS)
  • Qinbin Li, NUS
  • Advisor: Bingsheng He, NUS
  • Collaborators: Jian Chen (SCUT)

Other information

  • This work is supported by a MoE AcRF Tier 2 grant (MOE2017-T2-1-122) and an NUS startup grant in Singapore.

Related libraries

  • ThunderSVM: a fast SVM library on GPUs and CPUs.

Comments
  • build boost library error(win10)

    Python 2.7.15

    boost-install.generate-cmake-config- D:\boost\boost_1_70_0\lib\cmake\boost_coroutine-1.70.0\boost_coroutine-config.cmake
    boost-install.generate-cmake-config-version- D:\boost\boost_1_70_0\lib\cmake\boost_coroutine-1.70.0\boost_coroutine-config-version.cmake
    boost-install.generate-cmake-variant- D:\boost\boost_1_70_0\lib\cmake\boost_coroutine-1.70.0\libboost_coroutine-variant-vc140-mt-gd-x32-1_70-shared.cmake
    boost-install.generate-cmake-config- D:\boost\boost_1_70_0\lib\cmake\boost_thread-1.70.0\boost_thread-config.cmake
    boost-install.generate-cmake-config-version- D:\boost\boost_1_70_0\lib\cmake\boost_thread-1.70.0\boost_thread-config-version.cmake
    boost-install.generate-cmake-variant- D:\boost\boost_1_70_0\lib\cmake\boost_thread-1.70.0\libboost_thread-variant-vc140-mt-gd-x32-1_70-shared.cmake
    boost-install.generate-cmake-config- D:\boost\boost_1_70_0\lib\cmake\boost_date_time-1.70.0\boost_date_time-config.cmake
    boost-install.generate-cmake-config-version- D:\boost\boost_1_70_0\lib\cmake\boost_date_time-1.70.0\boost_date_time-config-version.cmake
    boost-install.generate-cmake-variant- D:\boost\boost_1_70_0\lib\cmake\boost_date_time-1.70.0\libboost_date_time-variant-vc140-mt-gd-x32-1_70-shared.cmake
    boost-install.generate-cmake-config- D:\boost\boost_1_70_0\lib\cmake\boost_exception-1.70.0\boost_exception-config.cmake
    boost-install.generate-cmake-config-version- D:\boost\boost_1_70_0\lib\cmake\boost_exception-1.70.0\boost_exception-config-version.cmake
    ...failed updating 2949 targets...
    ...skipped 16 targets...
    ...updated 12449 targets...

    opened by lvpinrui 22
  • There was an error using thundergbm-predict file

    My environment is: Windows 10, Visual Studio 2017 Community, CMake 3.14.0-rc3, CUDA 10.1.105. Using test_dataset.txt from the repository succeeds, but using my own .txt file fails. My thundergbm-predict output: (image)

    When building thundergbm-predict: (image)

    opened by lvpinrui 13
  • Make error

    Hi, I am getting the following error when calling make -j

    [  3%] Building CXX object src/thundergbm/CMakeFiles/thundergbm.dir/objective/ranking_obj.cpp.o
    /tmp/thundergbm/src/thundergbm/objective/ranking_obj.cpp: In member function ‘virtual void LambdaRank::get_gradient(const SyncArray<float>&, const SyncArray<float>&, SyncArray<GHPair>&)’:
    /tmp/thundergbm/src/thundergbm/objective/ranking_obj.cpp:50:14: error: ‘mt19937’ is not a member of ‘std’
             std::mt19937 gen(std::rand());
                  ^~~~~~~
    /tmp/thundergbm/src/thundergbm/objective/ranking_obj.cpp:50:14: note: suggested alternative:
    In file included from /usr/include/c++/7/tr1/random:47:0,
                     from /usr/include/c++/7/parallel/random_number.h:36,
                     from /usr/include/c++/7/parallel/partition.h:38,
                     from /usr/include/c++/7/parallel/quicksort.h:36,
                     from /usr/include/c++/7/parallel/sort.h:48,
                     from /usr/include/c++/7/parallel/algo.h:45,
                     from /usr/include/c++/7/parallel/algorithm:37,
                     from /tmp/thundergbm/src/thundergbm/objective/ranking_obj.cpp:6:
    /usr/include/c++/7/tr1/random.h:701:7: note:   ‘std::tr1::mt19937’
         > mt19937;
           ^~~~~~~
    /tmp/thundergbm/src/thundergbm/objective/ranking_obj.cpp:61:33: error: ‘gen’ was not declared in this scope
                         int m = dis(gen);
                                     ^~~
    /tmp/thundergbm/src/thundergbm/objective/ranking_obj.cpp:61:33: note: suggested alternative: ‘len’
                         int m = dis(gen);
                                     ^~~
                                     len
    src/thundergbm/CMakeFiles/thundergbm.dir/build.make:11442: recipe for target 'src/thundergbm/CMakeFiles/thundergbm.dir/objective/ranking_obj.cpp.o' failed
    make[2]: *** [src/thundergbm/CMakeFiles/thundergbm.dir/objective/ranking_obj.cpp.o] Error 1
    CMakeFiles/Makefile2:131: recipe for target 'src/thundergbm/CMakeFiles/thundergbm.dir/all' failed
    make[1]: *** [src/thundergbm/CMakeFiles/thundergbm.dir/all] Error 2
    Makefile:83: recipe for target 'all' failed
    make: *** [all] Error 2
    
    
    opened by psinger 13
  • build thundergbm

    src/thundergbm/CMakeFiles/thundergbm.dir/build.make:147: recipe for target 'src/thundergbm/CMakeFiles/thundergbm.dir/thundergbm_generated_tree.cu.o' failed ...

    I have downloaded the latest version of ThunderGBM, but make still fails. Can you tell me how to fix it? Thank you very much!

    opened by mmobai 10
  • "File Modification Detected" and errors in Visual Studio

    I downloaded the branch with CUDA 11 support for Win10 and followed the instructions. When building the thundergbm.sln file in Visual Studio, I get a message that says "File Modification Detected". When choosing "Reload All" or any of the other options, the build ends with a bunch of errors. Please see the pictures below for more details.

    These are the steps I took before getting to the building process:

    git clone -b support_cuda11 --single-branch https://github.com/zeyiwen/thundergbm.git
    cd thundergbm
    git submodule init cub && git submodule update
    cd thundergbm
    mkdir build
    cd build
    cmake .. -DCMAKE_WINDOWS_EXPORT_ALL_SYMBOLS=TRUE -DBUILD_SHARED_LIBS=TRUE -G "Visual Studio 16 2019"

    The last step is to open the file thundergbm.sln and click on "build solution" in the "Build" menu in Visual Studio.

    Here is the message window: (image: "File Modification Detected")

    Down below are the errors generated by the build process, in the files xutility, xmemory, atomic and LINK: (image: "errors thundergbm")

    Any idea what might be causing this?

    opened by AlanSpencer2 7
  • Bug: Memory Leak in sci-kit interface

    Hello,

    I'm running TGBM on a Windows 10 machine with CUDA 10. In my use case I create a lot of different GBDTs, which go out of scope after some time. The problem is that the GPU memory stays allocated.

    My expectation was that when Python's garbage collector deletes the instance, the memory gets freed. However, even when I explicitly delete the instance, the memory stays allocated.

    I think that to resolve this issue, the model_free function in "scikit_tgbm.cpp" must be extended to free the allocated memory. Currently, I don't know how to do this, but maybe if I find time I will take a deeper look into memory management with CUDA.

    I found a very, very ugly workaround by changing some code in "thundergbm.py": I extended the __del__ function of TGBMModel to reload the whole DLL. This isn't a good solution, but it is okayish as a workaround if somebody really needs it.

    opened by Tripton 7
  • Building from source on ubuntu18.04 with CUDA11.0

    Hi, I am trying to build thundergbm on ubuntu18.04 with CUDA 11.0 using the instructions here. While building the binary, I get a string of warnings about C++14 (from CUB and THRUST), but the build proceeds until it hits this error:

    thundergbm/src/thundergbm/sparse_columns.cu(52): error: identifier "cusparseScsr2csc" is undefined
    1 error detected in the compilation of "thundergbm/src/thundergbm/sparse_columns.cu".
    CMake Error at thundergbm_generated_sparse_columns.cu.o.Release.cmake:279 (message):
      Error generating file thundergbm/build/src/thundergbm/CMakeFiles/thundergbm.dir//./thundergbm_generated_sparse_columns.cu.o
    src/thundergbm/CMakeFiles/thundergbm.dir/build.make:9146: recipe for target 'src/thundergbm/CMakeFiles/thundergbm.dir/thundergbm_generated_sparse_columns.cu.o' failed
    make[2]: *** [src/thundergbm/CMakeFiles/thundergbm.dir/thundergbm_generated_sparse_columns.cu.o] Error 1
    CMakeFiles/Makefile2:126: recipe for target 'src/thundergbm/CMakeFiles/thundergbm.dir/all' failed
    make[1]: *** [src/thundergbm/CMakeFiles/thundergbm.dir/all] Error 2
    Makefile:83: recipe for target 'all' failed
    make: *** [all] Error 2

    The line that is seemingly causing the problem (sparse_columns.cu(52)) is

    cusparseScsr2csc(handle, dataset.n_instances(), n_column, nnz, val.device_data(), row_ptr.device_data(), col_idx.device_data(), csc_val.device_data(), csc_row_idx.device_data(), csc_col_ptr.device_data(), CUSPARSE_ACTION_NUMERIC, CUSPARSE_INDEX_BASE_ZERO);

    Any suggestions on how to get around this?

    Thanks.

    call for contribution 
    opened by sensharma 6
  • CUDA Error 29: driver shutting down

    I get the following error at the end of training a TGBMRegressor with default parameters; it prevents the Python process from releasing the GPU memory, which stays allocated.

    2019-06-27 16:15:43,952 INFO [default] #instances = 1280, #features = 729
    2019-06-27 16:15:44,031 INFO [default] copy csr matrix to GPU
    2019-06-27 16:15:44,220 INFO [default] converting csr matrix to csc matrix
    2019-06-27 16:15:44,483 INFO [default] Getting cut points...
    2019-06-27 16:15:44,485 INFO [default] ################>>>: 1
    2019-06-27 16:15:44,491 INFO [default] ----------------->>> Last LOOP: 0.0072718
    2019-06-27 16:15:44,494 INFO [default] TOTAL CP:152949
    2019-06-27 16:15:45,617 INFO [default] RMSE = 3.72949
    2019-06-27 16:15:45,645 INFO [default] RMSE = 3.3029
    2019-06-27 16:15:45,672 INFO [default] RMSE = 2.93213
    2019-06-27 16:15:45,673 INFO [default] training time = 1.17665
    2019-06-27 16:15:45,711 INFO [default] #instances = 549, #features = 729
    CUDA error 29 [D:\thundergbm\src\thundergbm\syncmem.cpp, 576]: driver shutting down
    CUDA error 29 [D:\thundergbm\cub\cub/util_allocator.cuh, 657]: driver shutting down

    enhancement 
    opened by talhachattha 6
  • Make error

    I am getting this error after executing make -j:

    /thundergbm/src/thundergbm/hist_cut.cu(142): error: a reference of type "SyncArray<float> &" (not const-qualified) cannot be initialized with a value of type "SyncArray<float_type>"
    

    I am using the code of the support_cuda11 branch (as my CUDA version is 11).

    opened by mahi045 5
  • FATAL [default] Check failed: [error == cudaSuccess] out of memory

    My system is:

    (_env) D:_env\project\series>nvidia-smi
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 442.19       Driver Version: 442.19       CUDA Version: 10.2     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GeForce RTX 2070   WDDM  | 00000000:01:00.0 Off |                  N/A |
    | N/A   46C    P8     7W /  N/A |    219MiB /  8192MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID   Type   Process name                             Usage      |
    |=============================================================================|
    |    0      7852   C+G    ...xperience\NVIDIA GeForce Experience.exe     N/A  |
    +-----------------------------------------------------------------------------+

    I am on Windows 10

    When trying to run the TGBMClassifier I get the following error:

    2020-02-22 20:13:15,901 INFO [default] #instances = 20289, #features = 40
    2020-02-22 20:13:16,503 INFO [default] convert csr to csc using gpu...
    2020-02-22 20:13:16,878 INFO [default] Converting csr to csc using time: 0.363863 s
    2020-02-22 20:13:16,878 INFO [default] Fast getting cut points...
    2020-02-22 20:13:16,878 FATAL [default] Check failed: [error == cudaSuccess] out of memory
    2020-02-22 20:13:16,894 WARNING [default] Aborting application. Reason: Fatal log at [D:_env\project\series\thundergbm\src\thundergbm\syncmem.cpp:107]

    Any suggestions on how to fix this?

    opened by Kagaratsch 5
  • Request: Run quietly (suppress all prints) with sci-kit interface

    Hi, I am implementing ThunderGBM in an AutoML framework with the goal of optimising for speed. It would be great if we had the option to run the .fit() and .predict() methods with no printing to stdout.

    I tried searching in the Python files, but it became apparent that the option is not there. I did see a reply in another issue saying that this is currently not possible. If you tell me where/how to do this, I could give it a try.

    Thanks

    enhancement 
    opened by beevabeeva 5
  • Problems with random forest classifier when using more and deeper learners

    Hi, I'm new to thunderGBM,

    I just ran the example of random forest:

    from thundergbm import TGBMClassifier
    from sklearn.datasets import load_digits
    from sklearn.metrics import accuracy_score

    x, y = load_digits(return_X_y=True)
    clf = TGBMClassifier(bagging=1, depth=12, n_trees=1, n_parallel_trees=100)
    clf.fit(x, y)
    y_pred = clf.predict(x)
    accuracy = accuracy_score(y, y_pred)
    print(accuracy)

    and several problems have arisen:

    First, I watched the verbose output and found that when "n_trees=1" is set, the classifier uses only 1 learner, no matter how I set the value of "n_parallel_trees", contrary to the claim in issue #42.

    Furthermore, I tried more and deeper learners: when "depth" is more than 20, or "n_trees" more than 70, the program may well crash. When I use a Python file, it turns out to be a Segmentation fault (core dumped); when I use a Jupyter notebook, the kernel dies. When I tried a large dataset with millions of samples, it crashed even when converting csr to csc. Since I'm using a workstation with a 32-core CPU, 128 GB memory, and an RTX 3090 GPU, I don't believe this is a hardware issue. Is ThunderGBM only capable of training really small forests on small datasets? That's unacceptable. I'm confused and hope to see the power of ThunderGBM.

    opened by OswaldHongyu 0
  • How to visualize the ThunderGBM?

    Is there a way to see what is under the hood of the random forest? Is there a way to get the parameters of a trained model? Or maybe you can get the individual decision trees as nested if-statements?

    Thanks a lot!

    opened by DarkoAlexander 3
  • AttributeError: 'TGBMClassifier' object has no attribute 'save'

    Are there functions in thundergbm that can save a trained model to file and reload it from the file later?

    There is a function "clf.predict_proba()" in sklearn that calculates probabilities of the different labels. Is there a corresponding function in thundergbm?

    There is a function in sklearn that calculates the importance of the different features, called "clf.feature_importances_". Is there a similar function in thundergbm?

    opened by DonSeger 1
  • Debug Assertion Failed

    I have installed thundergbm with CUDA 11 support, but when I try to run the thundergbm classifier inside a nested loop in order to do a hyperparameter search, I get the following error:

    (image: "Debug Assertion Failed")

    Python version: 3.7.8, Microsoft Visual Studio 2019: v16.11.8, CUDA: 11.5, CMake: 3.22.1

    Below is the code that I run in a Python Jupyter Notebook:

    for n in range(100, 700, 100):
        for d in range(3, 16, 1):
            for c in [0.4, 0.5, 0.6, 0.7, 0.8]:
                clf = TGBMClassifier(depth=d, n_trees=1, n_parallel_trees=n, bagging=1,
                                     column_sampling_rate=c, objective="binary:logistic")
                clf.fit(X_train, y_train)

    What is the reason?

    opened by DonSeger 0
  • the Random Forest classifies everything to be 1

    I am new to thundergbm, and just trying to get a simple Random Forest classifier going. But the classifier classifies every single sample as 1. Not one single case out of 188244 samples is classified as 0. No other classifier behaves like this. I also tried different numbers of trees, depths, etc., but it still classifies everything as 1. Is there something wrong with the following code?

    from thundergbm import TGBMClassifier

    clf = TGBMClassifier(depth=6, n_trees=1, n_parallel_trees=100, bagging=1)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    # y_pred classifies everything in the test set (X_test) as one

    opened by AlanSpencer2 2