ThunderGBM: Fast GBDTs and Random Forests on GPUs


Documentation | Installation | Parameters | Python (scikit-learn) interface

What's new?

ThunderGBM won the 2019 Best Paper Award from IEEE Transactions on Parallel and Distributed Systems, presented by the IEEE Computer Society Publications Board (1 out of 987 submissions), for the work: Zeyi Wen, Jiashuai Shi, Bingsheng He, Jian Chen, Kotagiri Ramamohanarao, and Qinbin Li, "Exploiting GPUs for Efficient Gradient Boosting Decision Tree Training", IEEE Transactions on Parallel and Distributed Systems, vol. 30, no. 12, 2019, pp. 2706-2717. See more details: Best Paper Award Winners from IEEE, News from NUS School of Computing.

Overview

The mission of ThunderGBM is to help users easily and efficiently apply GBDTs and Random Forests to solve problems. ThunderGBM exploits GPUs to achieve high efficiency. Key features of ThunderGBM are as follows.

  • Often 10x faster than other libraries.
  • Supports a Python (scikit-learn) interface.
  • Supported operating systems: Linux and Windows.
  • Supports classification, regression and ranking.

Why accelerate GBDTs and Random Forests? A survey conducted by Kaggle in 2017 shows that 50%, 46% and 24% of data mining and machine learning practitioners are users of Decision Trees, Random Forests and GBMs, respectively.

GBDTs and Random Forests are often used for creating state-of-the-art data science solutions. Please check out the XGBoost website for winning solutions and use cases of GBDTs and Random Forests.

Getting Started

Prerequisites

  • cmake 2.8 or above
  • For Linux: gcc 4.8 or above, CUDA 9 or above
  • For Windows: Visual C++, CUDA 10

Quick Install

  • For Linux with CUDA 9.0:

    • pip install thundergbm
  • For Windows (64-bit):

    • Download the Python wheel file (for Python 3 or above).

    • Install the Python wheel file:

      • pip install thundergbm-0.3.4-py3-none-win_amd64.whl
  • Currently only Python 3 is supported.

  • After you have installed thundergbm, you can import and use the classifier (similarly for the regressor) by:

from thundergbm import TGBMClassifier
from sklearn.datasets import load_digits

x, y = load_digits(return_X_y=True)  # example data; any numeric feature matrix and labels work
clf = TGBMClassifier()
clf.fit(x, y)
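
The regressor follows the same pattern. Below is a minimal sketch with default parameters; load_diabetes is only an illustrative dataset choice:

from thundergbm import TGBMRegressor
from sklearn.datasets import load_diabetes

x, y = load_diabetes(return_X_y=True)  # illustrative regression data
reg = TGBMRegressor()  # the regression counterpart of TGBMClassifier
reg.fit(x, y)
y_pred = reg.predict(x)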

Build from source

git clone https://github.com/zeyiwen/thundergbm.git
cd thundergbm
#under the directory of thundergbm
git submodule init cub && git submodule update

Build on Linux (build instructions for Windows)

#under the directory of thundergbm
mkdir build && cd build && cmake .. && make -j

Quick Start

./bin/thundergbm-train ../dataset/machine.conf
./bin/thundergbm-predict ../dataset/machine.conf

You should see RMSE = 0.489562 after a successful run.

macOS is not supported, as Apple has suspended support for some NVIDIA GPUs. We will consider supporting macOS based on feedback from our user community. Please stay tuned.

How to cite ThunderGBM

If you use ThunderGBM in your paper, please cite our work (TPDS and JMLR).

@ARTICLE{8727750,
  author={Z. {Wen} and J. {Shi} and B. {He} and J. {Chen} and K. {Ramamohanarao} and Q. {Li}},
  journal={IEEE Transactions on Parallel and Distributed Systems}, 
  title={Exploiting GPUs for Efficient Gradient Boosting Decision Tree Training}, 
  year={2019},
  volume={30},
  number={12},
  pages={2706-2717},
  }

@article{wenthundergbm19,
 author = {Wen, Zeyi and Shi, Jiashuai and He, Bingsheng and Li, Qinbin and Chen, Jian},
 title = {{ThunderGBM}: Fast {GBDTs} and Random Forests on {GPUs}},
 journal = {Journal of Machine Learning Research},
 volume={21},
 year = {2020}
}

Related papers

  • Zeyi Wen, Jiashuai Shi, Bingsheng He, Jian Chen, Kotagiri Ramamohanarao and Qinbin Li. Exploiting GPUs for Efficient Gradient Boosting Decision Tree Training. IEEE Transactions on Parallel and Distributed Systems (TPDS), accepted in May 2019. pdf

  • Zeyi Wen, Hanfeng Liu, Jiashuai Shi, Qinbin Li, Bingsheng He, Jian Chen. ThunderGBM: Fast GBDTs and Random Forests on GPUs. Featured at JMLR MLOSS (Machine Learning Open Source Software). Year: 2020, Volume: 21, Issue: 108, Pages: 1−5. pdf

  • Zeyi Wen, Bingsheng He, Kotagiri Ramamohanarao, Shengliang Lu, and Jiashuai Shi. Efficient Gradient Boosted Decision Tree Training on GPUs. The 32nd IEEE International Parallel and Distributed Processing Symposium (IPDPS), pages 234-243, 2018. pdf

Key members of ThunderGBM

  • Zeyi Wen, NUS (now at The University of Western Australia)
  • Hanfeng Liu, GDUFS (a visiting student at NUS)
  • Jiashuai Shi, SCUT (a visiting student at NUS)
  • Qinbin Li, NUS
  • Advisor: Bingsheng He, NUS
  • Collaborators: Jian Chen (SCUT)

Other information

  • This work is supported by an MoE AcRF Tier 2 grant (MOE2017-T2-1-122) and an NUS startup grant in Singapore.

Comments
  • build boost library error(win10)

    Python 2.7.15

    boost-install.generate-cmake-config- D:\boost\boost_1_70_0\lib\cmake\boost_coroutine-1.70.0\boost_coroutine-config.cmake
    boost-install.generate-cmake-config-version- D:\boost\boost_1_70_0\lib\cmake\boost_coroutine-1.70.0\boost_coroutine-config-version.cmake
    boost-install.generate-cmake-variant- D:\boost\boost_1_70_0\lib\cmake\boost_coroutine-1.70.0\libboost_coroutine-variant-vc140-mt-gd-x32-1_70-shared.cmake
    boost-install.generate-cmake-config- D:\boost\boost_1_70_0\lib\cmake\boost_thread-1.70.0\boost_thread-config.cmake
    boost-install.generate-cmake-config-version- D:\boost\boost_1_70_0\lib\cmake\boost_thread-1.70.0\boost_thread-config-version.cmake
    boost-install.generate-cmake-variant- D:\boost\boost_1_70_0\lib\cmake\boost_thread-1.70.0\libboost_thread-variant-vc140-mt-gd-x32-1_70-shared.cmake
    boost-install.generate-cmake-config- D:\boost\boost_1_70_0\lib\cmake\boost_date_time-1.70.0\boost_date_time-config.cmake
    boost-install.generate-cmake-config-version- D:\boost\boost_1_70_0\lib\cmake\boost_date_time-1.70.0\boost_date_time-config-version.cmake
    boost-install.generate-cmake-variant- D:\boost\boost_1_70_0\lib\cmake\boost_date_time-1.70.0\libboost_date_time-variant-vc140-mt-gd-x32-1_70-shared.cmake
    boost-install.generate-cmake-config- D:\boost\boost_1_70_0\lib\cmake\boost_exception-1.70.0\boost_exception-config.cmake
    boost-install.generate-cmake-config-version- D:\boost\boost_1_70_0\lib\cmake\boost_exception-1.70.0\boost_exception-config-version.cmake
    ...failed updating 2949 targets... ...skipped 16 targets... ...updated 12449 targets...

    opened by lvpinrui 22
  • There was an error using thundergbm-predict file

    My environment is: Windows 10, Visual Studio 2017 Community, CMake 3.14.0-rc3, CUDA 10.1.105. Using test_dataset.txt from the library succeeds, but using my own .txt file fails. My thundergbm-predict output: [screenshot]

    When building thundergbm-predict: [screenshot]

    opened by lvpinrui 13
  • Make error

    Hi, I am getting the following error when calling make -j

    [  3%] Building CXX object src/thundergbm/CMakeFiles/thundergbm.dir/objective/ranking_obj.cpp.o
    /tmp/thundergbm/src/thundergbm/objective/ranking_obj.cpp: In member function ‘virtual void LambdaRank::get_gradient(const SyncArray<float>&, const SyncArray<float>&, SyncArray<GHPair>&)’:
    /tmp/thundergbm/src/thundergbm/objective/ranking_obj.cpp:50:14: error: ‘mt19937’ is not a member of ‘std’
             std::mt19937 gen(std::rand());
                  ^~~~~~~
    /tmp/thundergbm/src/thundergbm/objective/ranking_obj.cpp:50:14: note: suggested alternative:
    In file included from /usr/include/c++/7/tr1/random:47:0,
                     from /usr/include/c++/7/parallel/random_number.h:36,
                     from /usr/include/c++/7/parallel/partition.h:38,
                     from /usr/include/c++/7/parallel/quicksort.h:36,
                     from /usr/include/c++/7/parallel/sort.h:48,
                     from /usr/include/c++/7/parallel/algo.h:45,
                     from /usr/include/c++/7/parallel/algorithm:37,
                     from /tmp/thundergbm/src/thundergbm/objective/ranking_obj.cpp:6:
    /usr/include/c++/7/tr1/random.h:701:7: note:   ‘std::tr1::mt19937’
         > mt19937;
           ^~~~~~~
    /tmp/thundergbm/src/thundergbm/objective/ranking_obj.cpp:61:33: error: ‘gen’ was not declared in this scope
                         int m = dis(gen);
                                     ^~~
    /tmp/thundergbm/src/thundergbm/objective/ranking_obj.cpp:61:33: note: suggested alternative: ‘len’
                         int m = dis(gen);
                                     ^~~
                                     len
    src/thundergbm/CMakeFiles/thundergbm.dir/build.make:11442: recipe for target 'src/thundergbm/CMakeFiles/thundergbm.dir/objective/ranking_obj.cpp.o' failed
    make[2]: *** [src/thundergbm/CMakeFiles/thundergbm.dir/objective/ranking_obj.cpp.o] Error 1
    CMakeFiles/Makefile2:131: recipe for target 'src/thundergbm/CMakeFiles/thundergbm.dir/all' failed
    make[1]: *** [src/thundergbm/CMakeFiles/thundergbm.dir/all] Error 2
    Makefile:83: recipe for target 'all' failed
    make: *** [all] Error 2
    
    
    opened by psinger 13
  • build thundergbm

    src/thundergbm/CMakeFiles/thundergbm.dir/build.make:147: recipe for target 'src/thundergbm/CMakeFiles/thundergbm.dir/thundergbm_generated_tree.cu.o' failed ...

    I have downloaded the latest version of ThunderGBM, but it still fails to build. Can you tell me how to fix it? Thank you very much!

    opened by mmobai 10
  • "File Modification Detected" and errors in Visual Studio

    I downloaded the branch with CUDA 11 support for Windows 10 and followed the instructions. When building the thundergbm.sln file in Visual Studio, I get a message that says "File Modification Detected". When choosing "Reload All" or any of the other options, the build ends up with a bunch of errors. Please see the screenshots below for more details.

    These are the steps I took before getting to the building process:

    git clone -b support_cuda11 --single-branch https://github.com/zeyiwen/thundergbm.git
    cd thundergbm
    git submodule init cub && git submodule update
    mkdir build
    cd build
    cmake .. -DCMAKE_WINDOWS_EXPORT_ALL_SYMBOLS=TRUE -DBUILD_SHARED_LIBS=TRUE -G "Visual Studio 16 2019"

    The last step is to open the file thundergbm.sln and click "Build Solution" in the "Build" menu in Visual Studio.

    Here is the message window: [screenshot: File Modification Detected]

    Below are the errors generated by the build process, in the files xutility, xmemory, atomic and LINK: [screenshot]

    Any idea what might be causing this?

    opened by AlanSpencer2 7
  • Bug: Memory Leak in sci-kit interface

    Hello,

    I'm running TGBM on a Windows 10 machine with CUDA 10. In my use case I create a lot of different GBDTs, which go out of scope after some time. The problem is that the GPU memory stays allocated.

    My expectation was that when Python's garbage collector deletes the instance, the memory gets freed. However, even when I explicitly delete the instance, the memory stays allocated.

    I think that to resolve this issue, the model_free function in "scikit_tgbm.cpp" must be extended to free the allocated memory. Currently I don't know how to do this, but maybe if I find time I will take a deeper look at memory management with CUDA.

    I found a very ugly workaround by changing some code in "thundergbm.py": I extended the __del__ function of TGBMModel to reload the whole DLL. This isn't a good solution, but it's okayish as a workaround if somebody really needs it.
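
    A minimal sketch of that kind of forced-unload workaround (Windows-only; it assumes access to the ctypes.CDLL object that thundergbm.py loads, and the helper name force_unload is hypothetical):

    import ctypes
    from ctypes import wintypes

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    kernel32.FreeLibrary.argtypes = [wintypes.HMODULE]
    kernel32.FreeLibrary.restype = wintypes.BOOL

    def force_unload(dll):
        # Force Windows to unload the DLL so the CUDA driver can release the
        # GPU memory the library still holds. Use with care: any later call
        # through the old CDLL object will crash.
        kernel32.FreeLibrary(dll._handle)

    # After unloading, the DLL can be reloaded with ctypes.CDLL(path) for the
    # next model, which is essentially the reload trick described above.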

    opened by Tripton 7
  • Building from source on ubuntu18.04 with CUDA11.0

    Hi, I am trying to build thundergbm on Ubuntu 18.04 with CUDA 11.0 using the instructions here. While building the binary, I get a string of warnings about C++14 (from CUB and Thrust), but the build proceeds until it hits this error:

    thundergbm/src/thundergbm/sparse_columns.cu(52): error: identifier "cusparseScsr2csc" is undefined
    1 error detected in the compilation of "thundergbm/src/thundergbm/sparse_columns.cu".
    CMake Error at thundergbm_generated_sparse_columns.cu.o.Release.cmake:279 (message):
    Error generating file thundergbm/build/src/thundergbm/CMakeFiles/thundergbm.dir//./thundergbm_generated_sparse_columns.cu.o
    src/thundergbm/CMakeFiles/thundergbm.dir/build.make:9146: recipe for target 'src/thundergbm/CMakeFiles/thundergbm.dir/thundergbm_generated_sparse_columns.cu.o' failed
    make[2]: *** [src/thundergbm/CMakeFiles/thundergbm.dir/thundergbm_generated_sparse_columns.cu.o] Error 1
    CMakeFiles/Makefile2:126: recipe for target 'src/thundergbm/CMakeFiles/thundergbm.dir/all' failed
    make[1]: *** [src/thundergbm/CMakeFiles/thundergbm.dir/all] Error 2
    Makefile:83: recipe for target 'all' failed
    make: *** [all] Error 2

    The line that is seemingly causing the problem (sparse_columns.cu(52)) is

    cusparseScsr2csc(handle, dataset.n_instances(), n_column, nnz, val.device_data(), row_ptr.device_data(), col_idx.device_data(), csc_val.device_data(), csc_row_idx.device_data(), csc_col_ptr.device_data(), CUSPARSE_ACTION_NUMERIC, CUSPARSE_INDEX_BASE_ZERO);

    Any suggestions on how to get around this?

    Thanks.

    call for contribution 
    opened by sensharma 6
  • CUDA Error 29: driver shutting down

    I get the following error at the end of my training for TGBMRegressor with default parameters; this doesn't allow the Python process to release the GPU memory, and it stays allocated.

    2019-06-27 16:15:43,952 INFO [default] #instances = 1280, #features = 729
    2019-06-27 16:15:44,031 INFO [default] copy csr matrix to GPU
    2019-06-27 16:15:44,220 INFO [default] converting csr matrix to csc matrix
    2019-06-27 16:15:44,483 INFO [default] Getting cut points...
    2019-06-27 16:15:44,485 INFO [default] ################>>>: 1
    2019-06-27 16:15:44,491 INFO [default] ----------------->>> Last LOOP: 0.0072718
    2019-06-27 16:15:44,494 INFO [default] TOTAL CP:152949
    2019-06-27 16:15:45,617 INFO [default] RMSE = 3.72949
    2019-06-27 16:15:45,645 INFO [default] RMSE = 3.3029
    2019-06-27 16:15:45,672 INFO [default] RMSE = 2.93213
    2019-06-27 16:15:45,673 INFO [default] training time = 1.17665
    2019-06-27 16:15:45,711 INFO [default] #instances = 549, #features = 729
    CUDA error 29 [D:\thundergbm\src\thundergbm\syncmem.cpp, 576]: driver shutting down
    CUDA error 29 [D:\thundergbm\cub\cub/util_allocator.cuh, 657]: driver shutting down

    enhancement 
    opened by talhachattha 6
  • Make error

    I am getting this error after executing make -j:

    /thundergbm/src/thundergbm/hist_cut.cu(142): error: a reference of type "SyncArray<float> &" (not const-qualified) cannot be initialized with a value of type "SyncArray<float_type>"
    

    I am using the code of the support_cuda11 branch (as my CUDA version is 11).

    opened by mahi045 5
  • FATAL [default] Check failed: [error == cudaSuccess]  out of memory

    My system is:

    (_env) D:\_env\project\series>nvidia-smi
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 442.19       Driver Version: 442.19       CUDA Version: 10.2     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GeForce RTX 2070   WDDM  | 00000000:01:00.0 Off |                  N/A |
    | N/A   46C    P8     7W /  N/A |    219MiB /  8192MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+

    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID   Type   Process name                             Usage      |
    |=============================================================================|
    |    0      7852    C+G   ...xperience\NVIDIA GeForce Experience.exe      N/A |
    +-----------------------------------------------------------------------------+

    I am on Windows 10

    When trying to run the TGBMClassifier I get the following error:

    2020-02-22 20:13:15,901 INFO [default] #instances = 20289, #features = 40
    2020-02-22 20:13:16,503 INFO [default] convert csr to csc using gpu...
    2020-02-22 20:13:16,878 INFO [default] Converting csr to csc using time: 0.363863 s
    2020-02-22 20:13:16,878 INFO [default] Fast getting cut points...
    2020-02-22 20:13:16,878 FATAL [default] Check failed: [error == cudaSuccess] out of memory
    2020-02-22 20:13:16,894 WARNING [default] Aborting application. Reason: Fatal log at [D:\_env\project\series\thundergbm\src\thundergbm\syncmem.cpp:107]

    Any suggestions on how to fix this?

    opened by Kagaratsch 5
  • Request: Run quietly (suppress all prints) with the scikit-learn interface

    Hi, I am implementing ThunderGBM in an AutoML framework with the goal of optimising for speed. It would be great if we had the option to run the .fit() and .predict() methods with no printing to stdout.

    I tried searching in the Python files, but it became apparent that the option is not there. I did see a reply in another issue saying that this is currently not possible. If you tell me where/how to do this, I could give it a try.

    Thanks
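
    Until such an option exists, a generic Python-level workaround (not a ThunderGBM feature; quiet_stdout is a hypothetical helper) is to redirect the process-level stdout file descriptor around the call, which also silences prints coming from native C/CUDA code:

    import contextlib
    import os

    @contextlib.contextmanager
    def quiet_stdout():
        # Point file descriptor 1 at os.devnull for the duration of the block,
        # then restore it; this silences native (C/CUDA) output too.
        devnull = os.open(os.devnull, os.O_WRONLY)
        saved = os.dup(1)
        try:
            os.dup2(devnull, 1)
            yield
        finally:
            os.dup2(saved, 1)
            os.close(saved)
            os.close(devnull)

    # usage:
    # with quiet_stdout():
    #     clf.fit(x, y)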

    enhancement 
    opened by beevabeeva 5
  • Problems with random forest classifier when using more and deeper leaners

    Hi, I'm new to ThunderGBM. I just ran the random forest example:

    from thundergbm import TGBMClassifier
    from sklearn.datasets import load_digits
    from sklearn.metrics import accuracy_score

    x, y = load_digits(return_X_y=True)
    clf = TGBMClassifier(bagging=1, depth=12, n_trees=1, n_parallel_trees=100)
    clf.fit(x, y)
    y_pred = clf.predict(x)
    accuracy = accuracy_score(y, y_pred)
    print(accuracy)

    and several problems have arisen:

    First, I watched the verbose output and found that, when "n_trees=1" is set, the classifier only uses 1 learner, no matter how I set the value of "n_parallel_trees", contrary to the claim in issue #42.

    Furthermore, I tried more and deeper learners: when "depth" is more than 20, or "n_trees" more than 70, the program may well crash. When I use a Python file, it turns out to be a segmentation fault (core dumped); when I use a Jupyter notebook, the kernel dies. When I try a large dataset with millions of samples, it crashes even when converting csr to csc. Since I'm using a workstation with a 32-core CPU, 128 GB of memory, and an RTX 3090 GPU, I don't believe this is a hardware issue. Is ThunderGBM only capable of training really small forests on small datasets? That would be unacceptable. I'm confused and hope to see the power of ThunderGBM.

    opened by OswaldHongyu 0
  • How to visualize the ThunderGBM?

    Is there a way to see what is under the hood of the random forest? Is there a way to get the parameters of a trained model? Or maybe you can get the individual decision trees as nested if-statements?

    Thanks a lot!

    opened by DarkoAlexander 3
  • AttributeError: 'TGBMClassifier' object has no attribute 'save'

    Are there functions in thundergbm that can save a trained model to file and reload it from the file later?

    There is a function "clf.predict_proba()" in sklearn that calculates probabilities of the different labels. Is there a corresponding function in thundergbm?

    There is a function in sklearn that calculates the importance of the different features, called "clf.feature_importances_". Is there a similar function in thundergbm?

    opened by DonSeger 1
  • Debug Assertion Failed

    I have installed thundergbm with CUDA 11 support. But when I try to run the thundergbm classifier inside a nested loop in order to do a hyperparameter search, I get the following error:

    [screenshot: Debug Assertion Failed]

    Python version: 3.7.8; Microsoft Visual Studio 2019: v16.11.8; CUDA: 11.5; CMake: 3.22.1

    Below is the code that I run in a Python Jupyter Notebook:

    for n in range(100, 700, 100):
        for d in range(3, 16, 1):
            for c in [0.4, 0.5, 0.6, 0.7, 0.8]:
                clf = TGBMClassifier(depth=d, n_trees=1, n_parallel_trees=n, bagging=1,
                                     column_sampling_rate=c, objective="binary:logistic")
                clf.fit(X_train, y_train)

    What is the reason?

    opened by DonSeger 0
  • the Random Forest classifies everything to be 1

    I am new to thundergbm, and just trying to get a simple Random Forest classifier going. But the classifier classifies every single sample as 1. Not one single case out of 188244 samples is classified as 0. No other classifier behaves like this. I also tried different numbers of trees, depths, etc., but it still classifies everything as 1. Is there something wrong with the following code?

    from thundergbm import TGBMClassifier

    clf = TGBMClassifier(depth=6, n_trees=1, n_parallel_trees=100, bagging=1)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)

    # y_pred classifies everything in the test set (X_test) as one
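
    A generic first sanity check (not specific to ThunderGBM) is to inspect the label balance of the training set, since heavy class imbalance can produce all-1 predictions:

    import numpy as np

    counts = np.bincount(np.asarray(y_train, dtype=int))  # samples per class
    print(dict(enumerate(counts)))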

    opened by AlanSpencer2 2