
Hebel

GPU-Accelerated Deep Learning Library in Python

Hebel is a library for deep learning with neural networks in Python using GPU acceleration with CUDA through PyCUDA. It implements the most important types of neural network models and offers a variety of different activation functions and training methods such as momentum, Nesterov momentum, dropout, and early stopping.
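
Training methods like early stopping are configured through Hebel's YAML files rather than hand-coded, but the idea is standard. As a rough plain-numpy illustration (not Hebel's actual implementation; `step` and `validation_error` are hypothetical callables):

    import numpy as np

    def train_with_early_stopping(step, validation_error, max_epochs=100, patience=10):
        """Stop training once validation error fails to improve for `patience` epochs."""
        best_error, best_epoch = np.inf, 0
        for epoch in range(max_epochs):
            step()                      # one epoch of SGD updates
            error = validation_error()  # evaluate on held-out data
            if error < best_error:
                best_error, best_epoch = error, epoch
            elif epoch - best_epoch >= patience:
                break                   # no improvement for `patience` epochs
        return best_error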

I no longer actively develop Hebel. If you are looking for a deep learning framework in Python, I now recommend Chainer.

Models

Right now, Hebel implements feed-forward neural networks for classification and regression on one or multiple tasks. Other models, such as autoencoders, convolutional neural networks, and restricted Boltzmann machines, are planned for the future.

Hebel implements dropout as well as L1 and L2 weight decay for regularization.
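
In plain numpy terms, these regularizers amount to the following (a sketch of the general technique, not Hebel's GPU kernels):

    import numpy as np

    rng = np.random.default_rng(0)

    def dropout_forward(activations, drop_probability=0.5):
        # Zero out units at random during training; at prediction time,
        # implementations instead scale activations by (1 - drop_probability).
        mask = rng.random(activations.shape) > drop_probability
        return activations * mask

    def regularized_gradient(grad_w, w, l1_penalty=0.0, l2_penalty=0.0):
        # L1 and L2 weight decay add penalty terms to the weight gradient.
        return grad_w + l1_penalty * np.sign(w) + l2_penalty * w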

Optimization

Hebel implements stochastic gradient descent (SGD) with regular and Nesterov momentum.
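
The difference between the two momentum schemes is where the gradient is evaluated. A minimal numpy sketch (`grad` is a hypothetical gradient function; this is not Hebel's implementation):

    import numpy as np

    def sgd_momentum_step(w, v, grad, lr=0.01, mu=0.9):
        # Classical momentum: gradient evaluated at the current parameters.
        v = mu * v - lr * grad(w)
        return w + v, v

    def sgd_nesterov_step(w, v, grad, lr=0.01, mu=0.9):
        # Nesterov momentum: gradient evaluated at the look-ahead point w + mu * v.
        v = mu * v - lr * grad(w + mu * v)
        return w + v, v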

Compatibility

Currently, Hebel runs on Linux and Windows, and probably on Mac OS X (untested).

Dependencies

  • PyCUDA
  • numpy
  • PyYAML
  • skdata (only for MNIST example)

Installation

Hebel is on PyPI, so you can install it with

pip install hebel

Getting started

Study the YAML configuration files in examples/ and run

python train_model.py examples/mnist_neural_net_shallow.yml

The script will create a directory in examples/mnist where the models and logs are saved.

Read the Getting started guide at hebel.readthedocs.org/en/latest/getting_started.html for more information.
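
To get predictions out of a trained model programmatically, the pattern that has worked for users (see the "Small documentation enhancement request" issue below) is to feed a PyCUDA GPUArray directly to the model's feed_forward method. A rough sketch, assuming `model` is a trained Hebel model and `X` a numpy array of inputs:

    import numpy as np
    from pycuda import gpuarray

    X_gpu = gpuarray.to_gpu(X.astype(np.float32))   # copy inputs to the GPU
    y_pred = model.feed_forward(X_gpu, return_cache=False, prediction=True).get()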

Documentation

hebel.readthedocs.org

Contact

Maintained by Hannes Bretschneider ([email protected]). If you are using Hebel, please let me know whether you find it useful, and file a GitHub issue if you find any bugs or have feature requests.

Citing

http://dx.doi.org/10.5281/zenodo.10050

If you make use of Hebel in your research, please cite it. The BibTeX reference is

@article{Bretschneider:10050,
  author        = "Hannes Bretschneider",
  title         = "{Hebel - GPU-Accelerated Deep Learning Library in Python}",
  month         = "May",
  year          = "2014",
  doi           = "10.5281/zenodo.10050",
  url           = "https://zenodo.org/record/10050",
}

What's with the name?

Hebel is the German word for lever, one of the oldest tools used by humans. As Archimedes said: "Give me a lever long enough and a fulcrum on which to place it, and I shall move the world."

Issues
  • Contributing PyCUDA routines

    Heya

    I stumbled across this project looking for some PyCUDA routines that operate on matrices per-row or per-column. It seems you have a bunch of handy routines for this, which is awesome, e.g. row-wise maximum, add_vec_to_mat etc.

    Would you be willing to contribute them back to PyCUDA? A lot of these routines seem like they'd definitely be useful more widely. And perhaps offering the contribution might give the PyCUDA guys some inspiration, or a kick in the arse, to create a more general partial-reductions API (like numpy's axis=0 arguments) and broadcasting behaviour for element-wise operations on GPUArrays? (I would attempt this myself, but my CUDA-fu is weak.)

    Just a thought anyway. I would suggest it to them myself, but the licensing is different (GPL vs. MIT).

    Cheers!

    opened by mjwillson 6
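
    For readers unfamiliar with the routines discussed in this issue: PyCUDA's built-in ElementwiseKernel can already express a simple row-wise broadcast like add_vec_to_mat, though Hebel ships hand-written kernels for these. A sketch under the assumption of a row-major float32 matrix (an illustration, not Hebel's implementation):

    import numpy as np
    import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
    from pycuda import gpuarray
    from pycuda.elementwise import ElementwiseKernel

    # Add a length-n vector to every row of an m-by-n matrix, in place.
    add_vec_to_rows = ElementwiseKernel(
        "float *mat, float *vec, int n",
        "mat[i] += vec[i % n]",
        "add_vec_to_rows")

    mat = gpuarray.to_gpu(np.ones((4, 3), dtype=np.float32))
    vec = gpuarray.to_gpu(np.arange(3, dtype=np.float32))
    add_vec_to_rows(mat, vec, np.int32(3))
    print(mat.get())  # each row is now [1, 2, 3]
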
  • Global name 'hidden_inputs' is not defined

    When running optimizer.run(100), an error occurs: global name 'hidden_inputs' is not defined, at line 323 of ./hebel/hebel/models/neural_net.py.

    Where should the global variable 'hidden_inputs' be defined? Thanks!

    opened by Robert0812 3
  • Hebel is in PyPI but it's not in PyPI ;)

    I guess you did: `python setup.py sdist register`

    opened by mnowotka 3
  • [WIP][HEP3] Implement convolution for DNA sequence

    I am merging my code for training conv-nets from DNA sequence into Hebel. This should be done by the end of January 2015. Please follow this issue if you are interested in using Hebel for learning from DNA sequence or would like to test it.

    Hebel Enhancement Proposal 
    opened by hannes-brt 3
  • Compiling issues with MacOSX

    I am trying to compile on Mac OS X Yosemite and it seems Hebel is not running. I installed PyCUDA and the other libraries needed, but I am stuck at this error.

    $ python hebel_test.py
    Traceback (most recent call last):
      File "hebel_test.py", line 18, in <module>
        hebel.init(0)
      File "/Users/prabhubalakrishnan/Desktop/hebel/hebel/__init__.py", line 131, in init
        from pycuda import gpuarray, driver, curandom
      File "/Library/Python/2.7/site-packages/pycuda-2014.1-py2.7-macosx-10.10-intel.egg/pycuda/gpuarray.py", line 3, in <module>
        import pycuda.elementwise as elementwise
      File "/Library/Python/2.7/site-packages/pycuda-2014.1-py2.7-macosx-10.10-intel.egg/pycuda/elementwise.py", line 34, in <module>
        from pytools import memoize_method
      File "/Library/Python/2.7/site-packages/pytools-2014.3.5-py2.7.egg/pytools/__init__.py", line 5, in <module>
        from six.moves import range, zip, intern, input
    ImportError: cannot import name intern

    How to fix?

    opened by olddocks 3
  • AttributeError: python: undefined symbol: cuPointerGetAttribute

    $ echo $LD_LIBRARY_PATH
    /usr/local/cuda:/usr/local/cuda/bin:/usr/local/cuda/lib64:/home/ubgpu/torch/install/lib:/home/ubgpu/torch/install/lib
    $ python train_model.py examples/mnist_neural_net_shallow.yml
    Traceback (most recent call last):
      File "train_model.py", line 39, in <module>
        run_from_config(yaml_src)
      File "/home/ubgpu/github/hebel/hebel/config.py", line 41, in run_from_config
        config = load(yaml_src)
      File "/home/ubgpu/github/hebel/hebel/config.py", line 92, in load
        proxy_graph = yaml.load(string, **kwargs)
      File "/usr/local/lib/python2.7/dist-packages/yaml/__init__.py", line 71, in load
        return loader.get_single_data()
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 39, in get_single_data
        return self.construct_document(node)
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 48, in construct_document
        for dummy in generator:
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 398, in construct_yaml_map
        value = self.construct_mapping(node)
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 208, in construct_mapping
        return BaseConstructor.construct_mapping(self, node, deep=deep)
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 133, in construct_mapping
        value = self.construct_object(value_node, deep=deep)
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 90, in construct_object
        data = constructor(self, tag_suffix, node)
      File "/home/ubgpu/github/hebel/hebel/config.py", line 318, in multi_constructor
        mapping = loader.construct_mapping(node)
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 208, in construct_mapping
        return BaseConstructor.construct_mapping(self, node, deep=deep)
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 133, in construct_mapping
        value = self.construct_object(value_node, deep=deep)
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 90, in construct_object
        data = constructor(self, tag_suffix, node)
      File "/home/ubgpu/github/hebel/hebel/config.py", line 318, in multi_constructor
        mapping = loader.construct_mapping(node)
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 208, in construct_mapping
        return BaseConstructor.construct_mapping(self, node, deep=deep)
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 133, in construct_mapping
        value = self.construct_object(value_node, deep=deep)
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 90, in construct_object
        data = constructor(self, tag_suffix, node)
      File "/home/ubgpu/github/hebel/hebel/config.py", line 323, in multi_constructor
        classname = try_to_import(tag_suffix)
      File "/home/ubgpu/github/hebel/hebel/config.py", line 251, in try_to_import
        exec('import %s' % modulename)
      File "<string>", line 1, in <module>
      File "/home/ubgpu/github/hebel/hebel/layers/__init__.py", line 17, in <module>
        from .dummy_layer import DummyLayer
      File "/home/ubgpu/github/hebel/hebel/layers/dummy_layer.py", line 17, in <module>
        from .hidden_layer import HiddenLayer
      File "/home/ubgpu/github/hebel/hebel/layers/hidden_layer.py", line 25, in <module>
        from ..pycuda_ops import linalg
      File "/home/ubgpu/github/hebel/hebel/pycuda_ops/linalg.py", line 32, in <module>
        from . import cublas
      File "/home/ubgpu/github/hebel/hebel/pycuda_ops/cublas.py", line 47, in <module>
        import cuda
      File "/home/ubgpu/github/hebel/hebel/pycuda_ops/cuda.py", line 36, in <module>
        from cudadrv import *
      File "/home/ubgpu/github/hebel/hebel/pycuda_ops/cudadrv.py", line 233, in <module>
        _libcuda.cuPointerGetAttribute.restype = int
      File "/usr/lib/python2.7/ctypes/__init__.py", line 378, in __getattr__
        func = self.__getitem__(name)
      File "/usr/lib/python2.7/ctypes/__init__.py", line 383, in __getitem__
        func = self._FuncPtr((name_or_ordinal, self))
    AttributeError: python: undefined symbol: cuPointerGetAttribute

    opened by andyyuan78 2
  • Windows/Python 3 (Seems like a Python 3 error)

    Does this work on Python 3, Windows? I'm facing some installation issues.

    opened by nareshshah139 1
  • OSError: CUDA runtime library not found

    $ sudo pip install pyCUDA
    Requirement already satisfied (use --upgrade to upgrade): pyCUDA in /usr/local/lib/python2.7/dist-packages
    Requirement already satisfied (use --upgrade to upgrade): decorator>=3.2.0 in /usr/local/lib/python2.7/dist-packages (from pyCUDA)
    Requirement already satisfied (use --upgrade to upgrade): pytools>=2011.2 in /usr/local/lib/python2.7/dist-packages (from pyCUDA)
    Requirement already satisfied (use --upgrade to upgrade): pytest>=2 in /usr/local/lib/python2.7/dist-packages (from pyCUDA)
    Requirement already satisfied (use --upgrade to upgrade): appdirs>=1.4.0 in /usr/local/lib/python2.7/dist-packages (from pytools>=2011.2->pyCUDA)
    Requirement already satisfied (use --upgrade to upgrade): six in /usr/local/lib/python2.7/dist-packages (from pytools>=2011.2->pyCUDA)
    Requirement already satisfied (use --upgrade to upgrade): py>=1.4.25 in /usr/local/lib/python2.7/dist-packages (from pytest>=2->pyCUDA)
    $ python
    Python 2.7.6 (default, Mar 22 2014, 22:59:56)
    [GCC 4.8.2] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> quit()
    $ sudo pip install PyCUDA
    Requirement already satisfied (use --upgrade to upgrade): PyCUDA in /usr/local/lib/python2.7/dist-packages
    Requirement already satisfied (use --upgrade to upgrade): decorator>=3.2.0 in /usr/local/lib/python2.7/dist-packages (from PyCUDA)
    Requirement already satisfied (use --upgrade to upgrade): pytools>=2011.2 in /usr/local/lib/python2.7/dist-packages (from PyCUDA)
    Requirement already satisfied (use --upgrade to upgrade): pytest>=2 in /usr/local/lib/python2.7/dist-packages (from PyCUDA)
    Requirement already satisfied (use --upgrade to upgrade): appdirs>=1.4.0 in /usr/local/lib/python2.7/dist-packages (from pytools>=2011.2->PyCUDA)
    Requirement already satisfied (use --upgrade to upgrade): six in /usr/local/lib/python2.7/dist-packages (from pytools>=2011.2->PyCUDA)
    Requirement already satisfied (use --upgrade to upgrade): py>=1.4.25 in /usr/local/lib/python2.7/dist-packages (from pytest>=2->PyCUDA)
    $ echo $PYTHONPATH
    /usr/local/lib/python2.7/dist-packages
    $ python train_model.py examples/mnist_neural_net_shallow.yml
    Traceback (most recent call last):
      File "train_model.py", line 39, in <module>
        run_from_config(yaml_src)
      File "/home/ubgpu/github/hebel/hebel/config.py", line 41, in run_from_config
        config = load(yaml_src)
      File "/home/ubgpu/github/hebel/hebel/config.py", line 92, in load
        proxy_graph = yaml.load(string, **kwargs)
      File "/usr/local/lib/python2.7/dist-packages/yaml/__init__.py", line 71, in load
        return loader.get_single_data()
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 39, in get_single_data
        return self.construct_document(node)
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 48, in construct_document
        for dummy in generator:
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 398, in construct_yaml_map
        value = self.construct_mapping(node)
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 208, in construct_mapping
        return BaseConstructor.construct_mapping(self, node, deep=deep)
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 133, in construct_mapping
        value = self.construct_object(value_node, deep=deep)
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 90, in construct_object
        data = constructor(self, tag_suffix, node)
      File "/home/ubgpu/github/hebel/hebel/config.py", line 318, in multi_constructor
        mapping = loader.construct_mapping(node)
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 208, in construct_mapping
        return BaseConstructor.construct_mapping(self, node, deep=deep)
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 133, in construct_mapping
        value = self.construct_object(value_node, deep=deep)
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 90, in construct_object
        data = constructor(self, tag_suffix, node)
      File "/home/ubgpu/github/hebel/hebel/config.py", line 318, in multi_constructor
        mapping = loader.construct_mapping(node)
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 208, in construct_mapping
        return BaseConstructor.construct_mapping(self, node, deep=deep)
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 133, in construct_mapping
        value = self.construct_object(value_node, deep=deep)
      File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 90, in construct_object
        data = constructor(self, tag_suffix, node)
      File "/home/ubgpu/github/hebel/hebel/config.py", line 323, in multi_constructor
        classname = try_to_import(tag_suffix)
      File "/home/ubgpu/github/hebel/hebel/config.py", line 251, in try_to_import
        exec('import %s' % modulename)
      File "<string>", line 1, in <module>
      File "/home/ubgpu/github/hebel/hebel/layers/__init__.py", line 17, in <module>
        from .dummy_layer import DummyLayer
      File "/home/ubgpu/github/hebel/hebel/layers/dummy_layer.py", line 17, in <module>
        from .hidden_layer import HiddenLayer
      File "/home/ubgpu/github/hebel/hebel/layers/hidden_layer.py", line 25, in <module>
        from ..pycuda_ops import linalg
      File "/home/ubgpu/github/hebel/hebel/pycuda_ops/linalg.py", line 32, in <module>
        from . import cublas
      File "/home/ubgpu/github/hebel/hebel/pycuda_ops/cublas.py", line 47, in <module>
        import cuda
      File "/home/ubgpu/github/hebel/hebel/pycuda_ops/cuda.py", line 35, in <module>
        from cudart import *
      File "/home/ubgpu/github/hebel/hebel/pycuda_ops/cudart.py", line 60, in <module>
        raise OSError('CUDA runtime library not found')
    OSError: CUDA runtime library not found

    opened by andyyuan78 1
  • need for windows x64 version

    When will you release the Windows x64 version of this tool?

    opened by caffeTao 1
  • Compatibility with CUDA 6.5 on OSX 10.10

    These solve two issues with newer CUDA drivers on Mac OS X (in separate commits, with self-describing names). There is still one test failure.

    opened by maparent 1
  • docs: fix simple typo, initalized -> initialized

    There is a small typo in hebel/layers/hidden_layer.py, hebel/layers/linear_regression_layer.py, hebel/layers/logistic_layer.py, hebel/layers/softmax_layer.py.

    Should read initialized rather than initalized.

    Semi-automated pull request generated by https://github.com/timgates42/meticulous/blob/master/docs/NOTE.md

    opened by timgates42 0
  • Missing memory_pool

    I have almost got it to work, but it is missing memory_pool. This is imported in the regression test.

    opened by ghost 0
  • h1

    opened by mohanrajmit 0
  • Small documentation enhancement request

    Hi there, I really appreciate Hebel. It was a good first step for me to "take the plunge" into using the GPU.

    I struggled a bit after going through the example (MNIST) script. In particular, it wasn't clear how to have the model predict new data (i.e., data you don't have targets for).

    The first (small) stumble was what to do with the DataProvider. I just put in dummy zero targets. Perhaps targets could be an optional field somehow?

    A more thorny issue was how to actually do the predictions. I couldn't for the life of me figure out how to feed the DataProvider data into feed_forward without getting the error:

      File "/usr/local/lib/python2.7/dist-packages/hebel/models/neural_net.py", line 422, in feed_forward
        prediction=prediction))
      File "/usr/local/lib/python2.7/dist-packages/hebel/layers/input_dropout.py", line 96, in feed_forward
        return (input_data * (1 - self.dropout_probability),)
    TypeError: unsupported operand type(s) for *: 'MiniBatchDataProvider' and 'float'
    

    This was my original attempt:

    # After loading in the data . . .
    Xv = Xv.astype(np.float32)
    yv = pd.get_dummies(yv).values.astype(np.float32)
    valid_data = MiniBatchDataProvider(Xv, yv, batch_size=5000)
    

    I finally resorted to using a GPU array, which worked:

    from pycuda import gpuarray
    valid_data = gpuarray.to_gpu(Xt)
    y_pred = model.feed_forward(valid_data, return_cache=False, prediction=True).get()
    

    The .get() at the end of the last statement was also something I had to figure out by going through the code.

    Having an example in the documentation would be helpful.

    opened by walterreade 1
  • [HEP2] Implement Autoencoders

    Hebel Enhancement Proposal 2

    Implement autoencoders, including denoising autoencoders and contractive autoencoders.

    Hebel Enhancement Proposal 
    opened by hannes-brt 0
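
    For reference, the core of a denoising autoencoder fits in a few lines of numpy (a conceptual sketch of the proposal, not Hebel code): corrupt the input, reconstruct from the corruption, and train to match the clean input.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def denoising_autoencoder_loss(x, W, b_hidden, b_out, corruption=0.3):
        x_noisy = x * (rng.random(x.shape) > corruption)   # mask-out corruption
        h = sigmoid(x_noisy @ W + b_hidden)                # encode
        x_hat = sigmoid(h @ W.T + b_out)                   # decode (tied weights)
        return np.mean((x_hat - x) ** 2)                   # reconstruct the clean x
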
  • [HEP1] Convolutional neural nets

    [HEP1] Convolutional neural nets

    Hebel Enhancement Proposal 1:

    Wrap Alex Krizhevsky's cuda-convnet kernels (https://code.google.com/p/cuda-convnet/).

    Hebel Enhancement Proposal 
    opened by hannes-brt 0
Releases

v0.02.1

Owner

Hannes Bretschneider, Postdoctoral Fellow in the Blencowe Lab at the University of Toronto