PyBrain - Another Python Machine Learning Library.

Related tags

Deep Learning, pybrain
Overview
PyBrain -- the Python Machine Learning Library
===============================================


INSTALLATION
------------
Quick answer: make sure you have SciPy installed, then
	python setup.py install
	
Longer answer: (if the above was any trouble) we keep more
detailed installation instructions (including those
for the dependencies) up-to-date in a wiki at:
	http://wiki.github.com/pybrain/pybrain/installation


DOCUMENTATION
-------------
Please read
	docs/documentation.pdf
or browse
	docs/html/*	
featuring: quickstart, tutorials, API, etc.

If you have matplotlib, the scripts in
	examples/*
may be instructive as well.
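
If you just want a quick smoke test, here is a minimal sketch (my own
illustration, not taken from the docs; the layer sizes and the toy
XOR-style data are arbitrary choices):

    from pybrain.tools.shortcuts import buildNetwork
    from pybrain.datasets import SupervisedDataSet
    from pybrain.supervised.trainers import BackpropTrainer

    # a 2-input, 3-hidden, 1-output feed-forward network
    net = buildNetwork(2, 3, 1)

    # a tiny XOR-style dataset: 2 input dimensions, 1 target dimension
    ds = SupervisedDataSet(2, 1)
    ds.addSample((0, 0), (0,))
    ds.addSample((0, 1), (1,))
    ds.addSample((1, 0), (1,))
    ds.addSample((1, 1), (0,))

    # one backprop pass over the dataset; repeat (or use
    # trainer.trainUntilConvergence()) to keep lowering the returned error
    trainer = BackpropTrainer(net, ds)
    print(trainer.train())
    print(net.activate((1, 0)))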

Comments
  • python3.5.2

    Does PyBrain support Python 3.5.2? A simple 'import pybrain' aborts as shown below. I installed it just with 'pip install pybrain'.

    D:\Anaconda3.5.2\python.exe F:/gitProjects/vnpy_future/pre_code/cnn/rnn.py
    Traceback (most recent call last):
      File "F:/gitProjects/vnpy_future/pre_code/cnn/rnn.py", line 7, in <module>
        import pybrain
      File "D:\Anaconda3.5.2\lib\site-packages\pybrain\__init__.py", line 1, in <module>
        from structure.__init__ import *
    ImportError: No module named 'structure'

    opened by hhuhhu 4
  • Port most of the code to be Python 3 compatible.

    The code should still work on Python2.

    Import and print are the main changes.

    Not everything may be ported yet. The main thing left intact is range(): in Python 2 it returns a list, while in Python 3 it returns an iterator. This should speed things up and in most cases should work without further changes.

    See http://www.diveinto.org/python3/porting-code-to-python-3-with-2to3.html#xrange
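
    As an illustration of the kind of change involved (a hedged sketch, not code taken from this pull request), a module can keep Python 3 semantics for range() while still running on Python 2:

    from __future__ import absolute_import, print_function

    try:
        range = xrange  # Python 2: rebind to the lazy iterator, matching Python 3
    except NameError:
        pass            # Python 3: the built-in range is already lazy

    print("squares:", [i * i for i in range(5)])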

    opened by wernight 4
  • PyPi package update

    There seem to have been many changes since 2009 (over 4 years ago). The version number on GitHub is almost the same, yet it's probably worth making another release.

    PyPI allows simple per-user or system-wide installation, among other things. Not that git clone isn't fine in many cases.

    opened by wernight 3
  • IndexError after recurrent network copy

    Steps:

    >>> from pybrain.tools.shortcuts import buildNetwork
    >>> net = buildNetwork(2, 4, 1, recurrent=True)
    >>> net.activate((1, 1))
    ...
    array([ 0.02202066])
    >>> net.copy()
    >>> net.activate((1, 1))
    ...
    IndexError: index out of bounds
    

    This seems to happen only when recurrent=True.
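
    A possible workaround (untested; my own guess, not a confirmed fix): the IndexError looks like a stale time-step offset into freshly allocated buffers, so resetting the network after copying might help:

    net2 = net.copy()   # keep the returned copy; copy() does not modify net in place
    net.reset()         # clear the recurrent history/offset (documented for recurrent nets)
    net2.reset()
    print(net.activate((1, 1)))   # may avoid the IndexError (unverified)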

    opened by wernight 3
  • KeyError in sortModules

    I have an issue with the sortModules method throwing a KeyError.

    Following the tutorial example, I created a script with the following:

    #! /usr/bin/env python
    # -*- coding: utf-8 -*-
    
    import sys
    import scipy
    import numpy as np
    
    print "\nPython version: %s" % sys.version
    print "Numpy version: %s" % np.version.version
    print "Scipy version: %s" % scipy.version.version
    
    from pybrain.structure import FeedForwardNetwork
    from pybrain.structure import LinearLayer, SigmoidLayer
    from pybrain.structure import FullConnection
    
    # Create network
    nn = FeedForwardNetwork()
    
    # Set network parameters
    INPUT_NDS = 2
    HIDDEN_NDS = 3
    OUTPUT_NDS = 1
    
    # Create Feed Forward Network layers
    inLayer = LinearLayer(INPUT_NDS)
    hiddenLayer = SigmoidLayer(HIDDEN_NDS)
    outLayer = LinearLayer(OUTPUT_NDS)
    
    # Fully connect all layers
    in_to_hidden = FullConnection(inLayer, hiddenLayer)
    hidden_to_out = FullConnection(hiddenLayer, outLayer)
    
    # Add the connected layers to the network 
    nn.addConnection(in_to_hidden)
    nn.addConnection(hidden_to_out)
    
    # Sort modules to prepare the NN for use
    nn.sortModules()
    

    Which gives me:

    Python version: 2.6.5 (r265:79063, Apr 16 2010, 13:57:41) [GCC 4.4.3]
    Numpy version: 1.3.0
    Scipy version: 0.7.0
    Traceback (most recent call last):
      File "/tmp/py7317Q6c", line 46, in <module>
        nn.sortModules()
      File "/usr/local/lib/python2.6/dist-packages/PyBrain-0.3-py2.6.egg/pybrain/structure/networks/network.py", line 224, in sortModules
        self._topologicalSort()
      File "/usr/local/lib/python2.6/dist-packages/PyBrain-0.3-py2.6.egg/pybrain/structure/networks/network.py", line 188, in _topologicalSort
        graph[c.inmod].append(c.outmod)
    KeyError: <LinearLayer 'LinearLayer-3'>

    I have the latest version of PyBrain installed, so this seems strange, especially since it works when I use the shortcut:

    from pybrain.tools.shortcuts import buildNetwork
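
    For comparison, the tutorial's manual construction also registers each layer with the network before the connections are added. Those calls are missing from the script above, which is my guess at the cause of the KeyError in _topologicalSort (an assumption, not a confirmed diagnosis):

    nn.addInputModule(inLayer)       # register the layers with the network first
    nn.addModule(hiddenLayer)
    nn.addOutputModule(outLayer)

    nn.addConnection(in_to_hidden)   # then add the connections
    nn.addConnection(hidden_to_out)
    nn.sortModules()                 # and sort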
    
    opened by ghost 3
  • serialization using pickle freezes the network, causing strange caching behaviour

    This is a duplicate of my Stackoverflow.com question.

    I fail to properly serialize/deserialize PyBrain networks using either pickle or cPickle.

    See the following example:

    from pybrain.datasets            import SupervisedDataSet
    from pybrain.tools.shortcuts     import buildNetwork
    from pybrain.supervised.trainers import BackpropTrainer
    import cPickle as pickle
    import numpy as np 
    
    #generate some data
    np.random.seed(93939393)
    data = SupervisedDataSet(2, 1)
    for x in xrange(10):
        y = x * 3
        z = x + y + 0.2 * np.random.randn()  
        data.addSample((x, y), (z,))
    
    #build a network and train it    
    
    net1 = buildNetwork( data.indim, 2, data.outdim )
    trainer1 = BackpropTrainer(net1, dataset=data, verbose=True)
    for i in xrange(4):
        trainer1.trainEpochs(1)
        print '\tvalue after %d epochs: %.2f'%(i, net1.activate((1, 4))[0])
    

    This is the output of the above code:

    Total error: 201.501998476
        value after 0 epochs: 2.79
    Total error: 152.487616382
        value after 1 epochs: 5.44
    Total error: 120.48092561
        value after 2 epochs: 7.56
    Total error: 97.9884043452
        value after 3 epochs: 8.41
    

    As you can see, the network's total error decreases as training progresses. You can also see that the predicted value approaches the expected value of 12.

    Now we will do a similar exercise, but will include serialization/deserialization:

    print 'creating net2'
    net2 = buildNetwork(data.indim, 2, data.outdim)
    trainer2 = BackpropTrainer(net2, dataset=data, verbose=True)
    trainer2.trainEpochs(1)
    print '\tvalue after %d epochs: %.2f'%(1, net2.activate((1, 4))[0])
    
    #So far, so good. Let's test pickle
    pickle.dump(net2, open('testNetwork.dump', 'w'))
    net2 = pickle.load(open('testNetwork.dump'))
    trainer2 = BackpropTrainer(net2, dataset=data, verbose=True)
    print 'loaded net2 using pickle, continue training'
    for i in xrange(1, 4):
            trainer2.trainEpochs(1)
            print '\tvalue after %d epochs: %.2f'%(i, net2.activate((1, 4))[0])
    

    This is the output of this block:

    creating net2
    Total error: 176.339378639
        value after 1 epochs: 5.45
    loaded net2 using pickle, continue training
    Total error: 123.392181859
        value after 1 epochs: 5.45
    Total error: 94.2867637623
        value after 2 epochs: 5.45
    Total error: 78.076711114
        value after 3 epochs: 5.45
    

    As you can see, the training seems to have some effect on the network (the reported total error keeps decreasing); however, the network's output freezes at the value it produced after the first training iteration.

    Is there any caching mechanism that I need to be aware of that causes this erroneous behaviour? Are there better ways to serialize/deserialize pybrain networks?
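
    One alternative worth trying (a sketch using PyBrain's XML serialization tools instead of pickle; I haven't verified that it avoids the caching effect described above):

    from pybrain.tools.customxml.networkwriter import NetworkWriter
    from pybrain.tools.customxml.networkreader import NetworkReader

    NetworkWriter.writeToFile(net2, 'net2.xml')   # write the trained network to XML
    net2 = NetworkReader.readFrom('net2.xml')     # rebuild the network from the file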

    Relevant version numbers:

    • Python 2.6.5 (r265:79096, Mar 19 2010, 21:48:26) [MSC v.1500 32 bit (Intel)]
    • Numpy 1.5.1
    • cPickle 1.71
    • pybrain 0.3
    0.4 
    opened by bgbg 3
  • Hierarchy change: take Black-box optimization out of RL

    Although it technically fits there, it is a bit confusing. I think the split should follow the ontogenetic/phylogenetic distinction: on one side optimization, evolution, PSO, etc. (coevolution methods should fit here, but what about multi-objective optimization?), and on the other side policy gradients and other RL algorithms.

    0.3 Discussion In progress 
    opened by schaul 3
  • splitWithProportion returns same type instead of SupervisedDataSet

    When we call splitWithProportion on a ClassificationDataSet object, the return type is (SupervisedDataSet, SupervisedDataSet) instead of (ClassificationDataSet, ClassificationDataSet). While this modification fixes the issue, it could be improved by calling the constructor with kwargs. I didn't modify the sub-classes in order to avoid repeating lines 106-112. I made this modification because when we split a sub-class of SupervisedDataSet, we should get a 2-tuple of that sub-class, not a 2-tuple of SupervisedDataSet.
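
    Until such a fix lands, a workaround sketch (names here are hypothetical; it assumes the split halves are plain SupervisedDataSets, as described above) is to rebuild ClassificationDataSets by hand:

    from pybrain.datasets import ClassificationDataSet

    def as_classification(ds, nb_classes):
        # copy a SupervisedDataSet returned by splitWithProportion into a ClassificationDataSet
        converted = ClassificationDataSet(ds.indim, 1, nb_classes=nb_classes)
        for i in range(ds.getLength()):
            inp, tgt = ds.getSample(i)
            converted.addSample(inp, tgt)
        return converted

    # hypothetical usage: cds is an existing ClassificationDataSet with 3 classes
    tst_raw, trn_raw = cds.splitWithProportion(0.25)
    trndata = as_classification(trn_raw, nb_classes=3)
    tstdata = as_classification(tst_raw, nb_classes=3)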

    opened by borakrc 2
  • ImportanceDataSet with BackpropTrainer results in IndexError

    I have a dataset which I am clustering using a gaussian mixture model, and then I want to train a neural network for each of the clusters. I want to use all the points in my dataset weighted based on the probability they are in the cluster for which the net is being trained.

    Originally, I was not weighting the training data and it worked fine:

    '''
    Create and train a neural net on the training data, given the actual labels
    '''
    def create_neural_net(training, labels, weights=None, T=10, silent=False):
        input_units = len(training[0])
        output_units = len(labels[0])
        n = len(training)
    
        net = FeedForwardNetwork()
        layer_in = SoftmaxLayer(input_units)
        layer_hidden = SigmoidLayer(1000)
        layer_hidden2 = SigmoidLayer(50)
        layer_out = LinearLayer(output_units)
    
        net.addInputModule(layer_in)
        net.addModule(layer_hidden)
        net.addModule(layer_hidden2)
        net.addOutputModule(layer_out)
    
        net.addConnection(FullConnection(layer_in, layer_hidden))
        net.addConnection(FullConnection(layer_hidden, layer_hidden2))
        net.addConnection(FullConnection(layer_hidden2, layer_out))
    
        net.sortModules()
    
        training_data = SupervisedDataSet(input_units, output_units)
        for i in xrange(n):
            # print len(training[i]) # prints 148
            # print len(labels[i]) # prints 13
            training_data.appendLinked(training[i], labels[i])
        trainer = BackpropTrainer(net, training_data)
    
        for i in xrange(T):
            if not silent: print "Training %d" % (i + 1)
            error = trainer.train()
            if not silent: print net.activate(training[0]), labels[0]
            if not silent: print "Training iteration %d.  Error: %f." % (i + 1, error)
        return net
    

    But now when I try to weight the data points:

    '''
    Create and train a neural net on the training data, given the actual labels
    '''
    def create_neural_net(training, labels, weights=None, T=10, silent=False):
        input_units = len(training[0])
        output_units = len(labels[0])
        n = len(training)
    
        net = FeedForwardNetwork()
        layer_in = SoftmaxLayer(input_units)
        layer_hidden = SigmoidLayer(1000)
        layer_hidden2 = SigmoidLayer(50)
        layer_out = LinearLayer(output_units)
    
        net.addInputModule(layer_in)
        net.addModule(layer_hidden)
        net.addModule(layer_hidden2)
        net.addOutputModule(layer_out)
    
        net.addConnection(FullConnection(layer_in, layer_hidden))
        net.addConnection(FullConnection(layer_hidden, layer_hidden2))
        net.addConnection(FullConnection(layer_hidden2, layer_out))
    
        net.sortModules()
    
        training_data = ImportanceDataSet(input_units, output_units)
        for i in xrange(n):
            # print len(training[i]) # prints 148
            # print len(labels[i]) # prints 13
            training_data.addSample(training[i], labels[i], importance=(weights[i] if weights is not None else None))
        trainer = BackpropTrainer(net, training_data)
    
        for i in xrange(T):
            if not silent: print "Training %d" % (i + 1)
            error = trainer.train()
            if not silent: print net.activate(training[0]), labels[0]
            if not silent: print "Training iteration %d.  Error: %f." % (i + 1, error)
        return net
    

    I get the following error:

    Traceback (most recent call last):
      File "clustering_experiment.py", line 281, in <module>
        total_model = get_model(training, training_labels, num_clusters=NUM_CLUSTERS
    , T=NUM_ITERS_NEURAL_NET)
      File "clustering_experiment.py", line 177, in get_model
        neural_nets.append(neural_net_plugin.create_neural_net(tra.tolist(), val.tol
    ist(), T=T, silent=True))
      File "/home/neural_net_plugin.py", line 43, in create_neural_net
        error = trainer.train()
      File "/usr/local/lib/python2.7/dist-packages/PyBrain-0.3.1-py2.7.egg/pybrain/s
    upervised/trainers/backprop.py", line 61, in train
        e, p = self._calcDerivs(seq)
      File "/usr/local/lib/python2.7/dist-packages/PyBrain-0.3.1-py2.7.egg/pybrain/s
    upervised/trainers/backprop.py", line 92, in _calcDerivs
        outerr = target - self.module.outputbuffer[offset]
    IndexError: index 162 is out of bounds for axis 0 with size 1
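
    A hedged guess at a workaround (untested): ImportanceDataSet is sequence-based, so the loop above may put all samples into one long sequence while the feed-forward network only buffers a single time step; starting a new sequence for every sample keeps the trainer's offsets within bounds:

    training_data = ImportanceDataSet(input_units, output_units)
    for i in xrange(n):
        training_data.newSequence()   # one sequence per sample (assumption, not a confirmed fix)
        training_data.addSample(training[i], labels[i],
                                importance=(weights[i] if weights is not None else None))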
    
    opened by kkleidal 2
  • Fixes to Python3.x

    Changes

    All the changes I made were backported from Python 3 to Python 2 (down to at least Python 2.7).

    TODO

    • I didn't change the files using the weave library. In fact, I don't know whether this library is even supported in the latest scipy versions; I couldn't find any recent references to it, only "old" news saying that it is not supported yet, such as this and this. Maybe it's time to consider using Cython instead.
    • RL-Glue imports are also unchanged because its current Python codec has no support for Py3 yet. However, I changed the RL-Glue Python codec source to run on Py2 and Py3 (in fact, I only changed minor things such as the print function and exception statements). By the way, if you want to try it, I've uploaded it to my GitHub. Another thing to point out is that no one is maintaining the RL-Glue code anymore.

    I didn't run any tests; I just tried the examples in the PyBrain docs, and everything worked fine.

    opened by herodrigues 2
  • Add Randlov bicycle RL example.

    I have written part of the RL bicycle problem introduced by Randlov and Alstrom as an example in PyBrain. Hopefully you all would like to include it in PyBrain!

    Here's their paper: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.52.3038&rep=rep1&type=pdf

    I include some plotting, so you can view the learning.

    Please let me know what improvements I should make.

    opened by chrisdembia 2
  • cannot import name 'random' from 'scipy'

    I am using scipy 1.9.1 and I get the traceback below when using the buildNetwork function.

    Traceback (most recent call last):
      File "/home/nono/Desktop/tmp/neural/./main.py", line 3, in <module>
        from pybrain.tools.shortcuts import buildNetwork
      File "/usr/local/lib/python3.10/dist-packages/PyBrain-0.3.3-py3.10.egg/pybrain/__init__.py", line 1, in <module>
        from pybrain.structure.__init__ import *
      File "/usr/local/lib/python3.10/dist-packages/PyBrain-0.3.3-py3.10.egg/pybrain/structure/__init__.py", line 2, in <module>
        from pybrain.structure.modules.__init__ import *
      File "/usr/local/lib/python3.10/dist-packages/PyBrain-0.3.3-py3.10.egg/pybrain/structure/modules/__init__.py", line 3, in <module>
        from pybrain.structure.modules.gaussianlayer import GaussianLayer
      File "/usr/local/lib/python3.10/dist-packages/PyBrain-0.3.3-py3.10.egg/pybrain/structure/modules/gaussianlayer.py", line 3, in <module>
        from scipy import random
    ImportError: cannot import name 'random' from 'scipy' (/usr/local/lib/python3.10/dist-packages/scipy-1.9.1-py3.10-linux-x86_64.egg/scipy/__init__.py)

    This looks like an old reference to something that has changed in SciPy and was never updated in PyBrain.

    Is pybrain still maintained? The last release is from 2015.
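
    A possible local workaround (my assumption, not an official fix): scipy.random was removed in recent SciPy releases, and numpy.random provides the same functions, so editing the failing import in pybrain/structure/modules/gaussianlayer.py may be enough:

    # pybrain/structure/modules/gaussianlayer.py, line 3 -- hypothetical local patch
    # replace the removed scipy alias:
    #   from scipy import random
    # with numpy's random module:
    from numpy import random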

    opened by noeldum 0
  • library with this error

    This error comes up when I use the PyBrain library.

    This is my code:

    from pybrain.structure import FeedForwardNetwork
    from pybrain.structure import LinearLayer, SigmoidLayer, BiasUnit
    from pybrain.structure import FullConnection

    rneural = FeedForwardNetwork()

    CE = LinearLayer(4)
    CO = SigmoidLayer(6)
    CS = SigmoidLayer(1)
    b1 = BiasUnit()
    b2 = BiasUnit()

    rneural.addModule(CE)
    rneural.addModule(CO)
    rneural.addModule(CS)
    rneural.addModule(b1)
    rneural.addModule(b2)

    EO = FullConnection(CE, CO)
    OS = FullConnection(CO, CS)
    bO = FullConnection(b1, CO)
    bS = FullConnection(b2, CS)

    rneural.sortModule()
    print(rneural)

    When I run:

    python3 rneural.py

    Traceback (most recent call last):
      File "/home/warwick/Desktop/scriptsinpython/ai/rneural.py", line 1, in <module>
        from pybrain.structure import FeedForwardNetwork
      File "/home/warwick/environments/my_env/lib/python3.10/site-packages/pybrain/__init__.py", line 1, in <module>
        from pybrain.structure.__init__ import *
      File "/home/warwick/environments/my_env/lib/python3.10/site-packages/pybrain/structure/__init__.py", line 2, in <module>
        from pybrain.structure.modules.__init__ import *
      File "/home/warwick/environments/my_env/lib/python3.10/site-packages/pybrain/structure/modules/__init__.py", line 2, in <module>
        from pybrain.structure.modules.gate import GateLayer, DoubleGateLayer, MultiplicationLayer, SwitchLayer
      File "/home/warwick/environments/my_env/lib/python3.10/site-packages/pybrain/structure/modules/gate.py", line 10, in <module>
        from pybrain.tools.functions import sigmoid, sigmoidPrime
      File "/home/warwick/environments/my_env/lib/python3.10/site-packages/pybrain/tools/functions.py", line 4, in <module>
        from scipy.linalg import inv, det, svd, logm, expm2
    ImportError: cannot import name 'expm2' from 'scipy.linalg' (/home/warwick/environments/my_env/lib/python3.10/site-packages/scipy/linalg/__init__.py)

    I've tried several solutions; the only one I haven't tried is downgrading from Python 3.10, which I don't think is the right fix. If anyone knows how to fix this, please let me know.

    thanks
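
    One possible local workaround (an assumption on my part, not an upstream fix): expm2 was removed from scipy.linalg in newer SciPy releases, and expm is the suggested replacement, so the failing import in pybrain/tools/functions.py can be aliased:

    # pybrain/tools/functions.py, line 4 -- hypothetical local patch
    # replace the removed name:
    #   from scipy.linalg import inv, det, svd, logm, expm2
    # with an alias to expm:
    from scipy.linalg import inv, det, svd, logm, expm as expm2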

    opened by Ickwarw 1
  • docs: Fix a few typos

    There are small typos in:

    • pybrain/rl/environments/flexcube/viewer.py
    • pybrain/rl/environments/ode/tasks/ccrl.py
    • pybrain/rl/environments/ode/tasks/johnnie.py
    • pybrain/rl/environments/shipsteer/viewer.py
    • pybrain/structure/modules/lstm.py
    • pybrain/tests/runtests.py
    • pybrain/tools/rlgluebridge.py

    Fixes:

    • Should read suggested rather than suggestet.
    • Should read specific rather than spezific.
    • Should read height rather than hight.
    • Should read whether rather than wether.
    • Should read method rather than methode.

    Semi-automated pull request generated by https://github.com/timgates42/meticulous/blob/master/docs/NOTE.md

    opened by timgates42 0
  • Pybrain: 'SupervisedDataSet' object has no attribute '_convertToOneOfMany' error

    I'm working on speech recognition on a Raspberry Pi. While running the model-building code that uses PyBrain features, I got the error: 'SupervisedDataSet' object has no attribute '_convertToOneOfMany'. If anyone has any pointers to get me back on the right path, that would be very much appreciated.

    def createRGBdataSet(inputSet, numOfSamples, numOfPoints):
        alldata = ClassificationDataSet(numOfPoints, 1, nb_classes=3)
        # Iter through all 3 groups and add the samples with appropriate class label
        for i in range(0, 3*numOfSamples):
            input = inputSet[i]
            if (i < numOfSamples):
                alldata.addSample(input, [0])
            elif (i >= numOfSamples and i < numOfSamples*2):
                alldata.addSample(input, [1])
            else:
                alldata.addSample(input, [2])
        return alldata

    # Split the dataset into 75% training and 25% test data.
    def splitData(alldata):
        tstdata, trndata = alldata.splitWithProportion( 0.25 )
        trndata._convertToOneOfMany()
        tstdata._convertToOneOfMany()
        return trndata, tstdata

    opened by ghost 0
  • I am having a problem with my code, please help!

    I'm working on speech recognition on a Raspberry Pi. While running the model-building code that uses PyBrain features, I got the error: 'SupervisedDataSet' object has no attribute '_convertToOneOfMany'. If anyone has any pointers to get me back on the right path, that would be very much appreciated.

    def createRGBdataSet(inputSet, numOfSamples, numOfPoints):
        alldata = ClassificationDataSet(numOfPoints, 1, nb_classes=3)
        # Iter through all 3 groups and add the samples with appropriate class label
        for i in range(0, 3*numOfSamples):
            input = inputSet[i]
            if (i < numOfSamples):
                alldata.addSample(input, [0])
            elif (i >= numOfSamples and i < numOfSamples*2):
                alldata.addSample(input, [1])
            else:
                alldata.addSample(input, [2])
        return alldata
    
    
    # Split the dataset into 75% training and 25% test data.
    def splitData(alldata):
        tstdata, trndata = alldata.splitWithProportion( 0.25 )
        trndata._convertToOneOfMany()
        tstdata._convertToOneOfMany()
        return trndata, tstdata
    
    opened by ghost 0
Releases (0.3.3)
Supporting code for short YouTube series Neural Networks Demystified.

Neural Networks Demystified Supporting iPython notebooks for the YouTube Series Neural Networks Demystified. I've included formulas, code, and the tex

Stephen 1.3k Dec 23, 2022
A non-linear, non-parametric Machine Learning method capable of modeling complex datasets

Fast Symbolic Regression Symbolic Regression is a non-linear, non-parametric Machine Learning method capable of modeling complex data sets. fastsr aim

VAMSHI CHOWDARY 3 Jun 22, 2022
The pytorch implementation of DG-Font: Deformable Generative Networks for Unsupervised Font Generation

DG-Font: Deformable Generative Networks for Unsupervised Font Generation The source code for 'DG-Font: Deformable Generative Networks for Unsupervised

130 Dec 05, 2022
Training code and evaluation benchmarks for the "Self-Supervised Policy Adaptation during Deployment" paper.

Self-Supervised Policy Adaptation during Deployment PyTorch implementation of PAD and evaluation benchmarks from Self-Supervised Policy Adaptation dur

Nicklas Hansen 101 Nov 01, 2022
Annotated notes and summaries of the TensorFlow white paper, along with SVG figures and links to documentation

TensorFlow White Paper Notes Features Notes broken down section by section, as well as subsection by subsection Relevant links to documentation, resou

Sam Abrahams 437 Oct 09, 2022
Toward Multimodal Image-to-Image Translation

BicycleGAN Project Page | Paper | Video Pytorch implementation for multimodal image-to-image translation. For example, given the same night image, our

Jun-Yan Zhu 1.4k Dec 22, 2022
Code for "PV-RAFT: Point-Voxel Correlation Fields for Scene Flow Estimation of Point Clouds", CVPR 2021

PV-RAFT This repository contains the PyTorch implementation for paper "PV-RAFT: Point-Voxel Correlation Fields for Scene Flow Estimation of Point Clou

Yi Wei 43 Dec 05, 2022
Code for Overinterpretation paper Overinterpretation reveals image classification model pathologies

Overinterpretation This repository contains the code for the paper: Overinterpretation reveals image classification model pathologies Authors: Brandon

Gifford Lab, MIT CSAIL 17 Dec 10, 2022
Robotics environments

Robotics environments Details and documentation on these robotics environments are available in OpenAI's blog post and the accompanying technical repo

Farama Foundation 121 Dec 28, 2022
Official repository of "BasicVSR++: Improving Video Super-Resolution with Enhanced Propagation and Alignment"

BasicVSR_PlusPlus (CVPR 2022) [Paper] [Project Page] [Code] This is the official repository for BasicVSR++. Please feel free to raise issue related to

Kelvin C.K. Chan 227 Jan 01, 2023
Official implement of "CAT: Cross Attention in Vision Transformer".

CAT: Cross Attention in Vision Transformer This is official implement of "CAT: Cross Attention in Vision Transformer". Abstract Since Transformer has

100 Dec 15, 2022
Speech-Emotion-Analyzer - The neural network model is capable of detecting five different male/female emotions from audio speeches. (Deep Learning, NLP, Python)

Speech Emotion Analyzer The idea behind creating this project was to build a machine learning model that could detect emotions from the speech we have

Mitesh Puthran 965 Dec 24, 2022
A light weight data augmentation tool for training CNNs and Viola Jones detectors

hey-daug A light weight data augmentation tool for training CNNs and Viola Jones detectors (Haar Cascades). This tool inflates your data by up to six

Jaiyam Sharma 2 Nov 23, 2019
AdelaiDepth is an open source toolbox for monocular depth prediction.

AdelaiDepth is an open source toolbox for monocular depth prediction.

Adelaide Intelligent Machines (AIM) Group 743 Jan 01, 2023
EZ graph is an easy to use AI solution that allows you to make and train your neural networks without a single line of code.

EZ-Graph EZ Graph is a GUI that allows users to make and train neural networks without writing a single line of code. Requirements python 3 pandas num

1 Jul 03, 2022
Semantic Image Synthesis with SPADE

Semantic Image Synthesis with SPADE New implementation available at imaginaire repository We have a reimplementation of the SPADE method that is more

NVIDIA Research Projects 7.3k Jan 07, 2023
Deep Two-View Structure-from-Motion Revisited

Deep Two-View Structure-from-Motion Revisited This repository provides the code for our CVPR 2021 paper Deep Two-View Structure-from-Motion Revisited.

Jianyuan Wang 145 Jan 06, 2023
Hidden-Fold Networks (HFN): Random Recurrent Residuals Using Sparse Supermasks

Hidden-Fold Networks (HFN): Random Recurrent Residuals Using Sparse Supermasks by Ángel López García-Arias, Masanori Hashimoto, Masato Motomura, and J

Ángel López García-Arias 4 May 19, 2022
CT-Net: Channel Tensorization Network for Video Classification

[ICLR2021] CT-Net: Channel Tensorization Network for Video Classification @inproceedings{ li2021ctnet, title={{\{}CT{\}}-Net: Channel Tensorization Ne

33 Nov 15, 2022