
AbelNN: An easy-to-use deep learning framework with AutoML

I have implemented several neural networks from scratch, using only NumPy, to create my own deep learning library. Its main objective is to be easy to use for any user, even those who have never worked with machine learning.

To that end, I have developed an Auto Machine Learning algorithm that tunes the neural networks automatically, so specifying hyperparameters is optional. With more experienced users in mind as well, my library can be used in multiple ways, as described on this page.

This file contains general information about my module. For detailed information on all classes, methods, procedures, and variables, read the documentation in the Documentation folder. You also have usage examples in the Examples folder.

The library code, which you can import into your own project, is in the Code folder.

You can also read my paper in the Paper.pdf file.

Table of Contents

  • How to use my module
  • Classes available
  • Classification and regression
  • Plot training error
  • Drawing fully-connected layers
  • Import and export your models
  • Library dependencies

How to use my module

My library can be used in three different ways; you can choose one depending on your knowledge of deep learning.

For inexperienced users

If you have no experience in deep learning, you can use the AutoML class. You do not have to specify any parameters for any of my neural networks; you only have to provide your training and test data.

Example: Training a convolutional neural network without specifying any hyperparameters, using the AutoML module to find the best model for the data:

import AbelNN

# Instantiate a default model.
c = AbelNN.ConvNet()
    
# Pass the ConvNet to the AutoML. It returns the best model for your data.
clf = AbelNN.AutoML(c).fit(x_train, y_train, x_test, y_test)

# Train the auto tuned model.
clf.fit(x_train, y_train)

# Use the model to predict your test data.
clf.predict(x_test, y_test)

In all the cases evaluated, my AutoML algorithm obtains better results than Random Search and Grid Search, and in less time:

(Figure: a comparison showing that my AutoML finds very good models very quickly.)


To see detailed information and more examples, check my AutoML documentation.


For experienced users

If you are a bit more experienced, my library also lets you specify the value of every neural network hyperparameter. And if you have never tuned a deep learning model but want to get started, do not worry: I have designed my module to be extremely easy to use, even for people who have never used deep learning. In either case, if you want to modify the networks without the complexity of defining full architectures, you can use my predefined classes. They are so easy to use that a single line of code completely customizes your model.

Example: This example instantiates a convolutional neural network with 2 convolutional layers of 64 filters each: the first with a stride of 2 and the second with a stride of 1, with filters of size 5 (in all data dimensions) in the first layer and size 3 in the second. After the convolutional layers there are 2 fully-connected hidden layers, one with 30 neurons and the other with 20. The learning rate is 0.1.

from ConvNetAbel import *

clf = ConvNetAbel(convFilters=[64, 64], convStride=[2, 1], convFilterSizes=[5, 3], hidden=[30, 20], learningRate=0.1)

clf.fit(X_train, y_train)

probabs = clf.predict_proba(X_test)

The fit function trains your model with your training data. It automatically determines the number of neurons needed for the input and output layers from the data, hiding details that could overwhelm you at first. For the same reason, I have also defined, among other procedures, an internal convolution function that adapts to the input data, automatically calculating the required size of the filters and of each convolutional layer's output, so you can stay away from those concepts until you want to learn them. This is what allows you to use my convolutional neural network class with just the 3 lines above (plus the import).

The predict_proba function returns an estimated probability for each of the classes in your problem. Note that these scores are not normalized, so they do not necessarily sum to 1.
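
If you prefer values that sum to 1 for each sample, you can normalize the returned scores yourself. A minimal sketch with NumPy, assuming predict_proba returns a non-negative array of shape (n_samples, n_classes):

import numpy as np

probabs = clf.predict_proba(X_test)

# Normalize each row so that the class scores sum to 1
# (assumes the scores are non-negative).
normalized = probabs / probabs.sum(axis=1, keepdims=True)

# The most likely class is the same before and after normalization:
predictions = np.argmax(probabs, axis=1)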


To see detailed information and more examples, check the documentation of my different networks.


For advanced users

If you are an advanced user of neural networks, you may prefer to directly use the internal functions that I have implemented from scratch in my neural networks.

Example of chaining convolutions: take a 28x28 image and convolve it with 32 random filters of size 3x3. Convolve the result with 64 filters of size 3x3, and that result with 128 filters of size 3x3, all with a stride of 2 and ReLU as the activation function:

from ConvNetAbel import *
import numpy as np

random_image = np.random.uniform(low=0.0, high=1.0, size=(28,28))
print('Image shape:', random_image.shape)

CNN = ConvNetAbel()

filters1 = np.random.uniform(low=0.0, high=1.0, size=(3,3,32))
em = CNN.conv_filters(random_image, filters1, relu=True, stride=2)
print('First convolutional layer output:', em.shape)

filters2 = np.random.uniform(low=0.0, high=1.0, size=(3,3,64))
em = CNN.conv_filters(em, filters2, relu=True, stride=2)
print('Second convolutional layer output:', em.shape)

filters3 = np.random.uniform(low=0.0, high=1.0, size=(3,3,128))
em = CNN.conv_filters(em, filters3, relu=True, stride=2)
print('Third convolutional layer output:', em.shape)

The example above prints the shape of the array after each convolutional layer:

Image shape: (28, 28)
First convolutional layer output: (14, 14, 32)
Second convolutional layer output: (7, 7, 64)
Third convolutional layer output: (4, 4, 128)
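
Note that these spatial sizes are consistent with output_size = ceil(input_size / stride), which is what a stride-2 convolution with same-style padding produces; this is an observation about the printed shapes above, not official documentation of the library. You can check the arithmetic quickly:

import math

for size in (28, 14, 7):
    print(math.ceil(size / 2))  # prints 14, then 7, then 4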

The documentation of the classes describes all the functions and procedures, including their parameters, outputs, and behavior, as well as all the class variables.

Classes available

Now that you have chosen how you want to use my library, you can take a look at the specific documentation of each class if you need it.

The following table allows you to go directly to the implemented code, to examples of use and to the documentation of each class in my library.

Implementation                Class        Code        Examples              Documentation
Multilayer perceptron         MLP_Abel     Click here  Click here            Click here
Convolutional neural network  ConvNetAbel  Click here  Click here            Click here
Auto Machine Learning         AutoML_Abel  Click here  With MLP / With CNN   Click here
Main class                    AbelNN       Click here  Click here            (This page)

All algorithm classes are independent, so you can import each file separately. However, you can also import the AbelNN.py file, which imports everything.
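
Both styles appear in the examples on this page. For instance:

# Import a single class file directly...
from MLP_Abel import *
clf = MLP_Abel(hidden=[20, 30])

# ...or import everything through AbelNN:
import AbelNN
clf = AbelNN.MLP(hidden=[20, 30])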

Classification and regression

My library has been used for both classification and regression tasks, with different types of datasets, obtaining excellent results. Moreover, when it comes to classifying images, both my predefined MLP and CNN classes also achieve remarkable test metrics, as you can see in the examples folder.

Plot training error

If you want to see the error that the network makes during training, to analyze its learning, you can use the plot_mean_error_last_layer procedure. You must pass debugLevel=1 or higher in the constructor of the MLP or CNN classes to use this procedure, since with a debug level of 0 the error data is not stored.

Example: The following example plots the mean error during training in each epoch, for each of the 10 neurons in the output layer. The labels list contains the names of our classes; since this example uses the MNIST digit images, the labels are the ten digits:

# Import the MLP class
# (you could also import the entire AbelNN file as in other examples):
from MLP_Abel import *

# Instantiate a predefined multilayer perceptron
clf = MLP_Abel(hidden=[20, 30], learningRate=0.001, debugLevel=1)

# Train the network
clf.fit(x_train, y_train_multiclass)

# Name your classes:
labels = ['0','1','2','3','4','5','6','7','8','9']

# Plot the mean error during training:
clf.plot_mean_error_last_layer(labels, byClass=True)

Output: a plot of the mean training error per epoch, with one curve for each of the 10 output neurons (one per digit class).

If we call the same plot_mean_error_last_layer procedure as in the previous example, but set byClass to False, it shows the same plot with the error averaged over all neurons in the output layer:

clf.plot_mean_error_last_layer(labels, byClass=False)

Output: the same plot, now with a single curve for the error averaged over all output neurons.

Drawing fully-connected layers

With the draw procedure, you can show a matplotlib plot with all the layers of your multilayer perceptron or the fully-connected layers of your ConvNet:

import AbelNN
import numpy as np

mlp = AbelNN.MLP(hidden=[3,5,7])

# Random data as an example
x_train = np.random.uniform(low=0.0, high=1.0, size=(100, 5))
y_train = np.random.uniform(low=0.0, high=1.0, size=(100, 2))

mlp.fit(x_train, y_train)

mlp.draw()

Output: a matplotlib drawing of all the network's layers and their connections, with the legend.

You can also choose whether to show the weight values, hide the legend, or change the text size and the neuron radius:

import AbelNN
import numpy as np

mlp = AbelNN.MLP(hidden=[2])

# Random data as an example
x_train = np.random.uniform(low=0.0, high=1.0, size=(100, 3))
y_train = np.random.uniform(low=0.0, high=1.0, size=(100, 2))

mlp.fit(x_train, y_train)

mlp.draw(showWeights=True, textSize=11, customRadius=0.02, showLegend=False)

Output: the same kind of drawing, now annotated with the weight values and without the legend.

And of course you can also plot the layers of the fully-connected part of your convolutional neural network, with or without the legend:

import AbelNN
import numpy as np

cnn = AbelNN.ConvNet(hidden=[2,3], convFilters=[16, 16], convStride=[3, 3])

# Random data as an example
x_train = np.random.uniform(low=0.0, high=1.0, size=(100, 10, 10)) # 100 10x10 random images
y_train = np.random.uniform(low=0.0, high=1.0, size=(100, 3))

cnn.fit(x_train, y_train)

cnn.draw()

Output: a drawing of the fully-connected layers of the convolutional network.

The Examples folder contains several Jupyter notebooks that solve real problems with my library, along with complete examples of the different functions and procedures used to train and predict with my neural networks.

Import and export your models

All the classes in my library that implement predefined neural networks allow you to import and export your models very easily and quickly.

Export a model

You can export the network and its variables to disk so that the model persists after execution and can be reused without retraining. The export uses NumPy's save procedure and generates several .npy files with the contents of the network variables. The filename parameter is used as the name prefix for all generated files.

The export can be done by calling the exportModel procedure of a neural network class:

exportModel(self, path='', filename='model')

  • Parameters:

    • path: (Type string, default = '') Path where the model will be exported.

    • filename: (Type string, default = 'model') Prefix of the files to be generated.

  • Returns: None

Example of exporting a multilayer perceptron:

import AbelNN

mpath = 'exportedModels/'
mfilename = 'my_MLP_model'

clf = AbelNN.MLP(hidden=[2,3])
clf.exportModel(mpath, mfilename)

Example of exporting a convolutional neural network:

import AbelNN

mpath = 'exportedModels/'
mfilename = 'my_ConvNet_Model'

clf = AbelNN.ConvNet(convFilters=[8, 16], hidden=[2,3])
clf.exportModel(mpath, mfilename)
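
Since the export writes several .npy files sharing the filename prefix, you can list what was generated with a quick glob (the exact set of files per class is an implementation detail):

import glob

# List the files written by the exports above:
print(sorted(glob.glob('exportedModels/*.npy')))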

Import a model

You can import a model of an AbelNN neural network class from disk; the imported model keeps all the variables of the instance that was exported.

The import reads the .npy files that you previously exported (or that someone shared with you), as long as they were generated with the exportModel procedure.

The import can be done by calling the importModel procedure of a neural network class. The instance that makes the call ends up with the same values in all its variables as the instance that was exported to those files.

importModel(self, path='', filename='model')

  • Parameters:

    • path: (Type string, default = '') Path where the exported model files are located.

    • filename: (Type string, default = 'model') Prefix of the exported files.

  • Returns: None

Example of importing a multilayer perceptron:

import AbelNN

mpath = 'exportedModels/'
mfilename = 'my_MLP_model'

clf = AbelNN.MLP() # Instantiate a default MLP
clf.importModel(mpath, mfilename) # The variables will be loaded from the file.

Example of importing a convolutional neural network:

import AbelNN

mpath = 'exportedModels/'
mfilename = 'my_ConvNet_Model'

clf = AbelNN.ConvNet() # Instantiate a default ConvNet
clf.importModel(mpath, mfilename) # The variables will be loaded from the file.
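
As a sanity check, you can verify that a round trip preserves the model: an imported instance should produce the same outputs as the instance it was exported from. A minimal sketch with random data, assuming predict_proba is also available on the MLP class (it is shown above for ConvNetAbel):

import AbelNN
import numpy as np

# Train a small MLP on random data, as in the drawing examples.
x_train = np.random.uniform(low=0.0, high=1.0, size=(100, 4))
y_train = np.random.uniform(low=0.0, high=1.0, size=(100, 2))

original = AbelNN.MLP(hidden=[3])
original.fit(x_train, y_train)
original.exportModel('exportedModels/', 'roundtrip_test')

restored = AbelNN.MLP()
restored.importModel('exportedModels/', 'roundtrip_test')

# Both instances should now produce identical outputs:
print(np.allclose(original.predict_proba(x_train), restored.predict_proba(x_train)))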

Library dependencies

  • copy
  • math
  • matplotlib
  • numpy
  • random
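
copy, math, and random are part of the Python standard library. The remaining two dependencies can be installed with pip:

pip install numpy matplotlib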