ConformalLayers: A non-linear sequential neural network with associative layers

Overview

ConformalLayers is a conformal embedding of sequential layers of Convolutional Neural Networks (CNNs) that makes operations like convolution, average pooling, dropout, flattening, padding, dilation, and stride associative. This associativity between layers considerably reduces the number of operations needed to perform inference in this type of neural network.

This repository is an implementation of ConformalLayers written in Python, using Minkowski Engine and PyTorch as backends. This implementation is a first step toward the use of activation functions, like ReSPro, that can be represented as tensors, depending on the geometry model.

Please cite our SIBGRAPI'21 paper if you use this code in your research. The paper presents a complete description of the library:

@InProceedings{sousa_et_al-sibgrapi-2021,
  author    = {Sousa, Eduardo V. and Fernandes, Leandro A. F. and Vasconcelos, Cristina N.},
  title     = {{C}onformal{L}ayers: a non-linear sequential neural network with associative layers},
  booktitle = {Proceedings of the 2021 34th SIBGRAPI Conference on Graphics, Patterns and Images},
  year      = {2021},
}

Please let Eduardo Vera Sousa (http://www.ic.uff.br/~eduardovera), Leandro A. F. Fernandes (http://www.ic.uff.br/~laffernandes), and Cristina Nader Vasconcelos (http://www2.ic.uff.br/~crisnv/index.php) know if you want to contribute to this project. Also, do not hesitate to contact them if you encounter any problems.

Contents:

  1. Requirements
  2. How to Install ConformalLayers
  3. Running Examples
  4. Running Unit Tests
  5. Documentation
  6. License

1. Requirements

Make sure that you have the following tools before attempting to use ConformalLayers.

Required tools:

  • Python
  • PyTorch
  • Minkowski Engine

Optional tools to use ConformalLayers:

  • A virtual environment to create an isolated workspace for the Python application.

  • Docker to create a container to run ConformalLayers.

2. How to Install ConformalLayers

No magic needed here. Just run:

python setup.py install
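
If the installation succeeds, the package should import without errors. A quick sanity check (the one-liner below is just an illustration):

python -c "import ConformalLayers as cl; print(cl.ConformalLayers)"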

3. Running Examples

The basic steps for running the examples of ConformalLayers look like this:

cd <ConformalLayers-dir>/Experiments/<experiment-name>

For Experiments I and II, each file refers to an experiment described in the main paper. Thus, to run BaseReSProNet with the FashionMNIST dataset, for example, all you have to do is:

python BaseReSProNet.py --dataset=FashionMNIST

The values that can be used for the dataset argument are

  • MNIST
  • FashionMNIST
  • CIFAR10

The loader for each dataset is defined in the Experiments/utils/datasets.py file.

Other arguments for the script files in Experiments I and II are listed below (an illustrative call combining them follows the list):

  • epochs (int value)
  • batch_size (int value)
  • learning_rate (float value)
  • optimizer (adam or rmsprop)
  • dropout (float value)
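
For instance, a hypothetical invocation combining these arguments (the flag names mirror the list above and the values are only illustrative):

python BaseReSProNet.py --dataset=CIFAR10 --epochs=30 --batch_size=64 --learning_rate=0.001 --optimizer=adam --dropout=0.2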

For Experiments III and IV, since we measure the amount of memory used, we use an external file to orchestrate the calls and make sure we have a clean environment for each iteration. The orchestrators are written in the files with the suffix _manager.py.

You can also run the file that corresponds to each architecture individually, without the orchestrator. To run the D3ModNetCL architecture, for example, just run:

python D3ModNetCL.py

The arguments for the non-orchestrated scripts in Experiments III and IV are listed below; an example call follows the list:

  • num_inferences (int value)
  • batch_size (int value)
  • depth (int value, Experiment III only)
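
For instance, a hypothetical call combining these arguments (the values are only illustrative):

python D3ModNetCL.py --num_inferences=100 --batch_size=32 --depth=3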

The files in the networks folder contain the description of each architecture used in our experiments and illustrate the usage of the classes and methods of our library.

4. Running Unit Tests

The basic steps for running the unit tests of ConformalLayers look like this:

cd <ConformalLayers-dir>/tests

To run all tests, simply run

python test_all.py

To run the tests for each module, run:

python test_<module_name>.py

5. Documentation

Here you find a brief description of the modules, classes, functions, and operators available to the user. The detailed documentation is not ready yet.

Modules

Here we present the main modules implemented in our framework. Most of the modules are used just like in PyTorch, so users with some background in that framework will benefit from this implementation. For users not familiar with PyTorch, the usage is still quite simple and intuitive.

  • Conv1d, Conv2d, Conv3d, ConvNd: Convolution operation implemented for n-D signals.
  • AvgPool1d, AvgPool2d, AvgPool3d, AvgPoolNd: Average pooling operation implemented for n-D signals.
  • BaseActivation: The abstract class for activation function layers. To extend the library, one shall implement this class.
  • ReSPro: The layer that corresponds to the ReSPro activation function, a linear function with non-linear behavior that can be encoded as a tensor. The non-linearity of this function is controlled by a parameter α that can be provided as an argument or inferred from the data.
  • Regularization: In this version, Dropout is the only regularization available. During the training phase, it randomly shuts down neurons with a probability p, passed as an argument to this module.

These modules are composed into a ConformalLayers object in a way very similar to the pure PyTorch approach. The ConformalLayers class plays the central role in this task, as you can see by comparing the code snippets below:

# This one is built with pure PyTorch
import torch.nn as nn

class D3ModNet(nn.Module):
    def __init__(self):
        super(D3ModNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3),
            nn.ReLU(),
            nn.AvgPool2d(kernel_size=2, stride=2),
            nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3),
            nn.ReLU(),
            nn.AvgPool2d(kernel_size=2, stride=2),
            nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3),
            nn.ReLU(),
            nn.AvgPool2d(kernel_size=2, stride=2),
        )
        self.fc1 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.shape[0], -1)
        x = self.fc1(x)
        return x

# This one is built with ConformalLayers
import torch.nn as nn
import ConformalLayers as cl

class D3ModNetCL(nn.Module):
    def __init__(self):
        super(D3ModNetCL, self).__init__()
        self.features = cl.ConformalLayers(
            cl.Conv2d(in_channels=3, out_channels=32, kernel_size=3),
            cl.ReSPro(),
            cl.AvgPool2d(kernel_size=2, stride=2),
            cl.Conv2d(in_channels=32, out_channels=32, kernel_size=3),
            cl.ReSPro(),
            cl.AvgPool2d(kernel_size=2, stride=2),
            cl.Conv2d(in_channels=32, out_channels=32, kernel_size=3),
            cl.ReSPro(),
            cl.AvgPool2d(kernel_size=2, stride=2),
        )
        self.fc1 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.shape[0], -1)
        x = self.fc1(x)
        return x
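
A minimal forward-pass sketch for either network, assuming 32x32 RGB inputs (e.g., CIFAR-10); with kernel_size=3 convolutions and three 2x2 poolings, that input size yields a flattened feature vector of 32 * 2 * 2 = 128 values, matching fc1:

import torch

model = D3ModNetCL()             # or D3ModNet()
x = torch.randn(8, 3, 32, 32)    # batch of 8 RGB images of size 32x32
logits = model(x)                # tensor of shape (8, 10)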

They look pretty much like the same code, right? That's because we've implemented ConformalLayers to make the transition as smooth as possible for PyTorch users. Most of the modules have almost the same method signatures as the ones provided by PyTorch.

Convolution

The convolution operation implemented in ConformalLayers by the modules ConvNd, Conv1d, Conv2d, and Conv3d is almost the same as the one implemented in PyTorch, except that we do not allow bias. This restriction comes from the way we build the tensor representation of the layers. Although we have a few ideas on how to include bias in this representation, they are not part of the current version. The parameters are detailed below and are originally described in the PyTorch convolution documentation page. The one exception is the padding_mode parameter, which is always set to 'zeros' in our implementation. A usage sketch follows the parameter list.

  • in_channels (int) – Number of channels in the input image

  • out_channels (int) – Number of channels produced by the convolution

  • kernel_size (int or tuple) – Size of the convolving kernel

  • stride (int or tuple, optional) – Stride of the convolution. Default: 1

  • padding (int, tuple or str, optional) – Padding added to both sides of the input. Default: 0

  • dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1

  • groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
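
A minimal sketch of a convolution layer using these parameters (the values are illustrative; note that there is no bias argument and that padding_mode is fixed to 'zeros'):

import ConformalLayers as cl

# 3-channel input, 16 output channels, 3x3 kernel, default stride/dilation/groups
conv = cl.Conv2d(in_channels=3, out_channels=16, kernel_size=3,
                 stride=1, padding=1, dilation=1, groups=1)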

Pooling

In our current implementation, we only support average pooling, which is implemented in the modules AvgPoolNd, AvgPool1d, AvgPool2d, and AvgPool3d. The parameter list, originally available in the PyTorch average pooling documentation page, is described below; a usage sketch follows the list:

  • kernel_size – the size of the window

  • stride – the stride of the window. Default value is kernel_size

  • padding – implicit zero padding to be added on both sides

  • ceil_mode – when True, will use ceil instead of floor to compute the output shape

  • count_include_pad – when True, will include the zero-padding in the averaging calculation
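
A minimal sketch of an average pooling layer using these parameters (the values are illustrative):

import ConformalLayers as cl

# 2x2 window sliding with stride 2, no padding
pool = cl.AvgPool2d(kernel_size=2, stride=2, padding=0,
                    ceil_mode=False, count_include_pad=True)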

Activation

Our activation module has the ReSPro activation function implemented natively. By using reflections, scalings, and projections on a hypersphere in higher dimensions, we created a non-linear, differentiable, associative activation function that can be represented in tensor form. It has only one parameter, which controls how close to linear or non-linear the curve is; a usage sketch follows the parameter description. More details are available in the main paper.

  • alpha (float, optional) - controls the non-linearity of the curve. If it is not provided, it's automatically estimated.
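
A minimal sketch of the two ways to instantiate the activation (the alpha value is illustrative):

import ConformalLayers as cl

act_fixed = cl.ReSPro(alpha=1.0)   # non-linearity set explicitly
act_auto = cl.ReSPro()             # alpha estimated automatically from the data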

Regularization

In the regularization module, Dropout is the only technique implemented in this version. It is based on the idea of randomly shutting down some neurons in order to prevent overfitting. It takes only the two parameters listed below (this list was originally available in the PyTorch documentation page); a usage sketch follows the list:

  • p – probability of an element to be zeroed. Default: 0.5

  • inplace – If set to True, will do this operation in-place. Default: False
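
A minimal sketch, assuming the regularization layer is exposed as cl.Dropout (the class name and the value of p are illustrative, not confirmed here):

import ConformalLayers as cl

# zeroes each element with probability 0.5 during training
drop = cl.Dropout(p=0.5, inplace=False)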

6. License

This software is licensed under the GNU General Public License v3.0. See the LICENSE file for details.
