AlgoVision - A Framework for Differentiable Algorithms and Algorithmic Supervision

Overview

This repository includes the official implementation of our NeurIPS 2021 paper "Learning with Algorithmic Supervision via Continuous Relaxations" (Paper @ ArXiv, Video @ YouTube).

algovision is a Python 3.6+ and PyTorch 1.9.0+ based library for making algorithms differentiable. It can be installed via:

pip install algovision

Applications include smoothly integrating algorithms into neural networks for algorithmic supervision, problem-specific optimization within an algorithm, and whatever your imagination allows. As algovision relies on PyTorch, it also supports CUDA, etc.

Check out the Documentation!

🌱 Intro

Deriving a loss from a smooth algorithm can be as easy as

from examples import get_bubble_sort
import torch

# Get an array (the first dimension is the batch dimension, which is always required)
array = torch.randn(1, 8, requires_grad=True)

bubble_sort = get_bubble_sort(beta=5)
result, loss = bubble_sort(array)

loss.backward()
print(array)
print(result)
print(array.grad)

Here, the loss is a sorting loss corresponding to the number of swaps in the bubble sort algorithm. But we can also define this algorithm from scratch:

from algovision import (
    Algorithm, Input, Output, Var, VarInt,                                          # core
    Let, LetInt, Print,                                                     # instructions
    Eq, NEq, LT, LEq, GT, GEq, CatProbEq, CosineSimilarity, IsTrue, IsFalse,  # conditions
    If, While, For,                                                   # control_structures
    Min, ArgMin, Max, ArgMax,                                                  # functions
)
import torch

bubble_sort = Algorithm(
    # Define the variables the input corresponds to
    Input('array'),
    # Declare and initialize all differentiable variables 
    Var('a',        torch.tensor(0.)),
    Var('b',        torch.tensor(0.)),
    Var('swapped',  torch.tensor(1.)),
    Var('loss',     torch.tensor(0.)),
    # Declare and initialize a hard integer variable (VarInt) for the control flow.
    # It can be defined in terms of a lambda expression. The required variables
    # are automatically inferred from the signature of the lambda expression.
    VarInt('n', lambda array: array.shape[1] - 1),
    # Start a relaxed While loop:
    While(IsTrue('swapped'),
        # Set `swapped` to 0 / False
        Let('swapped', 0),
        # Start an unrolled For loop. Corresponds to `for i in range(n):`
        For('i', 'n',
            # Set `a` to the `i`th element of `array`
            Let('a', 'array', ['i']),
            # Using an inplace lambda expression, we can include computations 
            # based on variables to obtain the element at position i+1. 
            Let('b', 'array', [lambda i: i+1]),
            # An If-Else statement with the condition a > b
            If(GT('a', 'b'),
                if_true=[
                    # Set the (i+1)-th element of array to a
                    Let('array', [lambda i: i + 1], 'a'),
                    # Set the i-th element of array to b
                    Let('array', ['i'], 'b'),
                    # Set swapped to 1 / True
                    Let('swapped', 1.),
                    # Increment the loss by 1 using a lambda expression
                    Let('loss', lambda loss: loss + 1.),
                ],
            ),
        ),
        # Decrement the hard integer variable n by 1
        LetInt('n', lambda n: n-1),
    ),
    # Define what the algorithm should return
    Output('array'),
    Output('loss'),
    # Set the inverse temperature beta
    beta=5,
)
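
The hand-built algorithm behaves exactly like the prebuilt one from the intro: calling it returns the declared outputs, and the loss can be backpropagated to the input.

array = torch.randn(1, 8, requires_grad=True)
result, loss = bubble_sort(array)
loss.backward()
print(array.grad)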

👾 Full Instruction Set

The full set of modules is:

from algovision import (
    Algorithm, Input, Output, Var, VarInt,                                          # core
    Let, LetInt, Print,                                                     # instructions
    Eq, NEq, LT, LEq, GT, GEq, CatProbEq, CosineSimilarity, IsTrue, IsFalse,  # conditions
    If, While, For,                                                   # control_structures
    Min, ArgMin, Max, ArgMax,                                                  # functions
)

Algorithm is the main class; Input and Output define arguments and return values; Var defines differentiable variables, and VarInt defines non-differentiable integer variables. Eq, LT, etc. are relaxed conditions for If and While, which are the respective control structures, while For defines loops of bounded, fixed length, which are unrolled. Let sets a differentiable variable; LetInt sets a hard integer variable. Note that hard integer variables should only be used if they are independent of the input values, but they may depend on the input shape (e.g., for reducing the number of iterations after each traversal of a For loop). Print prints for debugging purposes. Min, ArgMin, Max, and ArgMax return the element-wise min/max/argmin/argmax of a list of tensors (of equal shape).
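
As a small illustration of how these modules compose, here is a minimal sketch (not from the paper) of a relaxed running maximum. It uses only constructs introduced above; the return convention for a single Output is assumed to mirror the bubble sort example.

from algovision import (
    Algorithm, Input, Output, Var, VarInt, Let, If, For, GT,
)
import torch

running_max = Algorithm(
    Input('array'),
    Var('a',       torch.tensor(0.)),
    # Illustrative lower bound for the running maximum; choose a value
    # safely below your data's range.
    Var('cur_max', torch.tensor(-10.)),
    VarInt('n', lambda array: array.shape[1]),
    For('i', 'n',
        # a = array[i]
        Let('a', 'array', ['i']),
        If(GT('a', 'cur_max'),
            if_true=[Let('cur_max', 'a')],
        ),
    ),
    Output('cur_max'),
    beta=5,
)

smooth_max = running_max(torch.randn(2, 6, requires_grad=True))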

λ Lambda Expressions

Key to defining an algorithm are lambda expressions (see here for a reference). They allow defining anonymous functions and therefore expressing computations in-place. In most cases in algovision, a value can be written in terms of a lambda expression. The names of the required variables are inferred from the signature of the expression. For example, lambda x: x**2 takes the variable named x and returns its square at the location where the expression is written.

Let('z', lambda x, y: x**2 + y) corresponds to the regular line of code z = x**2 + y. This also allows inserting complex external functions, including neural networks, as part of the lambda expression. Assuming net is a neural network, one can write Let('y', lambda x: net(x)) (corresponding to y = net(x)).
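
For instance, a sketch along these lines (net is instantiated here as a small torch.nn module purely for illustration; the shapes are assumptions):

import torch
from torch import nn
from algovision import Algorithm, Input, Output, Var, Let

# An illustrative stand-in for any external function or network.
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

apply_net = Algorithm(
    Input('x'),
    Var('y', torch.tensor(0.)),
    # `net` is captured from the enclosing scope; only `x` is inferred
    # as an algorithm variable from the lambda's signature.
    Let('y', lambda x: net(x).squeeze(-1)),
    Output('y'),
    beta=5,
)

y = apply_net(torch.randn(4, 8))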

Let

Let is a very flexible instruction. The following table shows its use cases.

| AlgoVision | Python | Description |
|---|---|---|
| Let('a', 'x') | a = x | Variable a is set to the value of variable x. |
| Let('a', lambda x: x**2) | a = x**2 | As soon as we compute anything on the right-hand side of the equation, we need to write it as a lambda expression. |
| Let('a', 'array', ['i']) | a = array[i] | Indexing on the right-hand side requires an additional list argument after the second argument. |
| Let('a', lambda array, i: array[:, i]) | a = array[i] | Equivalent to the row above: indexing can also be done manually inside a lambda expression. Note that in this case, the batch dimension has to be written explicitly. |
| Let('a', 'array', ['i', lambda j: j+1]) | a = array[i, j+1] | Multiple indices and lambda expressions are also supported. |
| Let('a', 'array', [None, slice(0, None, 2)]) | a = array[:, 0::2] | None and slices are also supported. |
| Let('a', ['i'], 'x') | a[i] = x | Indexing can also be done on the left-hand side of the equation. |
| Let('a', ['i'], 'x', ['j']) | a[i] = x[j] | ...or on both sides. |
| Let(['a', 'b'], lambda x, y: (x+y, x-y)) | a, b = x+y, x-y | Multiple return values are supported. |

In its simplest form, Let takes two arguments: a string naming the variable the result is written to, and the value, which may be expressed via a lambda expression.

If the lambda expression returns multiple values, e.g., because a complex function with two return values is called, the left argument can be a list of strings. That is, Let(['a', 'b'], lambda x, y: (x+y, x-y)) corresponds to a, b = x+y, x-y.

Let also supports indexing. This is denoted by an additional list argument after the left and/or the right argument. For example, Let('a', 'array', ['i']) corresponds to a = array[i], while Let('array', ['i'], 'b') corresponds to array[i] = b. Let('array', ['i'], 'array', ['j']), which corresponds to array[i] = array[j], is also supported.

Note that indexing can also be expressed through lambda expressions. For example, Let('a', 'array', ['i']) is equivalent to Let('a', lambda array, i: array[:, i]). Note how in this case the batch dimension has to be explicitly taken into account ([:, ]). Relaxed indexing on the right-hand side is only supported through lambda expressions due to its complexity. Relaxed indexing on the left-hand side is supported if exactly one probability weight tensor is in the list (e.g., Let('array', [lambda x: get_weights(x)], 'a')).

LetInt only supports setting the variable to an integer (Python int) or list of integers (as well as the same type via lambda expressions). Note that hard integer variables should only be used if they are independent of the input values, but they may depend on the input shape.
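
Putting a few of the Let forms from the table together, here is a minimal sketch (variable names and shapes are illustrative); it reads the last two elements of the input and returns their sum and difference:

import torch
from algovision import Algorithm, Input, Output, Var, VarInt, Let

demo = Algorithm(
    Input('array'),
    Var('a', torch.tensor(0.)),
    Var('b', torch.tensor(0.)),
    Var('s', torch.tensor(0.)),
    Var('d', torch.tensor(0.)),
    # Index of the second-to-last element, derived from the input shape
    # (integer variables may depend on shapes, not on values).
    VarInt('j', lambda array: array.shape[1] - 2),
    Let('a', 'array', ['j']),                      # a = array[j]
    Let('b', 'array', [lambda j: j + 1]),          # b = array[j+1]
    Let(['s', 'd'], lambda a, b: (a + b, a - b)),  # s, d = a+b, a-b
    Output('s'),
    Output('d'),
    beta=5,
)

s, d = demo(torch.randn(3, 5))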

If you need help implementing your differentiable algorithm, you may schedule an appointment. This will also help me improve the documentation and usability.

🧪 Experiments

The experiments can be found in the experiments folder. Additional experiments will be added soon.

🔬 Sorting Supervision

The sorting supervision experiment can be run with

python experiments/train_sort.py

or by checking out this Colab notebook.

📖 Citing

If you used our library, please cite it as

@inproceedings{petersen2021learning,
  title={{Learning with Algorithmic Supervision via Continuous Relaxations}},
  author={Petersen, Felix and Borgelt, Christian and Kuehne, Hilde and Deussen, Oliver},
  booktitle={Conference on Neural Information Processing Systems (NeurIPS)},
  year={2021}
}

📜 License

algovision is released under the MIT license. See LICENSE for additional details.
