Tutorial for surrogate gradient learning in spiking neural networks

Overview

SpyTorch

A tutorial on surrogate gradient learning in spiking neural networks

Version: 0.4

This repository contains tutorial files to get you started with the basic ideas of surrogate gradient learning in spiking neural networks using PyTorch.
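In a nutshell, the trick is to treat the spike as a hard threshold in the forward pass but to replace its ill-defined derivative with a smooth surrogate in the backward pass. Below is a minimal sketch of this idea using a fast-sigmoid surrogate similar to the one implemented in the notebooks (the scale constant and the exact surrogate shape are free choices, not fixed parts of the method):

    import torch

    class SurrGradSpike(torch.autograd.Function):
        """Heaviside spike nonlinearity with a fast-sigmoid surrogate gradient."""
        scale = 100.0  # steepness of the surrogate (a free choice)

        @staticmethod
        def forward(ctx, input):
            ctx.save_for_backward(input)
            out = torch.zeros_like(input)
            out[input > 0] = 1.0          # spike wherever the input crosses zero
            return out

        @staticmethod
        def backward(ctx, grad_output):
            input, = ctx.saved_tensors
            # derivative of a fast sigmoid, used in place of the step's derivative
            grad = grad_output / (SurrGradSpike.scale * torch.abs(input) + 1.0) ** 2
            return grad

    spike_fn = SurrGradSpike.apply  # use like any differentiable function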

You can find a brief introductory video accompanying these notebooks here: https://youtu.be/xPYiAjceAqU

Feedback and contributions are welcome.

For more information on surrogate gradient learning please refer to:

Neftci, E.O., Mostafa, H., and Zenke, F. (2019). Surrogate Gradient Learning in Spiking Neural Networks: Bringing the Power of Gradient-Based Optimization to Spiking Neural Networks. IEEE Signal Processing Magazine 36, 51–63. https://ieeexplore.ieee.org/document/8891809 (preprint: https://arxiv.org/abs/1901.09948)

Also see https://github.com/surrogate-gradient-learning

Copyright and license

Copyright 2019-2020 Friedemann Zenke, https://fzenke.net

This work is licensed under a Creative Commons Attribution 4.0 International License. http://creativecommons.org/licenses/by/4.0/

Comments
  • resetting with "out" instead of "rst"?

    This is a comment, not an issue.

    Hi Friedemann,

    First of all, thanks a lot for these great tutorials; I've enjoyed playing with them a lot, and I've learned a lot :-) One question: in the run_snn function, why do you bother constructing the "rst" tensor? Why don't you just subtract the "out" tensor, which also contains the output spikes? I've tried it (sketched below), and it seems to work. Just curious.
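    A sketch of what I mean, assuming the subtractive reset and the variable names from run_snn (the exact membrane update in the tutorial may differ slightly):

      # loop over time, reusing "out" for the reset instead of building "rst"
      for t in range(nb_steps):
          mthr = mem - 1.0
          out = spike_fn(mthr)                     # output spikes, 1.0 where mthr > 0
          new_mem = beta * mem + syn[:, t] - out   # subtracting the spikes resets the membrane
          mem = new_mem
          spk_rec.append(out)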

    Best,
    Tim

    question 
    opened by tmasquelier 8
  • Problem in SpyTorchTutorial2

    Hello,

    It was a very nice and interesting tutorial, thank you for preparing it...

    Tutorial 1 ran without any problems, but in Tutorial 2 some dtype problems occurred. After fixing them, the training process was very slow on a GTX 980 (I have run some very deep models on this configuration). Could you please describe your setup, along with the training time and response time?

    opened by ghost 6
  • Spike times shifted

    I have the impression that the spike recordings are shifted by one time step in all tutorials. Could you maybe check if this is indeed the case?

    From my understanding, time step 0 is recorded twice for the spikes, once during initialisation

  mem = torch.zeros((batch_size, nb_hidden), device=device, dtype=dtype)
  spk_rec = [mem]  # entry 0 is the all-zero initial state, not a simulated step
    

    and once within the simulation of time step 0:

  for t in range(nb_steps):
      mthr = mem-1.0
      out = spike_fn(mthr)
      ...
      spk_rec.append(out)  # the spikes of step t land at index t+1
    

    As a result, the indices appear shifted when comparing

    print(torch.nonzero((mem_rec-1.0) > 0.0))
    print(torch.nonzero(spk_rec))
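    A possible fix (just a sketch, reusing the names from above) would be to record spikes only inside the loop, so that entry t corresponds to time step t:

      spk_rec = []                          # no pre-seeded initial entry
      for t in range(nb_steps):
          mthr = mem - 1.0
          out = spike_fn(mthr)
          # ... membrane update as in the tutorial ...
          spk_rec.append(out)               # entry t now holds the spikes of step t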
    

    Thanks, Simon

    opened by smonsays 4
  • Software/Machine description available?

    Hey Friedemann,

    thanks for making the examples available, they look very helpful. However, to make them fully reproducible I think that some additional information regarding the "technical dependencies" is needed.

    In particular, it would help to have the list of software packages used (incl. version and build-variant information), plus a specification of the machine hardware (CPU architecture, GPUs).

    Preferably, the former could be expressed as a recipe for constructing a container (a Dockerfile or, for better HPC compatibility, a Singularity recipe), maybe even using an explicitly versioning package manager like spack. Even a minimal version dump, as sketched below, would already help.
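    A minimal sketch of such a version dump (nothing SpyTorch-specific, just standard PyTorch introspection calls):

      import platform
      import torch

      # record the basic software and hardware environment
      print("platform:", platform.platform())
      print("python  :", platform.python_version())
      print("torch   :", torch.__version__)
      print("cuda    :", torch.version.cuda)
      if torch.cuda.is_available():
          print("gpu     :", torch.cuda.get_device_name(0))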

    Cheers, Eric

    opened by muffgaga 3
  • Dataset never decompressed

    Hello,

    I believe I ran into a possible issue here. Due to line 37, the check in line 38 will always be false if one hasn't already got the uncompressed dataset.

    https://github.com/fzenke/spytorch/blob/9e91eceaf53f17be9e95a3743164224bdbb086bb/notebooks/utils.py#L35-L42

    If I change line 37 to hdf5_file_path = gz_file_path[:-3], it works for me (see the sketch below for the surrounding logic).
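    For context, a sketch of the download-and-decompress pattern I would expect there (hypothetical names, mirroring the variables in utils.py):

      import gzip
      import os
      import shutil

      # derive the target path by stripping the ".gz" suffix
      hdf5_file_path = gz_file_path[:-3]
      if not os.path.isfile(hdf5_file_path):
          # decompress the archive once; later runs reuse the extracted file
          with gzip.open(gz_file_path, "rb") as f_in:
              with open(hdf5_file_path, "wb") as f_out:
                  shutil.copyfileobj(f_in, f_out)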

    Best, Aaron

    opened by AaronSpieler 1
  • propagation delay

    Hi Zenke, I have a question about the SNN model. If I feed a spike image to an SNN with L layers at time step n, the output of the last layer will only be affected by that input at time step n + L - 1 (e.g., with L = 3 layers, input at step n first reaches the output at step n + 2). In deep networks this delay should be considered, because it increases the total number of time steps required.

    opened by yizx6 1
  • Compute recurrent contribution from spikes

    Hey Friedemann,

    thank you for the very comprehensive tutorial! I have a question about the way the recurrence is computed in Tutorial 4. If I understand the equation for the current dynamics correctly, the recurrence should be computed from the spiking output:

    mthr = mem-1.0
    out = spike_fn(mthr)                                            # spikes at step t
    h1 = h1_from_input[:,t] + torch.einsum("ab,bc->ac", (out, v1))  # recurrence from spikes
    

    Instead, Tutorial 4 keeps a separate hidden state that bypasses the spike function:

    h1 = h1_from_input[:,t] + torch.einsum("ab,bc->ac", (h1, v1))   # recurrence from the analog hidden state
    

    Is this done deliberately? Judging from simulating a few epochs, the two versions seem to perform similarly.

    Thank you,

    Simon

    opened by smonsays 1
  • maybe simplification

    I don't understand why the 'rst' variable exists; it seems to always equal 'out'. Changing it to rst = out yields the same results...

    def spike_fn(x):
        out = torch.zeros_like(x)
        out[x > 0] = 1.0                      # 1.0 wherever the input exceeds zero
        return out
    ...
    # Here we loop over time
    for t in range(nb_steps):
        mthr = mem-1.0
        out = spike_fn(mthr)                  # 1.0 where mthr > 0
        rst = torch.zeros_like(mem)
        c = (mthr > 0)
        rst[c] = torch.ones_like(mem)[c]      # also 1.0 where mthr > 0, i.e. rst == out
    
    opened by colinator 1
  • Issue in running Tutorial-4

    When I am running the following piece of code in Tutorial-4:

    loss_hist = train(x_train, y_train, lr=2e-4, nb_epochs=nb_epochs)

    I am getting the following error: [error screenshot not preserved]

    Could you please suggest how to resolve this issue?

    opened by paglabhola 0
Releases: v0.3

Owner: Friedemann Zenke