Autolfads-tf2 - A TensorFlow 2.0 implementation of Latent Factor Analysis via Dynamical Systems (LFADS) and AutoLFADS

Overview

autolfads-tf2

A TensorFlow 2.0 implementation of LFADS and AutoLFADS.

Installation

Clone the autolfads-tf2 repo, then create and activate a conda environment with Python 3.7. Use conda to install cudatoolkit and cudnn, and pip install the lfads_tf2 and tune_tf2 packages with the -e (editable) flag. This allows you to import these packages anywhere when your environment is activated, while also allowing you to edit the code directly in the repo.

git clone git@github.com:snel-repo/autolfads-tf2.git
cd autolfads-tf2
conda create --name autolfads-tf2 python=3.7
conda activate autolfads-tf2
conda install -c conda-forge cudatoolkit=10.0
conda install -c conda-forge cudnn=7.6
pip install -e lfads-tf2
pip install -e tune-tf2

Usage

Training single models with lfads_tf2

The first step in training an LFADS model is setting the hyperparameter (HP) values. All HPs, their descriptions, and their default values are given in the defaults.py module. Note that these default values are unlikely to work well on your dataset. To overwrite any or all default values, define new values in a YAML file (an example is provided in configs/lorenz.yaml).
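
For illustration, an override file might look like the sketch below. The key names here are assumptions for the sake of example; treat defaults.py and configs/lorenz.yaml as the authoritative reference for the real structure:

```yaml
# Hypothetical HP override file (key names are illustrative assumptions;
# see defaults.py and configs/lorenz.yaml for the real structure).
TRAIN:
  DATA:
    DIR: ~/data/lorenz         # directory containing the dataset to model
  MODEL_DIR: ~/models/lorenz   # where logs, metrics, and checkpoints are saved
  BATCH_SIZE: 128
MODEL:
  FAC_DIM: 10                  # number of latent factors
```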

The lfads_tf2.models.LFADS constructor takes as input the path to the configuration file that overwrites default HP values. The path to the modeled dataset is also specified in the config, so LFADS will load the dataset automatically.

The train function runs the training loop until the validation loss converges or another stopping criterion is met. During training, the model saves various outputs in the folder specified by MODEL_DIR: console output goes to train.log, metrics to train_data.csv, and checkpoints to lfads_ckpts.

After training, the sample_and_average function can be used to compute firing rate estimates and other intermediate model outputs and save them to posterior_samples.h5 in MODEL_DIR.

We provide a simple example in example_scripts/train_lfads.py.
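
That script boils down to a few calls. The sketch below is a minimal version of the same workflow; the cfg_path keyword name is an assumption, so check the example script for the exact usage:

```python
from lfads_tf2.models import LFADS

# Build the model from a config that overrides default HPs; the config
# also points at the dataset and MODEL_DIR, so data loads automatically.
model = LFADS(cfg_path="configs/lorenz.yaml")

# Train until the validation loss converges or another stopping criterion
# is met; train.log, train_data.csv, and lfads_ckpts land in MODEL_DIR.
model.train()

# Sample and average to estimate firing rates and other intermediate
# outputs, saved to posterior_samples.h5 in MODEL_DIR.
model.sample_and_average()
```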

Training AutoLFADS models with tune_tf2

The autolfads-tf2 framework uses ray.tune to distribute models over a computing cluster, monitor model performance, and exploit high-performing models and their HPs.

Setting up a ray cluster

If you'll be running AutoLFADS on a single machine, you can skip this section. If you'll be running across multiple machines, you must initialize the cluster using the instructions below before you can submit jobs via the Python API.

Fill in the fields indicated by <>'s in ray_cluster_template.yaml, and save the file somewhere accessible. Ensure that a range of ports is open for communication on all machines that you intend to use (e.g. ports 10000-10099 in the template). In your autolfads-tf2 environment, start the cluster with ray up <NEW_CLUSTER_CONFIG>. The cluster may take up to a minute to start. You can verify that all machines have joined the cluster by checking that every IP address is printed when running example_scripts/ray_test.py.
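
For a quick end-to-end check from Python, a script in the spirit of example_scripts/ray_test.py can collect the IP address of every node that picks up work. This sketch is an assumption about that script's contents, not a copy:

```python
import socket
import time

import ray

# Connect to the cluster that `ray up` started.
ray.init(address="auto")

@ray.remote
def get_ip():
    # A short sleep spreads tasks across nodes rather than letting
    # one fast worker grab them all.
    time.sleep(0.1)
    return socket.gethostbyname(socket.gethostname())

# Every machine in the cluster should show up in this set.
print(set(ray.get([get_ip.remote() for _ in range(100)])))
```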

Starting an AutoLFADS run

To run AutoLFADS, copy the run_pbt.py script and adjust the paths and hyperparameters to your needs. Make sure to use only as many workers as can fit on the machine(s) at once. If you want to run across multiple machines, set SINGLE_MACHINE = False in run_pbt.py. To start your PBT run, run run_pbt.py. When the run is complete, the best model is copied to a best_model folder in your PBT run folder, then automatically sampled and averaged, with all outputs saved to posterior_samples.h5.
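
The settings you are most likely to touch sit at the top of the copied script. The excerpt below is hypothetical: only SINGLE_MACHINE is named in this README, and the other names and values are illustrative assumptions:

```python
# Hypothetical head of a copied run_pbt.py; only SINGLE_MACHINE is named
# in this README, the other settings are illustrative assumptions.
SINGLE_MACHINE = False             # False: distribute over the ray cluster
NUM_WORKERS = 10                   # at most as many as fit on the machine(s)
RUN_DIR = "~/runs/autolfads-pbt"   # PBT run folder; best_model is copied here
CFG_PATH = "configs/my_data.yaml"  # base HPs that PBT will perturb
```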

References

Keshtkaran MR, Sedler AR, Chowdhury RH, Tandon R, Basrai D, Nguyen SL, Sohn H, Jazayeri M, Miller LE, Pandarinath C. A large-scale neural network training framework for generalized estimation of single-trial population dynamics. bioRxiv. 2021 Jan 1.

Keshtkaran MR, Pandarinath C. Enabling hyperparameter optimization in sequential autoencoders for spiking neural data. Advances in Neural Information Processing Systems. 2019; 32.

Comments
  • Update lfads-tf2 dependencies for Google Colab compatibility

    Summary of changes to setup.py

    • Change pandas==1.0.0 to pandas==1.* to avoid a dependency conflict with google-colab
    • Add PyYAML>=5.1 so that yaml.full_load works in lfads-tf2.
    opened by yahiaali
  • Are more recent versions of tensorflow/CUDA supported by the package?

    Right now the package supports TF 2.0 and CUDA 10.0, which are more than 3 years old. Is there support planned or already established for more recent TensorFlow and CUDA versions?

    Thanks!

    opened by stes
  • Error: No 'git' repo detected for 'lfads_tf2'

    Hello, I am having this issue. I have followed all the installation instructions, and I was wondering why this issue would come up. autolfads-tf2 was cloned using git and sits inside the git folder, but it seems like train_lfads.py is not loading data. I am using Windows 10.

    Thank you so much in advance!

    opened by jinoh5
  • Add warnings and assertion to chop functions for bad overlap

    Add warnings and an assertion to the chop functions when the requested overlap is greater than half of the window length.

    Addresses https://github.com/snel-repo/autolfads-tf2/issues/2

    opened by raeedcho
  •  `merge_chops` is unable to merge when the requested overlap is more than half of the window length

    Without thinking a whole lot about it, I chopped data with window length 100 and overlap 80, since this would leave at most 20 points of unmodeled data at the end of the trials I'm trying to model. The chopping seems to work totally fine, but when merging the chops back together, the code seems to assume that the overlap is at most half the window length, and the math to put the chops back together breaks down in weird ways, leading to duplicated data in the final array.

    On further thought, it makes sense to limit the overlap to at most half of the window length, since otherwise data from more than two chops would have to be integrated to merge everything. If that is the thought process, I think it would be a good idea to put an assertion in both functions that this is the case (or at least an assertion in merge_chops and a warning in chop_data, since chopping technically works fine).

    If instead it makes sense to merge chops with overlap greater than half the window length, then the merge_chops function needs to be reworked to integrate across more than two chops (a simplified sketch of the arithmetic appears after this comment list).

    opened by raeedcho
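
As context for the two items above, here is a simplified stand-in for the chopping arithmetic (not the library's implementation): with stride = window - overlap, each interior timepoint is covered by roughly window / stride chops, so keeping the overlap at most half the window is exactly the condition that no point belongs to more than two chops.

```python
import numpy as np

def chop_data(data, window, overlap):
    """Simplified stand-in for the chop function (not lfads_tf2's code)."""
    stride = window - overlap
    # If overlap > window / 2, then stride < window / 2 and each interior
    # timepoint falls in more than two chops, which a pairwise merge
    # cannot undo without duplicating data.
    assert overlap <= window // 2, "overlap must be at most half the window"
    starts = range(0, len(data) - window + 1, stride)
    return np.stack([data[s : s + window] for s in starts])

# window=100, overlap=80 gives stride=20, so five chops would cover each
# interior timepoint; the assertion fires instead of merging incorrectly.
```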
Releases

v0.1

Owner

Systems Neural Engineering Lab
Emory University and Georgia Institute of Technology