Code Repository for Liquid Time-Constant Networks (LTCs)

Overview

Liquid Time-Constant Networks (LTCs)

[Update] A PyTorch version is available in our sister repository: https://github.com/mlech26l/keras-ncp

This is the official repository for the LTC networks described in the paper https://arxiv.org/abs/2006.04439. It allows you to train continuous-time models with backpropagation through time (BPTT). The available continuous-time models are:

| Model | Reference |
|---|---|
| Liquid time-constant networks (LTCs) | https://arxiv.org/abs/2006.04439 |
| Neural ODEs | https://papers.nips.cc/paper/7892-neural-ordinary-differential-equations.pdf |
| Continuous-time RNNs (CT-RNNs) | https://www.sciencedirect.com/science/article/abs/pii/S089360800580125X |
| Continuous-time gated recurrent units (CT-GRUs) | https://arxiv.org/abs/1710.04110 |
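
For reference, and up to notational differences with the paper, the LTC hidden state x(t) evolves according to the ODE below, where I(t) is the input, τ is a learned time constant, A is a learned bias vector, and f is a bounded nonlinearity with parameters θ:

    \frac{d\mathbf{x}(t)}{dt} = -\left[\frac{1}{\tau} + f\big(\mathbf{x}(t), \mathbf{I}(t), t, \theta\big)\right] \odot \mathbf{x}(t) + f\big(\mathbf{x}(t), \mathbf{I}(t), t, \theta\big) \odot A

The time constant is "liquid" in the sense that the effective decay rate 1/τ + f(·) depends on the input, so the system's time constant varies with the data instead of being fixed.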

Prerequisites

All models were implemented and tested with TensorFlow 1.14.0 and Python 3 on Ubuntu 16.04 and 18.04 machines. All of the following steps assume that they are executed under these conditions.
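
If you are setting up a fresh environment, one way (not prescribed by this repository) to obtain a matching TensorFlow build is

pip3 install tensorflow==1.14.0

Keep in mind that TensorFlow 1.14 wheels are only published for Python 3.7 and older, so a correspondingly old interpreter (for example inside a virtualenv or container) is assumed.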

Preparation

First we have to download all datasets by running

source download_datasets.sh

This script creates a data folder in which all downloaded datasets are stored.

Training and evaluating the models

There is exactly one python module per dataset:

  • Hand gesture segmentation: gesture.py
  • Room occupancy detection: occupancy.py
  • Human activity recognition: har.py
  • Traffic volume prediction: traffic.py
  • Ozone level forecasting: ozone.py

Each script accepts the following four arguments (an example invocation using all of them is shown after the list):

  • --model: lstm | ctrnn | ltc | ltc_rk | ltc_ex
  • --epochs: number of training epochs (default 200)
  • --size: number of hidden RNN units (default 32)
  • --log: interval of how often to evaluate validation metric (default 1)
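
For instance, spelling out all four flags with their default values while selecting the LTC model on the gesture task:

python3 gesture.py --model ltc --epochs 200 --size 32 --log 1

The ltc_rk and ltc_ex options appear to train the same LTC model with alternative ODE solvers (Runge-Kutta and explicit Euler, respectively), while plain ltc uses the default solver; this reading is based on the flag names, so consult the model construction code in each script to confirm.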

Each script trains the specified model for the given number of epochs and evaluates the validation performance every --log epochs. At the end of training, the best-performing checkpoint is restored and the model is evaluated on the test set. All results are stored in the results folder by appending a row to a CSV file.

For example, we can train and evaluate the CT-RNN by executing

python3 har.py --model ctrnn

After the script finishes, a file results/har/ctrnn_32.csv should have been created, containing the following columns (a short sketch for inspecting such a file follows the list):

  • best epoch: Epoch number that achieved the best validation metric
  • train loss: Training loss achieved at the best epoch
  • train accuracy: Training metric achieved at the best epoch
  • valid loss: Validation loss achieved at the best epoch
  • valid accuracy: Best validation metric achieved during training
  • test loss: Loss on the test set
  • test accuracy: Metric on the test set
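
Below is a minimal, hypothetical sketch (not part of this repository) for inspecting such a results file. It assumes the file is comma-separated with a header row matching the columns above; if the scripts write a different delimiter, adjust the reader accordingly.

    # inspect_results.py -- print the most recent row appended to a results file.
    # Hypothetical helper; assumes a comma-separated file with a header row
    # matching the columns documented above.
    import csv

    def latest_result(path="results/har/ctrnn_32.csv"):
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))
        if not rows:
            raise ValueError("no results recorded in " + path)
        last = rows[-1]  # each completed training run appends one row
        for column, value in last.items():
            print(column + ": " + value)
        return last

    if __name__ == "__main__":
        latest_result()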

Hyperparameters

| Parameter | Value | Description |
|---|---|---|
| Minibatch size | 16 | Number of training samples over which each gradient-descent update is computed |
| Learning rate | 0.001/0.02 | 0.01-0.02 for LTC, 0.001 for all other models |
| Hidden units | 32 | Number of hidden units of each model |
| Optimizer | Adam | See (Kingma and Ba, 2014) |
| beta_1 | 0.9 | Parameter of the Adam method |
| beta_2 | 0.999 | Parameter of the Adam method |
| epsilon | 1e-08 | Epsilon-hat parameter of the Adam method |
| Number of epochs | 200 | Maximum number of training epochs |
| BPTT length | 32 | Backpropagation-through-time length in time steps |
| ODE solver steps | 1/6 | Relative to the input sampling period |
| Validation evaluation interval | 1 | Interval of training epochs at which the validation metrics are evaluated |
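
As an illustration, the sketch below shows how the Adam settings in the table map onto the TensorFlow 1.x API that this codebase targets; it is an assumption-laden example, and the training scripts may construct their optimizer differently.

    # Hypothetical optimizer setup mirroring the hyperparameter table above.
    import tensorflow as tf  # TensorFlow 1.14.x

    LEARNING_RATE = 0.001  # use 0.01-0.02 for the LTC variants instead

    optimizer = tf.train.AdamOptimizer(
        learning_rate=LEARNING_RATE,
        beta1=0.9,
        beta2=0.999,
        epsilon=1e-08,
    )
    # train_step = optimizer.minimize(loss)  # 'loss' is defined by the chosen model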

Trajectory Length Analysis

Run the main.m file to obtain the trajectory-length results; the desired setting can be tuned directly in the code.

Owner

Ramin Hasani