sequitur

sequitur is a library that lets you create and train an autoencoder for sequential data in just two lines of code. It implements three autoencoder architectures in PyTorch, along with a predefined training loop. sequitur is ideal for working with sequential data ranging from single and multivariate time series to videos, and is geared toward those who want to get started quickly with autoencoders.

import torch
from sequitur.models import LINEAR_AE
from sequitur import quick_train

train_seqs = [torch.randn(4) for _ in range(100)] # 100 sequences of length 4
encoder, decoder, _, _ = quick_train(LINEAR_AE, train_seqs, encoding_dim=2, denoise=True)

encoder(torch.randn(4)) # => torch.tensor([0.19, 0.84])

Each autoencoder learns to represent input sequences as lower-dimensional, fixed-size vectors. This can be useful for finding patterns among sequences, clustering sequences, or converting sequences into inputs for other algorithms.
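For instance, here is a rough sketch of clustering sequences by their encodings (scikit-learn's KMeans and the cluster count are illustrative choices, not part of sequitur):

import torch
import numpy as np
from sklearn.cluster import KMeans
from sequitur.models import LSTM_AE
from sequitur import quick_train

train_set = [torch.randn(10, 3) for _ in range(100)]  # 100 sequences of 3D vectors
encoder, _, encodings, _ = quick_train(LSTM_AE, train_set, encoding_dim=7)

# Stack the fixed-size encodings into a [100, 7] array and cluster them
X = np.stack([z.detach().numpy() for z in encodings])
labels = KMeans(n_clusters=5).fit_predict(X)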

Installation

Requires Python 3.X and PyTorch 1.2.X

You can install sequitur with pip:

$ pip install sequitur

Getting Started

1. Prepare your data

First, you need to prepare a set of example sequences to train an autoencoder on. This training set should be a list of torch.Tensors, where each tensor has shape [num_elements, *num_features]. For example, if each example in your training set is a sequence of 10 5x5 matrices, it would be a tensor with shape [10, 5, 5].
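As a concrete sketch of that shape convention:

import torch

# 100 training examples, each a sequence of 10 5x5 matrices
train_set = [torch.randn(10, 5, 5) for _ in range(100)]
print(train_set[0].shape)  # => torch.Size([10, 5, 5])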

2. Choose an autoencoder

Next, you need to choose an autoencoder model. If you're working with sequences of numbers (e.g. time series) or 1D vectors (e.g. word vectors), then you should use the LINEAR_AE or LSTM_AE model. For sequences of 2D matrices (e.g. videos) or 3D matrices (e.g. fMRI scans), you'll want to use CONV_LSTM_AE. Each model is a PyTorch module, and can be imported like so:

from sequitur.models import CONV_LSTM_AE

More details about each model are in the "Models" section below.

3. Train the autoencoder

From here, you can either initialize the model yourself and write your own training loop, or import the quick_train function and plug in the model, training set, and desired encoding size, like so:

import torch
from sequitur.models import CONV_LSTM_AE
from sequitur import quick_train

train_set = [torch.randn(10, 5, 5) for _ in range(100)]
encoder, decoder, _, _ = quick_train(CONV_LSTM_AE, train_set, encoding_dim=4)

After training, quick_train returns the encoder and decoder models, which are PyTorch modules that can encode and decode new sequences. These can be used like so:

x = torch.randn(10, 5, 5)
z = encoder(x) # Tensor with shape [4]
x_prime = decoder(z) # Tensor with shape [10, 5, 5]
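If you'd rather write your own training loop, a minimal sketch might look like the following, assuming model.encoder and model.decoder behave as documented in the "Models" section below (the hyperparameters mirror quick_train's defaults):

import torch
from sequitur.models import LSTM_AE

model = LSTM_AE(input_dim=3, encoding_dim=7)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = torch.nn.MSELoss()

train_set = [torch.randn(10, 3) for _ in range(100)]
for epoch in range(50):
    for x in train_set:
        optimizer.zero_grad()
        z = model.encoder(x)                    # Encode to a vector of shape [7]
        x_prime = model.decoder(z, seq_len=10)  # Reconstruct to shape [10, 3]
        loss = criterion(x_prime, x)            # Mean-squared reconstruction error
        loss.backward()
        optimizer.step()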

API

Training your Model

quick_train(model, train_set, encoding_dim, verbose=False, lr=1e-3, epochs=50, denoise=False, **kwargs)

Lets you train an autoencoder with just one line of code. Useful if you don't want to create your own training loop. Training involves learning a vector encoding of each input sequence, reconstructing the original sequence from the encoding, and calculating the loss (mean-squared error) between the reconstructed input and the original input. The autoencoder weights are updated using the Adam optimizer.

Parameters:

  • model (torch.nn.Module): Autoencoder model to train (imported from sequitur.models)
  • train_set (list): List of sequences (each a torch.Tensor) to train the model on; has shape [num_examples, seq_len, *num_features]
  • encoding_dim (int): Desired size of the vector encoding
  • verbose (bool, optional (default=False)): Whether or not to print the loss at each epoch
  • lr (float, optional (default=1e-3)): Learning rate
  • epochs (int, optional (default=50)): Number of epochs to train for
  • denoise (bool, optional (default=False)): Whether or not to train the model as a denoising autoencoder, i.e. with noise injected into each input sequence during training
  • **kwargs: Parameters to pass into model when it's instantiated

Returns:

  • encoder (torch.nn.Module): Trained encoder model; takes a sequence (as a tensor) as input and returns an encoding of the sequence as a tensor of shape [encoding_dim]
  • decoder (torch.nn.Module): Trained decoder model; takes an encoding (as a tensor) and returns a decoded sequence
  • encodings (list): List of tensors corresponding to the final vector encodings of each sequence in the training set
  • losses (list): List of average MSE values at each epoch
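For instance, a short sketch of inspecting these return values (shapes follow the descriptions above):

import torch
from sequitur.models import LSTM_AE
from sequitur import quick_train

train_set = [torch.randn(10, 3) for _ in range(100)]
encoder, decoder, encodings, losses = quick_train(
  LSTM_AE, train_set, encoding_dim=7, verbose=True, epochs=50
)

print(encodings[0].shape)  # => torch.Size([7]), one encoding per training sequence
print(losses[-1])          # Average MSE at the final epoch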

Models

Every autoencoder inherits from torch.nn.Module and has an encoder attribute and a decoder attribute, both of which also inherit from torch.nn.Module.

Sequences of Numbers

LINEAR_AE(input_dim, encoding_dim, h_dims=[], h_activ=torch.nn.Sigmoid(), out_activ=torch.nn.Tanh())

Consists of fully-connected layers stacked on top of each other. Can only be used if you're dealing with sequences of numbers, not vectors or matrices.

Parameters:

  • input_dim (int): Size of each input sequence
  • encoding_dim (int): Size of the vector encoding
  • h_dims (list, optional (default=[])): List of hidden layer sizes for the encoder
  • h_activ (torch.nn.Module or None, optional (default=torch.nn.Sigmoid())): Activation function to use for hidden layers; if None, no activation function is used
  • out_activ (torch.nn.Module or None, optional (default=torch.nn.Tanh())): Activation function to use for the output layer in the encoder; if None, no activation function is used

Example:

For example, to create an autoencoder that compresses sequences of 10 numbers into vectors of size 4, with hidden layers of sizes 8 and 6, use the following arguments:

import torch
from sequitur.models import LINEAR_AE

model = LINEAR_AE(
  input_dim=10,
  encoding_dim=4,
  h_dims=[8, 6],
  h_activ=None,
  out_activ=None
)

x = torch.randn(10) # Sequence of 10 numbers
z = model.encoder(x) # z.shape = [4]
x_prime = model.decoder(z) # x_prime.shape = [10]

Sequences of 1D Vectors

LSTM_AE(input_dim, encoding_dim, h_dims=[], h_activ=torch.nn.Sigmoid(), out_activ=torch.nn.Tanh())

Autoencoder for sequences of vectors, built from stacked LSTMs. It can be trained on sequences of varying length (see the sketch after the example below).

Parameters:

  • input_dim (int): Size of each sequence element (vector)
  • encoding_dim (int): Size of the vector encoding
  • h_dims (list, optional (default=[])): List of hidden layer sizes for the encoder
  • h_activ (torch.nn.Module or None, optional (default=torch.nn.Sigmoid())): Activation function to use for hidden layers; if None, no activation function is used
  • out_activ (torch.nn.Module or None, optional (default=torch.nn.Tanh())): Activation function to use for the output layer in the encoder; if None, no activation function is used

Example:

For example, to create an autoencoder that encodes sequences of 3-dimensional vectors into vectors of size 7, with a hidden LSTM layer of size 64, use the following arguments:

import torch
from sequitur.models import LSTM_AE

model = LSTM_AE(
  input_dim=3,
  encoding_dim=7,
  h_dims=[64],
  h_activ=None,
  out_activ=None
)

x = torch.randn(10, 3) # Sequence of 10 3D vectors
z = model.encoder(x) # z.shape = [7]
x_prime = model.decoder(z, seq_len=10) # x_prime.shape = [10, 3]
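Because LSTM_AE handles varying sequence lengths, a training set whose sequences differ in length should also work; here is a minimal sketch, assuming quick_train accepts such a set (which the claim above implies):

import random
import torch
from sequitur.models import LSTM_AE
from sequitur import quick_train

# Sequences of 3-dimensional vectors with lengths between 5 and 15
train_set = [torch.randn(random.randint(5, 15), 3) for _ in range(100)]
encoder, decoder, _, _ = quick_train(LSTM_AE, train_set, encoding_dim=7)

z = encoder(torch.randn(12, 3))  # Encodings have shape [7] regardless of length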

Sequences of 2D/3D Matrices

CONV_LSTM_AE(input_dims, encoding_dim, kernel, stride=1, h_conv_channels=[1], h_lstm_channels=[])

Autoencoder for sequences of 2D or 3D matrices/images, loosely based on the CNN-LSTM architecture described in Beyond Short Snippets: Deep Networks for Video Classification. Uses a CNN to create vector encodings of each image in an input sequence, and then an LSTM to create encodings of the sequence of vectors.

Parameters:

  • input_dims (tuple): Shape of each 2D or 3D image in the input sequences
  • encoding_dim (int): Size of the vector encoding
  • kernel (int or tuple): Size of the convolving kernel; use tuple to specify a different size for each dimension
  • stride (int or tuple, optional (default=1)): Stride of the convolution; use tuple to specify a different stride for each dimension
  • h_conv_channels (list, optional (default=[1])): List of hidden channel sizes for the convolutional layers
  • h_lstm_channels (list, optional (default=[])): List of hidden channel sizes for the LSTM layers

Example:

import torch
from sequitur.models import CONV_LSTM_AE

model = CONV_LSTM_AE(
  input_dims=(50, 100),
  encoding_dim=16,
  kernel=(5, 8),
  stride=(3, 5),
  h_conv_channels=[4, 8],
  h_lstm_channels=[32, 64]
)

x = torch.randn(22, 50, 100) # Sequence of 22 50x100 images
z = model.encoder(x) # z.shape = [16]
x_prime = model.decoder(z, seq_len=22) # x_prime.shape = [22, 50, 100]
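For sequences of 3D matrices (e.g. fMRI scans), input_dims takes a 3-tuple instead; here is a minimal sketch, with illustrative dimensions rather than a recommended configuration:

import torch
from sequitur.models import CONV_LSTM_AE

model = CONV_LSTM_AE(
  input_dims=(20, 20, 20),
  encoding_dim=16,
  kernel=3
)

x = torch.randn(10, 20, 20, 20) # Sequence of 10 20x20x20 volumes
z = model.encoder(x) # z.shape = [16]
x_prime = model.decoder(z, seq_len=10) # x_prime.shape = [10, 20, 20, 20]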