Code for the paper "Next Generation Reservoir Computing"

Overview

Next Generation Reservoir Computing

This is the code for the results and figures in our paper "Next Generation Reservoir Computing". The scripts are written in Python and require recent versions of NumPy, SciPy, and matplotlib. If you are using a Python distribution like Anaconda, these are likely already installed.

Python Virtual Environment

If you are not using Anaconda, or want to run this code on the command line in vanilla Python, you can create a virtual environment with the required dependencies by running:

python3 -m venv env
./env/bin/pip install -r requirements.txt

This will install the most recent versions of the required packages available to you. If you wish to use the exact versions we used, install from requirements-exact.txt instead.
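
For example, to install the pinned versions into the same virtual environment:

./env/bin/pip install -r requirements-exact.txt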

You can then run the individual scripts, for example:

./env/bin/python DoubleScrollNVAR-RK23.py

Comments
  • Generalized Performance

    I modified the code given in this repo into what I think is a more generalized version (below), where the input is an array containing points generated by any sort of process. It gives a perfect result on predicting sine functions, but on a constant linear trend it gives absolutely terrible, nonsense performance. By my understanding, that is simply the nature of reservoir computing: it can't handle a trend. Is that correct?

    I would also appreciate any other insight you might have on the generalization of this function (a differencing workaround is sketched after the code below). Thanks!

    import numpy as np
    import pandas as pd
    
    
    def load_linear(long=False, shape=None, start_date: str = "2021-01-01"):
        """Create a dataset of just zeroes for testing edge case."""
        if shape is None:
            shape = (500, 5)
        df_wide = pd.DataFrame(
            np.ones(shape), index=pd.date_range(start_date, periods=shape[0], freq="D")
        )
        df_wide = (df_wide * list(range(0, shape[1]))).cumsum()
        if not long:
            return df_wide
        else:
            df_wide.index.name = "datetime"
            df_long = df_wide.reset_index(drop=False).melt(
                id_vars=['datetime'], var_name='series_id', value_name='value'
            )
            return df_long
    
    
    def load_sine(long=False, shape=None, start_date: str = "2021-01-01"):
        """Create a dataset of just zeroes for testing edge case."""
        if shape is None:
            shape = (500, 5)
        df_wide = pd.DataFrame(
            np.ones(shape),
            index=pd.date_range(start_date, periods=shape[0], freq="D"),
            columns=range(shape[1])
        )
        X = pd.to_numeric(df_wide.index, errors='coerce', downcast='integer').values
    
        def sin_func(a, X):
            return a * np.sin(1 * X) + a
        for column in df_wide.columns:
            df_wide[column] = sin_func(column, X)
        if not long:
            return df_wide
        else:
            df_wide.index.name = "datetime"
            df_long = df_wide.reset_index(drop=False).melt(
                id_vars=['datetime'], var_name='series_id', value_name='value'
            )
            return df_long
    
    
    def predict_reservoir(df, forecast_length, warmup_pts, k=2, ridge_param=2.5e-6):
        # k: number of time delay taps
        # pass in traintime_pts to limit as .tail() for huge datasets?
    
        n_pts = df.shape[1]
        # handle short-data edge case; keep warmup_pts >= 1 so the
        # t - 1 training slices below remain valid
        min_train_pts = 10
        max_warmup_pts = n_pts - min_train_pts
        if warmup_pts >= max_warmup_pts:
            warmup_pts = max_warmup_pts if max_warmup_pts > 0 else 1
    
        traintime_pts = n_pts - warmup_pts   # round(traintime / dt)
        warmtrain_pts = warmup_pts + traintime_pts
        testtime_pts = forecast_length + 1  # round(testtime / dt)
        maxtime_pts = n_pts  # round(maxtime / dt)
    
        # input dimension
        d = df.shape[0]
        # size of the linear part of the feature vector
        dlin = k * d
        # size of nonlinear part of feature vector
        dnonlin = int(dlin * (dlin + 1) / 2)
        # total size of feature vector: constant + linear + nonlinear
        dtot = 1 + dlin + dnonlin
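        # e.g. with d = 3 input series and k = 2 delay taps:
        # dlin = 6, dnonlin = 6 * 7 / 2 = 21, dtot = 1 + 6 + 21 = 28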
    
        # create an array to hold the linear part of the feature vector
        x = np.zeros((dlin, maxtime_pts))
    
        # fill in the linear part of the feature vector for all times
        for delay in range(k):
            for j in range(delay, maxtime_pts):
                x[d * delay : d * (delay + 1), j] = df[:, j - delay]
    
        # create an array to hold the full feature vector for training time
        # (use ones so the constant term is already 1)
        out_train = np.ones((dtot, traintime_pts))
    
        # copy over the linear part (shift over by one to account for constant)
        out_train[1 : dlin + 1, :] = x[:, warmup_pts - 1 : warmtrain_pts - 1]
    
        # fill in the non-linear part
        cnt = 0
        for row in range(dlin):
            for column in range(row, dlin):
                # shift by one for constant
                out_train[dlin + 1 + cnt] = (
                    x[row, warmup_pts - 1 : warmtrain_pts - 1]
                    * x[column, warmup_pts - 1 : warmtrain_pts - 1]
                )
                cnt += 1
    
        # ridge regression: train W_out to map out_train to the one-step
        # difference df[:, t] - df[:, t - 1]
        W_out = (
            (x[0:d, warmup_pts:warmtrain_pts] - x[0:d, warmup_pts - 1 : warmtrain_pts - 1])
            @ out_train[:, :].T
            @ np.linalg.pinv(
                out_train[:, :] @ out_train[:, :].T + ridge_param * np.identity(dtot)
            )
        )
    
        # create a place to store feature vectors for prediction
        out_test = np.ones(dtot)  # full feature vector
        x_test = np.zeros((dlin, testtime_pts))  # linear part
    
        # copy over initial linear feature vector
        x_test[:, 0] = x[:, warmtrain_pts - 1]
    
        # do prediction
        for j in range(testtime_pts - 1):
            # copy linear part into whole feature vector
            out_test[1 : dlin + 1] = x_test[:, j]  # shift by one for constant
            # fill in the non-linear part
            cnt = 0
            for row in range(dlin):
                for column in range(row, dlin):
                    # shift by one for constant
                    out_test[dlin + 1 + cnt] = x_test[row, j] * x_test[column, j]
                    cnt += 1
            # fill in the delay taps of the next state
            x_test[d:dlin, j + 1] = x_test[0 : (dlin - d), j]
            # do a prediction
            x_test[0:d, j + 1] = x_test[0:d, j] + W_out @ out_test[:]
        return x_test[0:d, 1:]
    
    
    # note: data is transposed to (series, time), the opposite of my usual shape
    data_pts = 7000
    series = 3
    forecast_length = 10
    df_sine = load_sine(long=False, shape=(data_pts, series)).transpose().to_numpy()
    df_sine_train = df_sine[:, :-10]
    df_sine_test = df_sine[:, -10:]
    prediction_sine = predict_reservoir(df_sine_train, forecast_length=forecast_length, warmup_pts=150, k=2, ridge_param=2.5e-6)
    print(f"sine MAE {np.mean(np.abs(df_sine_test - prediction_sine))}")
    
    df_linear = load_linear(long=False, shape=(data_pts, series)).transpose().to_numpy()
    df_linear_train = df_linear[:, :-10]
    df_linear_test = df_linear[:, -10:]
    prediction_linear = predict_reservoir(df_linear_train, forecast_length=forecast_length, warmup_pts=150, k=2, ridge_param=2.5e-6)
    print(f"linear MAE {np.mean(np.abs(df_linear_test - prediction_linear))}")
    
    
    opened by winedarksea 2
  • Link to your paper

    I'm documenting here the link to your paper. I couldn't find it in the README:


    Gauthier, D. J., Bollt, E., Griffith, A. & Barbosa, W. A. S. Next generation reservoir computing. Nature Communications 12, 5564 (2021). https://www.nature.com/articles/s41467-021-25801-2

    opened by impredicative 1
Releases (v1.0)

Owner

OSU QuantInfo Lab
Daniel Gauthier's Research Group