A Python package for time series augmentation

Overview

tsaug

tsaug is a Python package for time series augmentation. It offers a set of augmentation methods for time series, as well as a simple API to connect multiple augmenters into a pipeline.

See https://tsaug.readthedocs.io for complete documentation.

Installation

Prerequisites: Python 3.5 or later.

It is recommended to install the most recent stable release of tsaug from PyPI.

pip install tsaug

Alternatively, you can install from the source code. This will give you the latest, but unstable, version of tsaug.

git clone https://github.com/arundo/tsaug.git
cd tsaug/
git checkout develop
pip install ./

Examples

A first-time user may start with two examples:

Examples of every individual augmenter can be found here

For full references of implemented augmentation methods, please refer to References.
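
As a minimal illustration, a pipeline can be sketched roughly as follows. This is only a sketch: it assumes the TimeWarp, Quantize, Drift, and Reverse augmenters and the augment method shown in the issue reports below, and the input array here is synthetic.

>>> import numpy as np
>>> from tsaug import TimeWarp, Quantize, Drift, Reverse
>>> # one synthetic series of length 500, shaped (N, T) = (1, 500)
>>> X = np.sin(np.linspace(0, 10 * np.pi, 500)).reshape(1, -1)
>>> my_augmenter = (
...     TimeWarp() * 3           # three random time-warped copies per input series
...     + Quantize(n_levels=20)  # quantize values to a 20-level set
...     + Drift(max_drift=0.2)   # random drift of the signal
...     + Reverse() @ 0.5        # reverse the sequence with 50% probability
... )
>>> X_aug = my_augmenter.augment(X)  # 3 augmented series of length 500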

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

Please see Contributing for more details.

License

tsaug is licensed under the Apache License 2.0. See the LICENSE file for details.

Comments
  • How to cite this repo?

    Basically the title. I used this awesome repo and I would like to cite it in my paper. How do I do it? If you could provide a BibTeX entry, that would be great.

    question 
    opened by kowshikthopalli 2
  • Default _Augmentor arguments will raise an error

    While working on #1 I found that the default args for initializing an _Augmentor object could lead to the code trying to call None when expecting a function.

    See: https://github.com/arundo/tsaug/blob/ebf1955664991fe51f038a5cc8506f1bfc849d91/src/tsaug/augmentor.py#L5 https://github.com/arundo/tsaug/blob/ebf1955664991fe51f038a5cc8506f1bfc849d91/src/tsaug/augmentor.py#L6

    and

    https://github.com/arundo/tsaug/blob/ebf1955664991fe51f038a5cc8506f1bfc849d91/src/tsaug/augmentor.py#L47

    I know that it's not intended to be initialized without an augmenter function, but I was wondering if you want to explicitly prevent an error here.

    Or is something else supposed to be happening?
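
    For illustration, a fail-fast check in the constructor would avoid the confusing failure. This is only a sketch with hypothetical names (augmenter_func may not be what augmentor.py actually calls it):

    class _Augmentor:
        def __init__(self, augmenter_func=None, is_random=False, **params):
            # hypothetical guard: raise immediately instead of calling None later
            if augmenter_func is None:
                raise ValueError("An augmenter function must be provided; got None.")
            self._augmenter_func = augmenter_func
            self._is_random = is_random
            self._params = params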

    bug 
    opened by roycoding 1
  • can't find the deepad python package

    In the quickstart notebook https://github.com/arundo/tsaug/blob/master/docs/quickstart.ipynb, the line from deepad.visualization import plot is used. Where can you find the deepad package to install?

    opened by xsqian 1
  • Missing function calls in documentation

    Hi!

    I noticed that the documentation is actually missing a few important notes.

    For instance, the first example contains this snippet:

    >>> import numpy as np
    >>> X = np.load("./X.npy")
    >>> Y = np.load("./Y.npy")
    >>> from tsaug.visualization import plot
    >>> plot(X, Y)
    

    and shows a chart, which suggests that it is rendered immediately after calling the plot function.

    In the configurations I've seen and worked on, the plot function does not render any chart immediately. Instead, it returns Tuple[matplotlib.figure.Figure, matplotlib.axes._axes.Axes]. This means that we need to take the first element of the returned tuple and call .show() on it, so this example should rather be:

    >>> import numpy as np
    >>> X = np.load("./X.npy")
    >>> Y = np.load("./Y.npy")
    >>> from tsaug.visualization import plot
    >>> figure, _ = plot(X, Y)
    >>> figure.show()
    

    I can create a pull request with these corrections if you're open to contributions.

    opened by 15bubbles 0
  • Static random augmentation across multiple time series

    Hello,

    I have a use case where I apply temporal augmentation with the same random anchor across multiple time series within a segmented object. I.e., I want certain augmentations to vary across objects, but remain constant within objects.

    In TimeWarp, e.g., I've added an optional keyword argument (static_rand):

        def __init__(
            self,
            n_speed_change: int = 3,
            max_speed_ratio: Union[float, Tuple[float, float], List[float]] = 3.0,
            repeats: int = 1,
            prob: float = 1.0,
            seed: Optional[int] = _default_seed,
            static_rand: Optional[bool] = False,
        ):
    

    which is used by:

            if self.static_rand:
                anchor_values = rand.uniform(
                    low=0.0, high=1.0, size=self.n_speed_change + 1
                )
                anchor_values = np.tile(anchor_values, (N, 1))
            else:
                anchor_values = rand.uniform(
                    low=0.0, high=1.0, size=(N, self.n_speed_change + 1)
                )
    

    Thus, instead of having N time series with different random anchor_values, I generate N time series with the same anchor value.
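
    For context, usage with the proposed keyword would look like this (static_rand is the addition described above, not part of released tsaug):

    >>> from tsaug import TimeWarp
    >>> # all N series in X share the same random warp anchors
    >>> aug = TimeWarp(n_speed_change=3, static_rand=True)
    >>> X_aug = aug.augment(X)  # X: (N, T) or (N, T, C)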

    I use this approach with TimeWarp and Drift. Would this be of any interest as a PR, or does it sound too specific?

    Thanks for the nice library.

    opened by jgrss 0
  • _Augmenter should be exposed properly as tsaug.Augmenter

    Might be related to https://github.com/arundo/tsaug/issues/1

    In the current state of the package, the _Augmenter class is an internal class that should not be used outside of the package itself... but it's also the base class for all usable classes from tsaug. This makes it very weird to type "generic" functions outside of tsaug, e.g.

    # this should not appear in a normal Python code
    from tsaug._augmenters.base import _Augmenter
    
    def apply_transformation(aug: _Augmenter):
        ...
    

    The _Augmenter class should be exposed as tsaug.Augmenter so that it can be used for proper typing outside of the tsaug package.
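
    With such an alias exposed (e.g. a public re-export like Augmenter = _Augmenter in tsaug/__init__.py, which is a proposal, not the current API), the typing above would not need to touch internals:

    from tsaug import Augmenter

    def apply_transformation(aug: Augmenter):
        ...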

    help wanted 
    opened by Holt59 0
  • Equivalence in transformation names

    Hello

    I'm very interested in using the tsaug library in my personal project.

    I have read the paper "Data Augmentation of Wearable Sensor Data for Parkinson’s Disease Monitoring using Convolutional Neural Networks" and I'm quite confused about the names of the transformations.

    What are the equivalents in the tsaug library for the transformations jittering, scaling, rotation, permutation, and MagWarp mentioned in this paper?

    Also, I have read the blog post "https://www.arundo.com/arundo_tech_blog/tsaug-an-open-source-python-package-for-time-series-augmentation", and I didn't find the equivalents for RandomMagnify, RandomJitter, etc.

    Could you help me with these doubts?

    Best regards

    Oscar

    question 
    opened by ogreyesp 1
  • ValueError: The numbers of series in X and Y are different.

    The shape of X is (54, 337) and the shape of y is (54,), but I am getting this error. I am using the following code:

    from tsaug import TimeWarp, Crop, Quantize, Drift, Reverse
    my_augmenter = (
        TimeWarp() * 5  # random time warping 5 times in parallel
        + Crop(size=300)  # random crop subsequences with length 300
        + Quantize(n_levels=[10, 20, 30])  # random quantize to 10-, 20-, or 30- level sets
        + Drift(max_drift=(0.1, 0.5)) @ 0.8  # with 80% probability, random drift the signal up to 10% - 50%
        + Reverse() @ 0.5  # with 50% probability, reverse the sequence
    )
    data, labels = my_augmenter.augment(data, labels)
    
    question 
    opened by talhaanwarch 3
  • How to augment multi_variate time series data?

    I noticed that when augmenting multi-variate time series data, the augmented data is concatenated along axis 0 instead of being added to a new (third) axis. Suppose the data shape is (18, 1000); after augmentation it turns out to be (72, 1000), but I believe it should be (4, 18, 1000). Does simply reshaping with data.reshape(4, 18, 1000) resolve the problem or not?

    question 
    opened by talhaanwarch 2