The Turing Change Point Detection Benchmark: An Extensive Benchmark Evaluation of Change Point Detection Algorithms on Real-World Data

Overview

Welcome to the repository for the Turing Change Point Detection Benchmark, a benchmark evaluation of change point detection algorithms developed at The Alan Turing Institute. This benchmark uses the time series from the Turing Change Point Dataset (TCPD).

Useful links:

  • Turing Change Point Dataset (TCPD): https://github.com/alan-turing-institute/TCPD
  • Paper: An Evaluation of Change Point Detection Algorithms, https://arxiv.org/abs/2003.06222

If you encounter a problem when using this repository or simply want to ask a question, please don't hesitate to open an issue on GitHub or send an email to gertjanvandenburg at gmail dot com.

Introduction

Change point detection focuses on accurately detecting moments of abrupt change in the behavior of a time series. While many methods for change point detection exist, past research has paid little attention to the evaluation of existing algorithms on real-world data. This work introduces a benchmark study and a dataset (TCPD) that are explicitly designed for the evaluation of change point detection algorithms. We hope that our work becomes a proving ground for the comparison and development of change point detection algorithms that work well in practice.

This repository contains the code necessary to evaluate and analyze a significant number of change point detection algorithms on the TCPD, and serves to reproduce the work in Van den Burg and Williams (2020). Note that work based on either the dataset or this benchmark should cite that paper:

@article{vandenburg2020evaluation,
        title={An Evaluation of Change Point Detection Algorithms},
        author={{Van den Burg}, G. J. J. and Williams, C. K. I.},
        journal={arXiv preprint arXiv:2003.06222},
        year={2020}
}

For the experiments we use the abed command line program, which makes it easy to organize and run them. All experiments are defined through the abed_conf.py file; in particular, the hyperparameters and the command line arguments for all methods are specified there. The methods themselves are implemented as command line scripts in the execs directory. The raw results of the experiments are collected in JSON files and placed in the abed_results directory, organized by dataset and method. Finally, we use Make to coordinate the analysis scripts: we first generate summary files using summarize.py and then use these to produce all the tables and figures in the paper.
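
As a quick illustration of the result layout, the sketch below walks the raw result files and prints their status; the directory structure and field names are assumptions based on the description above, so adapt the snippet to the actual files if they differ.

# Minimal sketch: inspect the raw JSON result files produced by abed.
# The layout (abed_results/<dataset>/<method>/...) and the field names
# ("status", "result", "runtime") are assumptions; adapt as needed.
import json
from pathlib import Path

for path in sorted(Path("abed_results").glob("**/*.json")):
    with open(path) as fp:
        result = json.load(fp)
    runtime = (result.get("result") or {}).get("runtime")
    print(path, result.get("status"), runtime)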

Getting Started

This repository contains all the code to generate the results (tables/figures/constants) from the paper, as well as to reproduce the experiments entirely. You can either install the dependencies directly on your machine or use the provided Dockerfile (see below). If you don't use Docker, first clone this repository using:

$ git clone --recurse-submodules https://github.com/alan-turing-institute/TCPDBench

Generating Tables/Figures

Generating the tables and figures from the paper is done through the scripts in analysis/scripts and can be run through the provided Makefile.

First make sure you have all requirements:

$ pip install -r ./analysis/requirements.txt

and then use make:

$ make results

The results will be placed in ./analysis/output. Note that generating the figures requires working LaTeX and latexmk installations.

Reproducing the experiments

To fully reproduce the experiments, some additional steps are needed. Note that the Docker procedure outlined below automates this process somewhat.

First, obtain the Turing Change Point Dataset and follow the instructions provided there. Copy the dataset files to a datasets directory in this repository.

To run all the tasks we use the abed command line tool. This allows us to define the experiments in a single configuration file (abed_conf.py) and makes it easy to keep track of which tasks still need to be run.

Note that this repository contains all the result files, so it is not necessary to redo all the experiments. If you still wish to do so, the instructions are as follows:

  1. Move the current result directory out of the way:

    $ mv abed_results old_abed_results
    
  2. Install abed. This requires an existing installation of openmpi, but otherwise should be a matter of running:

    $ pip install abed
    
  3. Tell abed to rediscover all the tasks that need to be done:

    $ abed reload_tasks
    

    This will populate the abed_tasks.txt file and will automatically commit the updated file to the Git repository. You can show the number of tasks that need to be completed through:

    $ abed status
    
  4. Initialize the virtual environments for Python and R, which installs all required dependencies:

    $ make venvs
    

    Note that this will also create an R virtual environment (using RSimpleVenv), which ensures that the exact versions of the packages used in the experiments will be installed. This step can take a while, but it is important to ensure reproducibility.

  5. Run abed through mpiexec, as follows:

    $ mpiexec -np 4 abed local
    

    This will run abed using 4 cores, which can of course be increased or decreased if desired. Note that a minimum of two cores is needed for abed to operate. You may want to run these experiments in parallel on a large number of cores, as the expected runtime is on the order of 21 days on a single core. Once this command starts running the experiments you will see result files appear in the staging directory.

Running the experiments with Docker

If you prefer to use Docker to manage the environment and dependencies, you can do so easily with the provided Dockerfile. You can build the Docker image using:

$ docker build -t alan-turing-institute/tcpdbench github.com/alan-turing-institute/TCPDBench

To ensure that the results created in the docker container persist to the host, we need to create a volume first (see the Docker documentation on local volume drivers):

$ mkdir /path/to/tcpdbench/results     # *absolute* path where you want the results
$ docker volume create --driver local \
                       --opt type=none \
                       --opt device=/path/to/tcpdbench/results \
                       --opt o=bind tcpdbench_vol

You can then follow the same procedure as described above to reproduce the experiments, but using the relevant docker commands to run them in the container:

  • For reproducing just the tables and figures, use:

    $ docker run -i -t -v tcpdbench_vol:/TCPDBench alan-turing-institute/tcpdbench /bin/bash -c "make results"
    
  • For reproducing all the experiments, use:

    $ docker run -i -t -v tcpdbench_vol:/TCPDBench alan-turing-institute/tcpdbench /bin/bash -c "mv abed_results old_abed_results && mkdir abed_results && abed reload_tasks && abed status && make venvs && mpiexec --allow-run-as-root -np 4 abed local && make results"
    

    where -np 4 sets the number of cores used for the experiments to four. This can be changed as desired to increase efficiency.

Extending the Benchmark

It should be relatively straightforward to extend the benchmark with your own methods and datasets. Remember to cite our paper if you do end up using this work.

Adding a new method

To add a new method to the benchmark, you'll need to write a script in the execs folder that takes a dataset file as input and computes the change point locations. Currently the methods are organized by language (R and Python), but you don't necessarily need to follow this structure when adding a new method. Please do check the existing code for inspiration, as adding a new method is easiest when following the same structure.

Experiments are managed using the abed command line application. This facilitates running all the methods with all their hyperparameter settings on all datasets.

Note that the methods currently write their output (the result file) to stdout, so if you need to print anything else from your script, write it to stderr.
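
For example, a Python wrapper could send diagnostic messages to stderr like this (the message itself is purely illustrative):

import sys

# diagnostics go to stderr so they do not corrupt the JSON result on stdout
print("starting change point detection", file=sys.stderr)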

Python

When adding a method in Python, you can start with the cpdbench_zero.py file as a template, as this contains most of the boilerplate code. A script should take command line arguments where -i/--input marks the path to a dataset file and optionally can take further command line arguments for hyperparameter settings. Specifying these items from the command line facilitates reproducibility.
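
A minimal argument parser for such a script could look as follows; the --lamb hyperparameter is purely illustrative, and the existing wrappers should be treated as the reference for the exact conventions.

# Sketch of a command line parser for a CPDBench wrapper script. The
# -i/--input flag is required; hyperparameters (here the illustrative
# --lamb) are added as further optional arguments.
import argparse

def parse_args():
    parser = argparse.ArgumentParser(description="Wrapper for a change point detection method")
    parser.add_argument("-i", "--input", required=True, help="path to the dataset JSON file")
    parser.add_argument("--lamb", type=float, default=None, help="penalty parameter (illustrative)")
    return parser.parse_args()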

Roughly, the main function of a Python method could look like this:

# Adding a new Python method to CPDBench

# parse_args, load_dataset, make_param_dict, exit_with_error, and exit_success
# are part of the shared boilerplate (see cpdbench_zero.py for a full example)
import time


def main():
    args = parse_args()

    # data is the raw dataset dictionary, mat is a T x d matrix of observations
    data, mat = load_dataset(args.input)

    # set algorithm parameters that are not varied in the grid search
    defaults = {
        'param_1': value_1,
        'param_2': value_2,
    }

    # combine command line arguments with defaults
    parameters = make_param_dict(args, defaults)

    # start the timer
    start_time = time.time()
    error = None
    status = 'fail'  # if not overwritten below, the run failed

    # run the algorithm in a try/except
    try:
        locations = your_custom_method(mat, parameters)
        status = 'success'
    except Exception as err:
        error = repr(err)

    stop_time = time.time()
    runtime = stop_time - start_time

    # exit with an error if the run failed
    if status == 'fail':
        exit_with_error(data, args, parameters, error, __file__)

    # make sure the locations are 0-based integers!

    exit_success(data, args, parameters, locations, runtime, __file__)

Remember to add the following to the bottom of the script so it can be run from the command line:

if __name__ == '__main__':
  main()

If you need to add a timeout to your method, take a look at the BOCPDMS example.
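
As a generic illustration only (not necessarily the mechanism used in the BOCPDMS wrapper), a Unix-only timeout can be enforced with signal.alarm:

# Generic Unix-only timeout sketch using signal.alarm; see the BOCPDMS
# wrapper for the approach actually used in this repository.
import signal

def run_with_timeout(func, args, seconds):
    def handler(signum, frame):
        raise TimeoutError("method exceeded the time limit")

    old_handler = signal.signal(signal.SIGALRM, handler)
    signal.alarm(seconds)   # deliver SIGALRM after `seconds` seconds
    try:
        return func(*args)
    finally:
        signal.alarm(0)     # cancel any pending alarm
        signal.signal(signal.SIGALRM, old_handler)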

R

Adding a method implemented in R to the benchmark can be done similarly to how it is done for Python. Again, the input file path and the hyperparameters are specified by command line arguments, which are parsed using argparse. For R scripts we use a number of utility functions in the utils.R file. To reliably load this file you can use the load.utils() function available in all R scripts.

The main function of a method implemented in R could be roughly as follows:

main <- function()
{
  args <- parse.args()

  # load the data
  data <- load.dataset(args$input)

  # create list of default algorithm parameters
  defaults <- list(param_1=value_1, param_2=value_2)

  # combine defaults and command line arguments
  params <- make.param.list(args, defaults)

  # Start the timer
  start.time <- Sys.time()

  # call the detection function in a tryCatch
  result <- tryCatch({
    locs <- your.custom.method(data$mat, params)
    list(locations=locs, error=NULL)
  }, error=function(e) {
    return(list(locations=NULL, error=e$message))
  })

  stop.time <- Sys.time()

  # Compute runtime, note units='secs' is not optional!
  runtime <- difftime(stop.time, start.time, units='secs')

  if (!is.null(result$error))
    exit.with.error(data$original, args, params, result$error)

  # convert result$locations to 0-based if needed

  exit.success(data$original, args, params, result$locations, runtime)
}

Remember to add the following to the bottom of the script so it can be run from the command line:

load.utils()
main()

Adding the method to the experimental configuration

When you've written the command line script to run your method and verified that it works correctly, it's time to add it to the experiment configuration. For this, we'll have to edit the abed_conf.py file; a sketch of what the new entries might look like is given after the steps below.

  1. To add your method, locate the METHODS list in the configuration file and add the entries best_<yourmethod> and default_<yourmethod>, replacing <yourmethod> with the name of your method (without spaces or underscores).
  2. Next, add the method to the PARAMS dictionary. This is where you specify all the hyperparameters that your method takes (for the best experiment). The hyperparameters are specified with a name and a list of values to explore (see the current configuration for examples). For the default experiment, add an entry "default_<yourmethod>" : {"no_param": [0]}. This ensures it will be run without any parameters.
  3. Finally, add the command that needs to be executed to run your method to the COMMANDS dictionary. You'll need an entry for best_<yourmethod> and for default_<yourmethod>. Please use the existing entries as examples. Methods implemented in R are run with Rscript. The {execdir}, {datadir}, and {dataset} values will be filled in by abed based on the other settings. Use curly braces to specify hyperparameters, matching the names of the fields in the PARAMS dictionary.
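
As a rough sketch, the entries for a hypothetical method called "mymethod" with a single hyperparameter lamb might look as follows; copy the exact command format (interpreter, virtual environment, file naming) from the existing entries in abed_conf.py.

# Illustrative abed_conf.py entries for a hypothetical method "mymethod".

METHODS = [
    # ... existing methods ...
    "best_mymethod",
    "default_mymethod",
]

PARAMS = {
    # ... existing methods ...
    "best_mymethod": {"lamb": [0.1, 1.0, 10.0]},   # grid for the "best" experiment
    "default_mymethod": {"no_param": [0]},         # run once without parameters
}

COMMANDS = {
    # ... existing methods ...
    "best_mymethod": "{execdir}/python/cpdbench_mymethod.py -i {datadir}/{dataset}.json --lamb {lamb}",
    "default_mymethod": "{execdir}/python/cpdbench_mymethod.py -i {datadir}/{dataset}.json",
}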

Dependencies

If your method needs external R or Python packages to operate, you can add them to the respective dependency lists.

  • For R, simply add the package name to the Rpackages.txt file. Next, run make clean_R_venv and make R_venv to add the package to the R virtual environment. It is recommended to pin a specific package version in the Rpackages.txt file, for future reference and reproducibility.
  • For Python, individual methods use individual virtual environments, as can be seen from the bocpdms and rbocpdms examples. These virtual environments need to be activated in the COMMANDS section of the abed_conf.py file. Setting up these environments is done through the Makefile. Simply add a requirements.txt file in your package similarly to what is done for bocpdms and rbocpdms, copy and edit the corresponding lines in the Makefile, and run make venv_<yourmethod> to build the virtual environment.

Running experiments

When you've added the method and set up the environment, run

$ abed reload_tasks

to have abed generate the new tasks for your method (see above under Getting Started). Note that abed automatically does a Git commit when you do this, so you may want to switch to a separate branch. You can see the tasks that abed has generated (and thus the command that will be executed) using the command:

$ abed explain_tbd_tasks

If you're satisfied with the commands, you can run the experiments using:

$ mpiexec -np 4 abed local

You can subsequently use the Makefile to generate updated figures and tables with your method or dataset.

Adding a new dataset

To add a new dataset to the benchmark you'll need both a dataset file (in JSON format) and annotations (for evaluation). More information on how the datasets are constructed can be found in the TCPD repository, which also includes a schema file. A high-level overview is as follows, with a minimal sketch after the list:

  • Each dataset has a short name in the name field and a longer more descriptive name in the longname field. The name field must be unique.
  • The number of observations and dimensions is defined in the n_obs and n_dim fields.
  • The time axis is defined in the time field. This has at least an index field to mark the indices of each data point. At the moment, these indices need to be consecutive integers. This entry mainly exists for a future scenario in which we may want to support non-consecutive timesteps. If the time axis can be mapped to a date or time, then a type and format for this field can be specified (see e.g. the nile dataset, which has year labels).
  • The actual observations are specified in the series field. This is an ordered list of JSON objects, one for each dimension. Every dimension has a label, a data type, and a "raw" field with the actual observations. Missing values in the time series can be marked with null (see e.g. uk_coal_employ for an example).
  • The wrapper around Prophet uses the formatted time (for instance YYYY-MM-DD) where available, since Prophet can use this to determine seasonality components. Thus it is recommended to add formatted timesteps to the raw field in the time object if possible (see, e.g., the brent_spot dataset). If this is not available, the time series name should be added to the NO.DATETIME variable in the Prophet wrapper script.
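
As a minimal sketch (all field values are illustrative, and the schema file in the TCPD repository remains the authoritative reference), a dataset file could be generated from Python as follows:

# Write a minimal, illustrative dataset file in the expected JSON format;
# consult the schema in the TCPD repository for the authoritative definition.
import json

dataset = {
    "name": "my_series",               # short, unique name
    "longname": "My Example Series",   # longer descriptive name
    "n_obs": 4,
    "n_dim": 1,
    "time": {"index": [0, 1, 2, 3]},   # consecutive integer indices
    "series": [
        {
            "label": "V1",
            "type": "float",
            "raw": [1.0, 1.2, None, 5.3],   # None is written as null (missing value)
        }
    ],
}

with open("datasets/my_series.json", "w") as fp:
    json.dump(dataset, fp, indent="\t")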

If you want to evaluate the methods in the benchmark on a new dataset, you may want to collect annotations for the dataset. These annotations can be collected in the annotations.json file, which is an object that maps each dataset name to a map from the annotator ID to the marked change points. You can collect annotations using the annotation tool created for this project.
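
For illustration, an entry in annotations.json for a hypothetical dataset could look like the following; the annotator IDs and change point indices are made up, so follow the conventions of the existing file.

# Illustrative structure of an annotations.json entry: a map from dataset
# name to a map from annotator ID to the marked change point indices.
import json

annotations = {
    "my_series": {
        "annotator_1": [120, 340],
        "annotator_2": [118],
        "annotator_3": [],   # an annotator may mark no change points
    }
}

print(json.dumps(annotations, indent=2))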

Finally, add your dataset to the DATASETS field in the abed_conf.py file. Proceed with running the experiments as described above.

License

The code in this repository is licensed under the MIT license, unless otherwise specified. See the LICENSE file for further details. Reuse of the code in this repository is allowed, but should cite our paper.

Notes

If you find any problems or have a suggestion for improvement of this repository, please let us know as it will help us make this resource better for everyone. You can open an issue on GitHub or send an email to gertjanvandenburg at gmail dot com.

Comments
  • The 'prophet' method is ignored by abed

    Hello, I am using a modified version of TCPDBench, but with a similar abed_conf.py file. I added other methods, but did not delete old ones. The best/default methods of "prophet" are in the METHODS list and the commands are defined in COMMANDS and PARAMS.

    The problem: In my "abed_results" folder, there are no prophet folders or results for any of my datasets. Therefore its line in the aggregated wide latex table is empty (that's how I recognized the error).

    I didn't notice any prophet-related errors during runtime of the experiments, is there a bug that prevents prophet from running? Best regards, Simon

    opened by simontrapp 5
  • Issue with cpdbench_bocpdms.py on custom dataset

    So when I run the benchmark suite with my own dataset, some of the methods don't succeed. I am still looking through to see what failed, but there is one thing that is causing issues with generating the summary file when I run "make summary".

    This is the output from abed for default bocpdms on my dataset:

    operands could not be broadcast together with shapes (1014,) (1013,) 
    log model posteriors: [-1.60943791e+000 -1.71559294e+000 -1.28561674e+000 ... -1.08929648e+308
      0.00000000e+000 -4.79582130e+304]
    log model posteriors shape: (1013,)
    {
    	"command": "/TCPDBench/execs/python/cpdbench_bocpdms.py -i /TCPDBench/datasets/driver_scores.json --intensity 100 --prior-a 1.0 --prior-b 1.0 --threshold 0",
    	"dataset": "driver_scores",
    	"dataset_md5": "e342488cf23a6d82985d52ef729d526e",
    	"error": "UnboundLocalError(\"local variable 'growth_log_probabilities' referenced before assignment\")",
    	"hostname": "3e187210786d",
    	"parameters": {
    		"S1": 1,
    		"S2": 1,
    		"intensity": 100.0,
    		"intercept_grouping": null,
    		"lower_AR": 1,
    		"prior_a": 1.0,
    		"prior_b": 1.0,
    		"prior_mean_scale": 0,
    		"prior_var_scale": 1,
    		"threshold": 0,
    		"upper_AR": 5,
    		"use_timeout": false
    	},
    	"result": {
    		"cplocations": null,
    		"runtime": null
    	},
    	"script": "/TCPDBench/execs/python/cpdbench_bocpdms.py",
    	"script_md5": "c1be8d2c933f41a6d0396d86002c6f6f",
    	"status": "FAIL"
    }
    

    The extra output at the top is preventing summarize.py from parsing the json result correctly. Also, some of those log model posterior values are really big, I am not sure if that is correct.

    opened by jayschauer 4
  • Python lib

    Hi,

    Thanks for this repo and your article. They are very cute! It would also be nice if you created a python library that provides the ability to run all the CPD methods from your article on custom data.

    opened by hushchyn-mikhail 4
  • Got this error when ran the docker file

    Installing collected packages: pip, setuptools
      Attempting uninstall: pip
        Found existing installation: pip 20.0.2
        Not uninstalling pip at /usr/lib/python3/dist-packages, outside environment /usr
        Can't uninstall 'pip'. No files were found to uninstall.
      Attempting uninstall: setuptools
        Found existing installation: setuptools 45.2.0
        Not uninstalling setuptools at /usr/lib/python3/dist-packages, outside environment /usr
        Can't uninstall 'setuptools'. No files were found to uninstall.
    Successfully installed pip-20.1 setuptools-46.4.0
    ln: failed to create symbolic link 'pip': File exists
    The command '/bin/sh -c apt-get install -y --no-install-recommends python3 python3-dev python3-tk python3-pip && pip3 install --no-cache-dir --upgrade pip setuptools && echo "alias python='python3'" >> /root/.bash_aliases && echo "alias pip='pip3'" >> /root/.bash_aliases && cd /usr/local/bin && ln -s /usr/bin/python3 python && cd /usr/local/bin && ln -s /usr/bin/pip3 pip && pip install virtualenv abed' returned a non-zero code: 1

    opened by ashutosh1807 3
  • Just a question

    Hello,

    for my research, I am planning to extend the benchmark with my own CPD algorithm and new data sets. Will this again trigger an execution of the existing algorithms on the existing data sets, or will the “missing” values just be added and included in creating the output files? And when this is not the case, will this again trigger an optimization of existing algorithms on existing data sets?

    Lastly, will an execution of existing algorithms happen, if I just change the annotations for existing data sets? Thanks in advance.

    Moritz

    opened by moritzteichner 2
  • Error running docker image on Ubuntu 20.04

    When I run the docker image to reproduce the experiments, I get an error. I think a dependency is missing, as listed by this line:

    REQUIRED DEPENDENCIES AND EXTENSIONS
                         numpy: yes [not found. pip may install it below.]
              install_requires: yes [handled by setuptools]
                        libagg: yes [pkg-config information for 'libagg' could not
                                be found. Using local copy.]
                      **freetype: no  [The C/C++ header for freetype2 (ft2build.h)
                                could not be found.  You may need to install the
                                development package.]**
                           png: yes [version 1.6.37]
                         qhull: yes [pkg-config information for 'libqhull' could not
                                be found. Using local copy.]
    

    But in case you want the full output, here it is:

    100%|██████████| 42/42 [00:00<00:00, 67494.55it/s]Task update removed 0 completed tasks. Tasks remaining: 41580
    Written task file to abed_tasks.txt
    
    There are 41580 tasks left to be done, out of 41580 tasks defined.
    cd execs/python/bocpdms && virtualenv venv && source venv/bin/activate && pip install -r requirements.txt
    created virtual environment CPython3.8.2.final.0-64 in 319ms
      creator CPython3Posix(dest=/TCPDBench/execs/python/bocpdms/venv, clear=False, global=False)
      seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/root/.local/share/virtualenv)
        added seed packages: pip==20.2.1, setuptools==49.2.1, wheel==0.34.2
      activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator
    Collecting scipy==1.1.0
      Downloading scipy-1.1.0.tar.gz (15.6 MB)
    Collecting matplotlib==2.2.2
      Downloading matplotlib-2.2.2.tar.gz (37.3 MB)
        ERROR: Command errored out with exit status 1:
         command: /TCPDBench/execs/python/bocpdms/venv/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-idi0i_6h/matplotlib/setup.py'"'"'; __file__='"'"'/tmp/pip-install-idi0i_6h/matplotlib/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-b3xnf8ei
             cwd: /tmp/pip-install-idi0i_6h/matplotlib/
        Complete output (61 lines):
        ============================================================================
        Edit setup.cfg to change the build options
        
        BUILDING MATPLOTLIB
                    matplotlib: yes [2.2.2]
                        python: yes [3.8.2 (default, Jul 16 2020, 14:00:26)  [GCC
                                9.3.0]]
                      platform: yes [linux]
        
        REQUIRED DEPENDENCIES AND EXTENSIONS
                         numpy: yes [not found. pip may install it below.]
              install_requires: yes [handled by setuptools]
                        libagg: yes [pkg-config information for 'libagg' could not
                                be found. Using local copy.]
                      freetype: no  [The C/C++ header for freetype2 (ft2build.h)
                                could not be found.  You may need to install the
                                development package.]
                           png: yes [version 1.6.37]
                         qhull: yes [pkg-config information for 'libqhull' could not
                                be found. Using local copy.]
        
        OPTIONAL SUBPACKAGES
                   sample_data: yes [installing]
                      toolkits: yes [installing]
                         tests: no  [skipping due to configuration]
                toolkits_tests: no  [skipping due to configuration]
        
        OPTIONAL BACKEND EXTENSIONS
                        macosx: no  [Mac OS-X only]
                        qt5agg: no  [PySide2 not found; PyQt5 not found]
                        qt4agg: no  [PySide not found; PyQt4 not found]
                       gtk3agg: no  [Requires pygobject to be installed.]
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/tmp/pip-install-idi0i_6h/matplotlib/setup.py", line 197, in <module>
            msg = pkg.install_help_msg()
          File "/tmp/pip-install-idi0i_6h/matplotlib/setupext.py", line 592, in install_help_msg
            release = platform.linux_distribution()[0].lower()
        AttributeError: module 'platform' has no attribute 'linux_distribution'
                     gtk3cairo: no  [Requires cairocffi or pycairo to be installed.]
                        gtkagg: no  [Requires pygtk]
                         tkagg: yes [installing; run-time loading from Python Tcl /
                                Tk]
                         wxagg: no  [requires wxPython]
                           gtk: no  [Requires pygtk]
                           agg: yes [installing]
                         cairo: no  [cairocffi or pycairo not found]
                     windowing: no  [Microsoft Windows only]
        
        OPTIONAL LATEX DEPENDENCIES
                        dvipng: no
                   ghostscript: no
                         latex: yes [version 3.14159265]
                       pdftops: no
        
        OPTIONAL PACKAGE DATA
                          dlls: no  [skipping due to configuration]
        
        ============================================================================
                                * The following required packages can not be built:
                                * freetype
        ----------------------------------------
    ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
    WARNING: You are using pip version 20.2.1; however, version 20.2.2 is available.
    You should consider upgrading via the '/TCPDBench/execs/python/bocpdms/venv/bin/python -m pip install --upgrade pip' command.
    make: *** [Makefile:358: execs/python/bocpdms/venv] Error 1
    
    opened by jayschauer 1
  • Install R packages with apt where possible

    These are all dependencies of the packages we actually use, so we still maintain the correct (fixed) versions of those packages.

    This aims to reduce the time needed to build the R virtual environment.

    opened by GjjvdBurg 0
  • Test building virtualenvs on Travis

    This adds a command to the Travis configuration that tests whether the virtual environments can be built correctly. This will hopefully catch issues such as those reported in #5 automatically.

    opened by GjjvdBurg 0
Releases
  • v3.0 (Feb 26, 2022)

    This is an updated release of the Turing Change Point Benchmark, corresponding to v3 of the arXiv paper: https://arxiv.org/abs/2003.06222. The Turing Change Point Benchmark is a benchmark study of change point detection algorithms using real-world time series.

    A high-level overview of the changes is as follows:

    • Expanded the grid search for some methods in the Oracle experiment
    • Changed from rank plots to critical-difference diagrams
    • Added additional analysis of annotator agreement
    • Various minor code changes and improvements
  • v2.0 (May 26, 2020)

    This is an updated release of the Turing Change Point Benchmark, corresponding to v2 of the arXiv paper: https://arxiv.org/abs/2003.06222. The Turing Change Point Benchmark is a benchmark study of change point detection algorithms using real-world time series.

    This release makes the following changes:

    • Added the "zero" baseline method
    • Added a script to compute summary statistics
    • Added rank plots for multivariate datasets
    • Corrected an error in the computation of the F1 score and updated the results. This correction had no major effect on the conclusions of the paper.
  • v1.0.1 (Apr 5, 2020)

    This is the first official release of the Turing Change Point Benchmark, a benchmark study of change point detection algorithms using real-world time series. For more information, see: https://arxiv.org/abs/2003.06222

    This version adds an explicit license file, which was accidentally omitted in v1.0.0.

  • v1.0.0 (Apr 5, 2020)

    This is the first official release of the Turing Change Point Benchmark, a benchmark study of change point detection algorithms using real-world time series. For more information, see: https://arxiv.org/abs/2003.06222

Owner
The Alan Turing Institute
The UK's national institute for data science and artificial intelligence.