Python library for analysis of time series data including dimensionality reduction, clustering, and Markov model estimation

Overview

deeptime

License: LGPL v3 Build Status codecov

Releases:

Installation via conda recommended.

conda-forge: conda install -c conda-forge deeptime
PyPI:        pip install deeptime

Documentation: deeptime-ml.github.io.

Building the latest trunk version of the package:

Using conda for dependency management and python setup.py:

git clone https://github.com/deeptime-ml/deeptime.git

cd deeptime
git submodule update --init

conda install numpy scipy cython scikit-learn pybind11

python setup.py install

Or using pip:

pip install git+https://github.com/deeptime-ml/deeptime.git@main
Comments
  • VAMPnet partial fit

    I'd like to use VAMPNet for a large amount of data. I'm coming from PyEMMA, where managing this is easy thanks to the function pyemma.coordinates.source. I see that deeptime lacks this function, but I do see a partial_fit function on almost all estimators. My problem is how this can be used with VAMPNet. The fit and partial_fit functions seem to do different things: the first one, for instance, also asks for validation data, while the second is satisfied with just the training data; the same goes for the number of epochs. Another question is whether I should fetch the model at the end. Right now I'm trying to loop over my data in the following way:

    import torch
    import torch.nn as nn
    import numpy as np
    from deeptime.data import TimeLaggedDataset
    from deeptime.util.torch import MLP
    from torch.utils.data import DataLoader
    from deeptime.decomposition.deep import VAMPNet
    
    lobe = MLP(units=ns, nonlinearity=nn.ReLU)
    vampnet = VAMPNet(lobe=lobe,learning_rate=3)
    # paths is just a list of strings containing the path to .npy data
    for path in paths:
        data = np.load(path)
        dataset = TimeLaggedDataset.from_trajectory(lagtime=500, data=data.astype(np.float32))
        lobe = MLP(units=ns, nonlinearity=nn.ReLU)
        vampnet = VAMPNet(lobe=lobe, learning_rate=1e-4)
        vampnet.partial_fit((dataset.data,dataset.data_lagged))
    model = vampnet.fetch_model()  # I'm pretty sure this is not right at all as most of the code before
    

    Note that I'm not using train_data and val_data as you did in the documentation, since partial_fit doesn't require them, but I'm pretty sure that I should somehow. I think the documentation is not clear about how to deal with this kind of problem. Thank you very much for your time.
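
    For comparison, a minimal sketch of how such a loop might look if the estimator is created once outside the loop and the data is streamed chunk by chunk (imports and variable names follow the snippet above; ns, paths and n_epochs are placeholders, and validation data is not handled here):

    import numpy as np
    import torch.nn as nn
    from deeptime.data import TimeLaggedDataset
    from deeptime.util.torch import MLP
    from deeptime.decomposition.deep import VAMPNet

    # build lobe and estimator once, so partial_fit keeps accumulating training progress
    lobe = MLP(units=ns, nonlinearity=nn.ReLU)
    vampnet = VAMPNet(lobe=lobe, learning_rate=1e-4)

    for epoch in range(n_epochs):        # epochs are handled manually when streaming
        for path in paths:               # one trajectory file at a time
            data = np.load(path).astype(np.float32)
            dataset = TimeLaggedDataset.from_trajectory(lagtime=500, data=data)
            vampnet.partial_fit((dataset.data, dataset.data_lagged))

    model = vampnet.fetch_model()        # final VAMPNet model for transforming data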

    opened by pl992 14
  • online covariance - example

    hi @brookehus !

    This looks like a very exciting library!

    Is there an example of the online covariance calculation?

    It's a problem I am looking at at the moment.

    Kind regards, Andrew
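
    A minimal sketch of what online (chunked) covariance estimation might look like with deeptime, assuming the Covariance estimator accepts data chunks via partial_fit and exposes the result as cov_00 on the fetched model (chunks is a placeholder for an iterable of data blocks):

    import numpy as np
    from deeptime.covariance import Covariance

    est = Covariance()                     # defaults to the instantaneous covariance C00
    for chunk in chunks:                   # chunks: iterable of (n_frames, n_features) arrays
        est.partial_fit(chunk.astype(np.float64))
    model = est.fetch_model()
    print(model.cov_00)                    # accumulated covariance matrix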

    opened by andrewczgithub 14
  • Add VAMP, CKTests (MSM and VAMP)

    This adds the VAMP estimator/model and the infrastructure for lagged model validation (CK tests).

    While getting this to work, I noticed that calling fit on an estimator has unexpected side effects. That is why we need to take a copy of it in LaggedModelValidator. The factory pattern should make this copy unnecessary, but because we encapsulate the current model instance, we cannot work around it. @clonker, do you think it would be sane to call _create_model upon fit() to avoid this kind of hassle? How would we enforce this behavior without interfering with overridden fit methods?
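
    Roughly, the proposed pattern would look something like this (a sketch only; apart from fit and _create_model, the names here are placeholders, not the actual deeptime base classes):

    class Estimator:
        def __init__(self):
            self._model = None

        def _create_model(self):
            # subclasses return a fresh, empty model instance here
            raise NotImplementedError()

        def fit(self, data):
            # always start from a fresh model so repeated fits have no side effects
            self._model = self._create_model()
            # ... estimation writes into self._model only ...
            return self

        def fetch_model(self):
            return self._model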

    opened by marscher 14
  • Installation runs into CUDA problem

    The bug reported at https://github.com/rusty1s/pytorch_sparse/issues/180 and https://github.com/pyg-team/pytorch_geometric/issues/4095 propagates to deeptime, too. Pinning PyTorch to an older version in the setup might help.

    opened by MQSchleich 13
  • Cannot install deeptime. I have Python 3.10 (Windows 10) and trying to install using Visual Studio 2022

    Describe the bug: A clear and concise description of what the bug is.

    Here's a quick checklist in what to include:

    • [x] Include a detailed description of the bug or suggestion
    • [x] pip list or conda list of the environment you are using (please attach a txt file to the issue).
    • [x] deeptime and operating system versions
    • [x] Minimal example if possible, a Python script, zipped input data (if not too large)
    opened by panandreou 12
  • Different VAMPNET results from CPU/GPU training

    Describe the bug: I tried to run the ala2 notebook (https://github.com/deeptime-ml/deeptime-notebooks/blob/master/examples/ala2-example.ipynb) but ended up with quite different results for GPU vs. CPU training. CPU had a much higher success rate and flat training curves compared to GPU. I am wondering whether this is common or whether I have made a mistake.

    Results: I tested with 10 individual runs with the same parameters as in the tutorial notebook.

    • CPU: training curve and state assignment figures (attached)

    • GPU: training curve and state assignment figures (attached)

    System: AMD EPYC 7551 (CPU), RTX A5000 (GPU), Ubuntu 20.04.1, Python 3.9, torch 1.11.0+cu113, deeptime '0.4.1+8.g38b0158.dirty' (main branch)
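
    For context, discrepancies like this on recent NVIDIA GPUs can stem from TF-32 matrix multiplications being enabled by default in PyTorch (see the later change "Disable TF-32 tensor cores for VAMPNET training" in the release notes). A quick, hedged check is to disable TF-32 before training and see whether GPU results match the CPU ones:

    import torch

    # force full float32 precision for matmul/cuDNN on Ampere and newer GPUs
    torch.backends.cuda.matmul.allow_tf32 = False
    torch.backends.cudnn.allow_tf32 = False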

    opened by yuxuanzhuang 11
  • Sparse Identification of Nonlinear Dynamics

    This pull request provides a simple implementation of the Sparse Identification of Nonlinear Dynamics (SINDy) method for nonlinear model discovery. It includes

    • A SINDy estimator class for fitting SINDyModel objects to data
    • A SINDyModel class encapsulating the learned dynamical system
    • Tests for the important methods and use-cases

    The pull request is not quite complete (I still need to create an example showing how to use the new features), but I figured this was a good place to give you, the Scikit-time team, a chance to look at the implementation. I tried to use existing sktime code as inspiration, but feel free to edit anything that doesn't conform to your standards or style.

    The other thing I'd like to flag is that I wasn't able to test whether the auto-generated documentation all looked okay. I welcome any suggestions on how to improve it.
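
    For a rough idea of how the estimator/model pair described above might be used, here is a hedged sketch (the STLSQ optimizer, the fit signature with t, and model.print() are assumptions, not taken from this pull request):

    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from deeptime.sindy import SINDy, STLSQ   # assumed import locations

    # toy data: exponential decay dx/dt = -2 x, sampled on a regular time grid
    t = np.linspace(0, 1, 100)
    x = 3 * np.exp(-2 * t).reshape(-1, 1)

    estimator = SINDy(library=PolynomialFeatures(degree=2), optimizer=STLSQ(threshold=0.05))
    model = estimator.fit(x, t=t).fetch_model()   # SINDyModel with learned coefficients
    model.print()                                 # e.g. x0' = -2.000 x0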

    opened by briandesilva 11
  • Largest connected set returns a state when it should return an empty set

    The following should print the empty set [] but instead prints [3].

    import numpy as np
    from deeptime.markov import TransitionCountEstimator
    
    estimator = TransitionCountEstimator(lagtime=1, count_mode='sliding')
    
    full_counts_model = estimator.fit_fetch(np.asarray([0, 1, 2, 3]))
    submodel = full_counts_model.submodel_largest(
            connectivity_threshold=1,
            directed=True)
    print(submodel.state_symbols)
    
    opened by MaaikeG 10
  • Adding Koopman operator evaluation, ITS evaluation and CK test to VAMPnet module

    I contacted you a few days ago regarding VAMPnet usage. I solved everything, but I think there's something missing. The references report that you can evaluate the ITS and the CK test straight from the Koopman matrix built on the data transformed by your VAMPnet. Starting from these data it isn't difficult to evaluate the Koopman operator, the ITS and the Chapman-Kolmogorov test on your own, but for completeness I would ask for these functions to be added, so that users can rely on your implementation, which is surely much more robust and compact.

    Again congratulations for this huge and extremely well organized library!
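
    A hedged sketch of the workflow meant here: project the trajectories with the trained VAMPnet, then estimate a Koopman model on the projected output, from which implied timescales follow (vampnet_model, trajectories, the lagtime and the timescales signature are assumptions/placeholders):

    from deeptime.decomposition import VAMP

    # project each trajectory into the learned feature space
    projected = [vampnet_model.transform(traj) for traj in trajectories]

    # estimate a linear Koopman model on the projected data
    koopman = VAMP(lagtime=500).fit(projected).fetch_model()
    print(koopman.timescales(k=4))   # first few implied timescales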

    opened by pl992 10
  • TICA fit-once transform-many

    Hi, in the past with pyEMMA I was able to fit TICA once and project onto a different number of dimensions afterwards. Currently if I use

    tica = TICA(lagtime=20)
    tica.fit(mydata)
    tica.set_params(dim=3)
    tica.transform(mydata)
    

    it seems to ignore the setting after fitting and just returns the full number of dimensions (minus correlated features). Is there any way to achieve the same using deeptime? Since fitting takes a lot of time, it's quite useful to be able to test different numbers of dimensions without re-fitting the whole model.
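
    A hedged sketch of one possible workaround: fit once, then either slice the projected output, or (assuming the fetched Koopman model exposes a settable dim attribute) restrict the dimension on the model rather than on the estimator:

    from deeptime.decomposition import TICA

    tica = TICA(lagtime=20)
    model = tica.fit(mydata).fetch_model()

    # option 1: project once and keep only the leading dimensions
    y3 = model.transform(mydata)[:, :3]

    # option 2 (assumes dim is settable on the model): restrict the model itself
    model.dim = 3
    y3_again = model.transform(mydata)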

    opened by stefdoerr 6
  • Documentation restructuring, some fixes, clustering docs

    • The documentation is now split into apidocs and a more narrative documentation
    • the narrative documentation is composed of Jupyter notebooks which are converted into HTML by the nbsphinx sphinx extension
    • added a first draft of the clustering narrative documentation
    • AMMs and OOMs are now part of the apidocs
    • in mini-batch clustering: previously, calling fit() fell back to ordinary k-means; now it takes shuffled samples of the dataset and performs clustering on these mini batches (see the sketch after this list)
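    For illustration, a minimal sketch of what mini-batch clustering usage might look like (class name and parameters are assumptions, not taken from this pull request):

    import numpy as np
    from deeptime.clustering import MiniBatchKMeans   # assumed class name

    data = np.random.default_rng(42).normal(size=(100000, 2)).astype(np.float32)

    est = MiniBatchKMeans(n_clusters=50)
    est.fit(data)                                     # internally draws shuffled mini batches
    assignments = est.fetch_model().transform(data)   # discrete cluster assignments
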
    opened by clonker 5
  • Allowing weights for VAMP dimensionality reduction

    Is your feature request related to a problem? Please describe. The TICA and VAMP decomposition classes both provide similar interfaces for .fit_from_timeseries(data). However, the TICA class allows a weights argument.

    The VAMP decomposition, however, does not support weights, and throws an error if they're provided (see: https://github.com/deeptime-ml/deeptime/blob/11182accb1f8ce263f7c498b76c94bb657b5a998/deeptime/covariance/util/_running_moments.py#L245 )

    Describe the solution you'd like: Support for weights in VAMP.

    I see some similarity between moments_XXXY() and moments_block(), but it seems like there was probably a reason for omitting support for weights from VAMP -- is that correct?

    opened by jdrusso 6
  • Sort Markov matrix

    Is your feature request related to a problem? Please describe. I am doing Markov modelling for SAR/QSAR analysis of chemical compounds and would need sorted Markov matrices.

    I suggest sorting the Markov matrix according to the most stable state. Something like the following, though with better memory management:

    def sort_markov_matrix(markov_matrix):
        """Sort a Markov matrix so that states appear in order of decreasing
        self-transition probability (most stable state first).

        Args:
            markov_matrix (np.ndarray): unsorted transition matrix

        Returns:
            np.ndarray: sorted Markov matrix
        """
        b = markov_matrix.copy()
        for i in range(len(markov_matrix)):
            ref1 = markov_matrix[i, i]
            for j in range(i + 1, len(markov_matrix)):
                ref2 = markov_matrix[j, j]
                if ref2 > ref1:
                    # swap rows i and j
                    markov_matrix[i, :] = b[j, :]
                    markov_matrix[j, :] = b[i, :]
                    b = markov_matrix.copy()
                    # swap columns i and j
                    for k in range(len(markov_matrix)):
                        markov_matrix[k, i] = b[k, j]
                        markov_matrix[k, j] = b[k, i]
                    b = markov_matrix.copy()
                    # the diagonal entry at i has changed after the swap
                    ref1 = markov_matrix[i, i]
        return markov_matrix
    

    Test with

    
    def test_sort():
        a = np.array([[0.8, 0.1, 0.05, 0.05],
                      [0.005, 0.9, 0.03, 0.015],
                      [0.1, 0.2, 0.4, 0.3],
                      [0.01, 0.02, 0.03, 0.94]])
        sorted_a = sort_markov_matrix(a)
        assert np.array_equal(sorted_a[0, :], np.array([0.94, 0.02, 0.01, 0.03])), str(sorted_a[0, :])
        assert np.array_equal(sorted_a[1, :], np.array([0.015, 0.9, 0.005, 0.03])), str(sorted_a[1, :])
        assert np.array_equal(sorted_a[2, :], np.array([0.05, 0.1, 0.8, 0.05])), str(sorted_a[2, :])
        assert np.array_equal(sorted_a[3, :], np.array([0.3, 0.2, 0.1, 0.4])), str(sorted_a[3, :])
    
    

    What do you think?
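
    For reference, the same reordering can be written more compactly, and without the repeated copies, using argsort; this is just a sketch of an equivalent formulation, not existing deeptime API:

    import numpy as np

    def sort_markov_matrix(markov_matrix):
        # order states by decreasing self-transition probability
        order = np.argsort(np.diag(markov_matrix))[::-1]
        # apply the same permutation to rows and columns
        return markov_matrix[np.ix_(order, order)]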

    opened by MQSchleich 21
  • Feature request - Tcorex covariance matrix

    Hi All!

    Love the library so far.

    I was wondering if I could request a feature: adding calculation of the covariance matrix by the method below.

    https://github.com/hrayrhar/T-CorEx

    It seems difficult to use in its current form, and this API is so much easier to use.

    Kind regards, Andrew

    enhancement 
    opened by andrewczgithub 1
Releases(v0.4.4)
  • v0.4.4(Dec 20, 2022)

    What's Changed

    • remove print by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/257
    • Remove deprecated CK Test validator by @clonker in https://github.com/deeptime-ml/deeptime/pull/258
    • TV derivative and minor documentation fixes by @clonker in https://github.com/deeptime-ml/deeptime/pull/260
    • Lorenz system, Thomas attractor, Improvements regarding documentation, TV Derivative improvements by @clonker in https://github.com/deeptime-ml/deeptime/pull/261
    • Added an optional argument for specifying the tolerance in is_reversi… by @wehs7661 in https://github.com/deeptime-ml/deeptime/pull/262
    • Enable python 3.11 in CI pipelines by @clonker in https://github.com/deeptime-ml/deeptime/pull/263
    • pyproject.toml config by @clonker in https://github.com/deeptime-ml/deeptime/pull/264
    • Update for scikit-learn 1.2 by @clonker in https://github.com/deeptime-ml/deeptime/pull/267
    • cibuildwheel cleanup by @clonker in https://github.com/deeptime-ml/deeptime/pull/268

    New Contributors

    • @wehs7661 made their first contribution in https://github.com/deeptime-ml/deeptime/pull/262

    Full Changelog: https://github.com/deeptime-ml/deeptime/compare/v0.4.3...v0.4.4

  • v0.4.3(Sep 8, 2022)

    What's Changed

    • CI: Update manifest to include npz data files, don't use editable install for nox make_docs by @clonker in https://github.com/deeptime-ml/deeptime/pull/233
    • plot energy surfaces by @clonker in https://github.com/deeptime-ml/deeptime/pull/234
    • Network plots by @clonker in https://github.com/deeptime-ml/deeptime/pull/236
    • plot fluxes by @clonker in https://github.com/deeptime-ml/deeptime/pull/237
    • Fix preprocess data call in SINDy by @clonker in https://github.com/deeptime-ml/deeptime/pull/240
    • Fix shape of empty count models by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/239
    • proper exception handling in trajectory-generating openmp block by @clonker in https://github.com/deeptime-ml/deeptime/pull/242
    • better error handling for trajectories dataset by @clonker in https://github.com/deeptime-ml/deeptime/pull/243
    • [ci] fix python 3.8 by @clonker in https://github.com/deeptime-ml/deeptime/pull/244
    • Convergence warning bugfix by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/245
    • Fix the bug by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/246
    • Contour and density plot utility functions by @clonker in https://github.com/deeptime-ml/deeptime/pull/247
    • py38 by @clonker in https://github.com/deeptime-ml/deeptime/pull/249
    • [setup] exclude examples, docs, devtools when packaging by @clonker in https://github.com/deeptime-ml/deeptime/pull/250
    • Var_cutoff can be disabled in covariance koopman models. Fixes #254. by @clonker in https://github.com/deeptime-ml/deeptime/pull/255

    Full Changelog: https://github.com/deeptime-ml/deeptime/compare/v0.4.2...v0.4.3

  • v0.4.2(Apr 11, 2022)

    What's Changed

    • new tram notebook version by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/210
    • add device option for TAE class by @philipyoung9561 in https://github.com/deeptime-ml/deeptime/pull/211
    • Docs update by @clonker in https://github.com/deeptime-ml/deeptime/pull/213
    • Use CMake as primary build system via scikit-build by @clonker in https://github.com/deeptime-ml/deeptime/pull/215
    • Tram experiments by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/214
    • use Agg backend in plotting tests by @clonker in https://github.com/deeptime-ml/deeptime/pull/216
    • Fix bug in statdist caching for msms by @clonker in https://github.com/deeptime-ml/deeptime/pull/219
    • Tram experiments by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/217
    • typos in _koopman.py by @KirillShmilovich in https://github.com/deeptime-ml/deeptime/pull/221
    • Disable TF-32 tensor cores for VAMPNET training by @yuxuanzhuang in https://github.com/deeptime-ml/deeptime/pull/222
    • Small doc fixes and update testing dependencies by @clonker in https://github.com/deeptime-ml/deeptime/pull/223
    • pyproject.toml: use dynamic version by @clonker in https://github.com/deeptime-ml/deeptime/pull/226
    • Update versioneer and run cython from CMake by @clonker in https://github.com/deeptime-ml/deeptime/pull/228
    • [datasets] correct 1D triple well docs by @thempel in https://github.com/deeptime-ml/deeptime/pull/229
    • update project metadata by @clonker in https://github.com/deeptime-ml/deeptime/pull/230
    • Fix in setup by @clonker in https://github.com/deeptime-ml/deeptime/pull/231
    • update manifest to include cmakelists by @clonker in https://github.com/deeptime-ml/deeptime/pull/232

    New Contributors

    • @philipyoung9561 made their first contribution in https://github.com/deeptime-ml/deeptime/pull/211
    • @KirillShmilovich made their first contribution in https://github.com/deeptime-ml/deeptime/pull/221
    • @yuxuanzhuang made their first contribution in https://github.com/deeptime-ml/deeptime/pull/222

    Full Changelog: https://github.com/deeptime-ml/deeptime/compare/v0.4.1...v0.4.2

  • v0.4.1(Feb 16, 2022)

    What's Changed

    • Update readme and use pyroject.toml in setup by @clonker in https://github.com/deeptime-ml/deeptime/pull/197
    • Hotfix tramdataset by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/196
    • Finish progress callbacks by @clonker in https://github.com/deeptime-ml/deeptime/pull/198
    • Log space sample weights by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/199
    • Implied timescales plot by @clonker in https://github.com/deeptime-ml/deeptime/pull/200
    • Make sample weights normalized by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/201
    • Fix numpy deprecated warning by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/204
    • New Chapman-Kolmogorov test implementation by @clonker in https://github.com/deeptime-ml/deeptime/pull/202
    • Update validation loss on TAE by @clarktemple03 in https://github.com/deeptime-ml/deeptime/pull/205
    • Fix the calculation of the PMF. by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/206
    • Mbar initialization for tram by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/207
    • Minor changes by @clonker in https://github.com/deeptime-ml/deeptime/pull/208
    • fix TRAM bug by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/209

    New Contributors

    • @clarktemple03 made their first contribution in https://github.com/deeptime-ml/deeptime/pull/205

    Full Changelog: https://github.com/deeptime-ml/deeptime/compare/v0.4.0...v0.4.1

  • v0.4.0(Jan 24, 2022)

    What's Changed

    • Catch2 unit tests infrastructure by @clonker in https://github.com/deeptime-ml/deeptime/pull/170
    • Compile time index and include dir hierarchy by @clonker in https://github.com/deeptime-ml/deeptime/pull/171
    • Use mamba over conda for CI by @clonker in https://github.com/deeptime-ml/deeptime/pull/172
    • Fix mapping states back to symbols by @clonker in https://github.com/deeptime-ml/deeptime/pull/174
    • Minor docs updates by @clonker in https://github.com/deeptime-ml/deeptime/pull/176
    • Some cleanup refactoring by @clonker in https://github.com/deeptime-ml/deeptime/pull/177
    • Fix state assignments in HMM submodels by @thempel in https://github.com/deeptime-ml/deeptime/pull/178
    • Flaky test support by @clonker in https://github.com/deeptime-ml/deeptime/pull/179
    • TRAM by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/168
    • Minor documentation fixes for TRAM by @clonker in https://github.com/deeptime-ml/deeptime/pull/181
    • TOC and a bugfix by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/182
    • Unify header location and fix gil scoped release issues by @clonker in https://github.com/deeptime-ml/deeptime/pull/184
    • TRAM tests by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/183
    • Update installation instructions by @clonker in https://github.com/deeptime-ml/deeptime/pull/186
    • TRAM model by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/185
    • Bump numpy from 1.19.3 to 1.21.0 in /tests by @dependabot in https://github.com/deeptime-ml/deeptime/pull/187
    • Allow KMeans with no iterations by @thempel in https://github.com/deeptime-ml/deeptime/pull/189
    • TRAM docs and a unit test by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/188
    • TRAM docs - last little things by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/190
    • TRAM Progress bar handling by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/191
    • Fix submodel_disconnect and add test by @thempel in https://github.com/deeptime-ml/deeptime/pull/193
    • Refactor by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/192
    • Trajectory fragments: split on negative state indices by @MaaikeG in https://github.com/deeptime-ml/deeptime/pull/194
    • Cluster progress by @clonker in https://github.com/deeptime-ml/deeptime/pull/195

    New Contributors

    • @MaaikeG made their first contribution in https://github.com/deeptime-ml/deeptime/pull/168
    • @dependabot made their first contribution in https://github.com/deeptime-ml/deeptime/pull/187

    Full Changelog: https://github.com/deeptime-ml/deeptime/compare/v0.3.1...v0.4.0

  • v0.3.1(Nov 15, 2021)

    Makes deeptime Python 3.10 ready

    What's Changed

    • Nox by @clonker in https://github.com/deeptime-ml/deeptime/pull/164
    • fix matmul import for old torch versions by @thempel in https://github.com/deeptime-ml/deeptime/pull/165
    • make docs with nox by @clonker in https://github.com/deeptime-ml/deeptime/pull/166
    • enable python 3.10 in build matrix by @clonker in https://github.com/deeptime-ml/deeptime/pull/167

    Full Changelog: https://github.com/deeptime-ml/deeptime/compare/v0.3.0...v0.3.1

  • v0.3.0(Nov 1, 2021)

  • v0.2.3(Jan 29, 2021)

  • v0.2.1(Oct 26, 2020)

  • v0.2(Oct 23, 2020)
