A deep learning framework for historical document image analysis

Overview

DIVA-DAF

Built on PyTorch Lightning with Hydra configuration (based on the lightning-hydra-template).

Description

A deep learning framework for historical document image analysis.

How to run

Install dependencies

# clone project
git clone https://github.com/DIVA-DIA/unsupervised_learning.git
cd unsupervised_learning

# create conda environment (IMPORTANT: needs Python 3.8+)
conda env create -f conda_env_gpu.yaml

# activate the environment using .autoenv
source .autoenv

# install requirements
pip install -r requirements.txt

Train the model with the default configuration. CARE: you need to change the value of data_dir in config/datamodule/cb55_10_cropped_datamodule.yaml.
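For example, the relevant field of the datamodule config could look like this (a sketch; all other keys in the file are omitted and the path is a placeholder):

# config/datamodule/cb55_10_cropped_datamodule.yaml (excerpt)
data_dir: /path/to/your/local/copy/of/the/dataset

With data_dir pointing at your data, start training: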

# default run based on config/config.yaml
python run.py

# train on CPU
python run.py trainer.gpus=0

# train on GPU
python run.py trainer.gpus=1

Train using GPU

# [default] train on all available GPUs
python run.py trainer.gpus=-1

# train on one GPU
python run.py trainer.gpus=1

# train on two GPUs
python run.py trainer.gpus=2

# train on CPU
python run.py trainer.accelerator=ddp_cpu

Train using CPU for debugging

# train on CPU
python run.py trainer.accelerator=ddp_cpu trainer.precision=32

Train the model with a chosen experiment configuration from configs/experiment/:

python run.py +experiment=experiment_name

You can override any parameter from the command line like this:

python run.py trainer.max_epochs=20 datamodule.batch_size=64
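Overrides can also be combined with an experiment config, for example (the experiment name and values are placeholders):

python run.py +experiment=experiment_name trainer.max_epochs=20 datamodule.batch_size=64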

Setup PyCharm

  1. Fork this repo
  2. Clone the repo to your local filesystem (git clone CLONELINK)
  3. Clone the repo onto your remote machine
  4. Move into the folder on your remote machine and create the conda environment (conda env create -f conda_env_gpu.yaml)
  5. Run source .autoenv in the root folder on your remote machine (this activates the environment)
  6. Open the folder in PyCharm (File -> Open)
  7. Add the interpreter (Preferences -> Project -> Python Interpreter -> top left gear icon -> Add... -> SSH Interpreter) and follow the instructions (set the correct mapping to enable deployment)
  8. Upload the files (deployment)
  9. Create a wandb account (wandb.ai)
  10. Log in to your remote machine via SSH
  11. Go to the root folder of the framework and activate the environment (source .autoenv OR conda activate unsupervised_learning)
  12. Log into wandb: execute wandb login and follow the instructions
  13. Now you should be able to run the basic experiment from PyCharm

Loading models

You can load the individual model parts (backbone or header) as well as the whole task. To load the backbone or the header, add the field path_to_weights to your experiment config, e.g.:

model:
    header:
        path_to_weights: /my/path/to/the/pth/file

To load the whole task, provide the path to the task checkpoint to the trainer via the field resume_from_checkpoint, e.g.:

trainer:
    resume_from_checkpoint: /path/to/.ckpt/file
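As a sketch, loading pre-trained weights for both parts at once could look like this (the paths are placeholders):

model:
    backbone:
        path_to_weights: /my/path/to/the/backbone.pth
    header:
        path_to_weights: /my/path/to/the/header.pth

Note the difference: resume_from_checkpoint restores the full training state (weights, optimizer state, epoch counter), while path_to_weights only loads the weights of the given model part.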

Freezing model parts

You can freeze either part of the model (backbone or header) with the freeze flag in the config. For example, to freeze the backbone from the command line:

python run.py +model.backbone.freeze=True

In the config (e.g. model/backbone/baby_unet_model.yaml):

...
freeze: True
...

CARE: You cannot train a model that has no trainable parameters (e.g. when both backbone and header are frozen).
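For example, a sketch of the model section of an experiment config that freezes only the backbone (the key layout follows the examples above):

model:
    backbone:
        freeze: True
    header:
        freeze: False

The header stays trainable, so the constraint above is satisfied.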

Selection in datasets

If you use the selection key, you can either pass an int, which takes the first n files, or a list of strings to filter the datasets. If you are using a full-page dataset, be aware that the selection list contains file names without the extension.
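A sketch of both variants in a datamodule config (the key placement and the file names are illustrative assumptions):

datamodule:
    selection: 5   # int: take the first 5 files

or, filtering by file name (without extension for a full-page dataset):

datamodule:
    selection:
        - page_0001
        - page_0002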

Cite us

@misc{vögtlin2022divadaf,
      title={DIVA-DAF: A Deep Learning Framework for Historical Document Image Analysis}, 
      author={Lars Vögtlin and Paul Maergner and Rolf Ingold},
      year={2022},
      eprint={2201.08295},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
Comments
  • Not working with ddp_cpu

    Not working with ddp_cpu

    Describe the bug If we want to run the framework with ddp_cpu as accelerator, it won't work because of a working-directory problem.

    To Reproduce python run.py trainer.accelerator='ddp_cpu' trainer.precision=32

    Expected behavior We can use ddp_cpu to debug our system

    Additional context To avoid this problem, at the moment we can just use the full path to the run.py file ($PWD/run.py).

    Checklist

    • [ ] Add a warning if ddp_cpu and not precision=32
    bug If time Pipeline 
    opened by lvoegtlin 3
  • Use deepspeed to speed up the training

    Use deepspeed to speed up the training

    Is your feature request related to a problem? Please describe. To accelerate the training we could use the deepspeed plugin

    Describe the solution you'd like Make it possible to activate deepspeed through the config

    Checklist

    • [x] Test deepspeed
    • [ ] Include it into the config system
    wontfix If time Pipeline 
    opened by lvoegtlin 3
  • Load model checkpoint instead of default init

    Load model checkpoint instead of default init

    Differentiate between train+test, train-only, and test-only. Already started with two parameters, train and test, to define which parts of the process should be run. Need to include loading from a ckpt for fine-tuning or just testing.

    https://pytorch-lightning.readthedocs.io/en/stable/common/weights_loading.html

    Possibly weights_only would work

    We need to make our own callback which inherits from ModelCheckpoint and overrides/extends it to save just the model checkpoint (https://github.com/PyTorchLightning/pytorch-lightning/blob/bca5adf6de1ae74c7103839aac54c8648464bee6/pytorch_lightning/callbacks/model_checkpoint.py#L485)

    Checklist

    • [x] test check if path_to_weights is set
    • [x] load model state from path
    • [x] create a generic model which takes an encoder and a header (configs)
    • [x] #15
    • [x] save model with a callback (create callback)
    • [x] if we are just testing we need a path_to_weights for both
    Important Module Pipeline 
    opened by lvoegtlin 3
  • Updating dependencies

    Updating dependencies

    Description

    Updating PL, torchmetrics, and pytest to the newest version. Also introduces code coverage with SonarCloud. Each PR will now be tested for test coverage.

    How to Test/Run?

    pytest

    opened by lvoegtlin 2
  • Fixed problem with multiple empty folders in checkpoints

    Fixed problem with multiple empty folders in checkpoints

    Description

    The checkpoint callback created the checkpoints in a dedicated epochs folder. The folder should get deleted once it is no longer the best, but this did not work with the built-in version of the model checkpoint callback either. Solved it by doing a clean-up at the end of the experiment.

    How to Test/Run?

    python run.py trainer.max_epochs=20

    Something missing?

    opened by lvoegtlin 2
  • Feature/datamodule for gif imgs

    Feature/datamodule for gif imgs

    Description

    A datamodule that takes advantage of the index format. It no longer determines the classes by color but takes the classes directly from the raw image and uses the palette as class encoding.

    How to Test/Run?

    pytest or python run.py experiment=development_baby_unet_indexed.yaml

    opened by lvoegtlin 2
  • DDP metric bias

    DDP metric bias

    Is your feature request related to a problem? Please describe. When running an experiment with DDP we have a small data bias if the dataset size is not divisible by batch_size * num_processors. To make users aware of this problem we can add a warning if num_samples % (batch_size * num_processors) != 0. For example, with 1000 samples, batch_size=16, and 4 processes, 1000 % (16 * 4) = 40, so the sampler pads the last incomplete chunk with duplicated samples, which biases the metrics.

    Describe the solution you'd like Raising an error if the condition from above is not met. Also, add a flag to ignore this error (ignore_ddp_bias)

    Describe alternatives you've considered Solve it with the ddp join function from PyTorch but it is very hard to hack that into pl.

    Checklist

    • [x] Create check and warning
    • [x] Add shuffle and drop_last_batch options to datamodule config
    • [x] Add shuffle/drop_last_batch to default config files
    enhancement Pipeline 
    opened by lvoegtlin 2
  • Add the strict parameter to make it possible to load non-fitting models

    Add the strict parameter to make it possible to load non-fitting models

    Describe the feature

    Make it possible to transfer weights between similar models

    Describe the solution you'd like

    A strict parameter in the models which defines how to load the weights when the model does not exactly fit the weights file

    Checklist

    • [x] Add this parameter in the model config
    • [x] Use it to load the model
    • [x] Add log for missed/unexpected keys
    If time Module Pipeline 
    opened by lvoegtlin 2
  • Loss function as config

    Loss function as config

    Is your feature request related to a problem? Please describe. Make it possible to define the loss function in the config.

    Describe the solution you'd like Define some default loss functions and create a config for each. Then hand the criterion object over to the task at the beginning of the training.
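    A sketch of what such a loss config could look like, assuming the framework follows Hydra's _target_ instantiation convention (file name and layout are hypothetical):

    # config/loss/crossentropy.yaml (hypothetical)
    _target_: torch.nn.CrossEntropyLoss

    The task would then receive the instantiated criterion object instead of constructing the loss itself.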

    Checklist

    • [ ] define 4 basic losses (Xentropy, L1, MSE, BCE)
    • [ ] create configs
    • [ ] hand over the loss function as a parameter to the task
    enhancement If time Module Pipeline 
    opened by lvoegtlin 2
  • Specify metric via callback

    Specify metric via callback

    Is your feature request related to a problem? Please describe. To make the system more flexible, we have to implement the metrics as callbacks s.t. we can combine multiple metrics and also reuse them in other tasks.

    Describe the solution you'd like Implement mIoU (jar fashion), precision, recall, and accuracy as metric callbacks. Call the metrics at the end of the steps. Also make sure that when we are testing with DDP we either run on just one GPU or use join.

    Checklist

    • [x] Implement DIVA HisDB metric class (our metric)
    • [x] Metric which is exactly like the jar
    • [x] Create config for mIoU
    enhancement If time Module Pipeline 
    opened by lvoegtlin 2
  • Feature/add fcn

    Feature/add fcn

    Description

    UNet now has a swappable classifier. This makes working with it much easier, as we can easily fine-tune it on a dataset with more or fewer classes.

    How to Test/Run?

    pytest or python run.py

    opened by lvoegtlin 1
  • Training/validation and test time

    Training/validation and test time

    Is your feature request related to a problem? Please describe. Get the exact time for the training (incl. validation) and the testing in seconds. This can be reported overall as well as per epoch. The setup time of the framework should be excluded.

    Describe the solution you'd like Log these times to the used loggers and report them in the experiment summary file.

    Checklist

    • [ ] Check if PL already provides such a feature
    • [ ] Create timers for the different phases
    • [ ] Report these times
    • [ ] Test
    • [ ] PR
    opened by lvoegtlin 1
  • More complex return

    More complex return

    Is your feature request related to a problem? Please describe. Let the framework return more information, like best model path, metrics, etc., as a dictionary, s.t. calling files can chain together multiple framework runs.

    Describe the solution you'd like With a dictionary

    Checklist

    • [ ] Check what return information is needed
    • [ ] Add it to the execution class
    • [ ] Test
    • [ ] PR
    enhancement Needed Config 
    opened by lvoegtlin 0
  • Rework the backbone header model

    Rework the backbone header model

    Is your feature request related to a problem? Please describe. Think about the current backbone-header model and try to adapt it to the new needs; possibly change it to a new model.

    Checklist

    • [ ] Evaluate the existing model with the new needs
    • [ ] Think about solutions
    • [ ] Prototype the solutions
    • [ ] Implementation (models, workflow, callbacks)
    • [ ] Config adaption
    • [ ] Test
    • [ ] PR
    enhancement Needed Config 
    opened by lvoegtlin 0
  • Test if possible conf_mat from base_task into a callback

    Test if possible conf_mat from base_task into a callback

    Is your feature request related to a problem? Please describe. The problem before with the conf mat callback was that it had a semaphore leak. As described here (https://github.com/ashleve/lightning-hydra-template/issues/189#issuecomment-1003532448), it should work now with the usage of torchmetrics.

    Checklist

    • [ ] Factor the conf mat log into callback
    • [ ] Extensive testing
    • [ ] Tests
    • [ ] PR
    enhancement Config 
    opened by lvoegtlin 0
  • Update hydra to 1.2

    Update hydra to 1.2

    Is your feature request related to a problem? Please describe. Update hydra to the newest version

    Checklist

    • [ ] update
    • [ ] adapt code
    • [ ] test
    • [ ] PR
    enhancement 
    opened by lvoegtlin 0
  • Hyperparameter optimization

    Hyperparameter optimization

    Is your feature request related to a problem? Please describe. Create a possibility to do hyperparameter optimization with the framework

    Checklist

    • [ ] Check out which one works best
    • [ ] integrate it or use it as a script
    • [ ] Test
    • [ ] PR
    enhancement 
    opened by lvoegtlin 0
Releases(version_0.2.2)
  • version_0.2.2(Jun 24, 2022)

    What's Changed

    • Experiment for rotnet with unet backbone by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/101
    • Created additional tests by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/100
    • Updated the version on PL to 1.5.10 by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/112
    • Added tests for RolfFormat datamodule and RGB takes by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/114
    • Release 0.2.2 by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/113

    Full Changelog: https://github.com/DIVA-DIA/DIVA-DAF/compare/version_0.2.1...version_0.2.2

  • version_0.2.1(Dec 2, 2021)

    What's Changed

    • Fixed selection parameter, removed todos, improved print_config, added self to configs by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/87
    • Added tests for tasks and fixed merge scripts by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/89
    • New log folder structure by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/91
    • Replacing numpy with torch in divahisdb functional by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/93
    • Rename config saved during a run, and print commands to rerun a run by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/95
    • Release 0.2.1 by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/98

    Full Changelog: https://github.com/DIVA-DIA/unsupervised_learning/compare/version_0.2.0...version_0.2.1

  • version_0.2.0(Nov 25, 2021)

    Some new things

    • new architectures (resnet)
    • new datamodules (rolf format, RGB, full-page, and SSL)
    • different bug fixes
    • experiment configs
    • refactoring and deletion of unused code
    • callback to check the compatibility of backbone and header
    • inference/prediction stage (list of files with regex)
    • freezing header or backbone
    • improved readme
    • improved testing

    What's Changed

    • Dev data refactoring by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/74
    • Dev rgb encoding by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/76
    • RotNet by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/75
    • log more by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/77
    • More architectures by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/78
    • Dev fixing tests by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/79
    • Created resnet FCN header by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/83
    • Dev rolf data format by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/84
    • Introduce inference/prediction and refactoring by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/85
    • release 0.2.0 by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/86

    Full Changelog: https://github.com/DIVA-DIA/unsupervised_learning/compare/version_0.1.1...version_0.2.0

  • version_0.1.1(Oct 22, 2021)

    Changelog:

    • fixed conf mat
    • optimized test and validation step
    • improved merging of crops
    • more metrics and optimizers
    • updated requirements

    What's Changed

    • made tests running also in the terminal by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/60
    • fixed evaluation tool problem by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/62
    • adding new optimiser configs by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/64
    • removed unused dependency by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/65
    • Dev improve datamodule tests by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/66
    • Dev fixing conf and f1 heatmap by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/68
    • :art: each worker of the dl gets now an own seed by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/69
    • Dev reduce gpu memory by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/71
    • upload run config by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/72
    • release version 0.1.1 by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/73

    Full Changelog: https://github.com/DIVA-DIA/unsupervised_learning/compare/version_0.1.0...version_0.1.1

  • version_0.1.0(Oct 6, 2021)

    The first version of the framework

    What's Changed

    • Dev 38 create hydra configs by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/1
    • Dev 47 better logger name by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/3
    • Dev 43 configurable optimizers by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/2
    • Dev 44 load model checkpoint by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/16
    • dev synced metric logging by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/17
    • When DDP num_workers = 0 was forced by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/19
    • Resolve ddp warning by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/20
    • Add strict parameter by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/21
    • Config refinement by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/23
    • Save config file for each run by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/28
    • add env by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/29
    • Dev 25 torchmetric introduction by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/30
    • Removed custom hydra config by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/32
    • Dev 24 abstract task class by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/33
    • Dev 26 loading warning improvements by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/34
    • update pl to 1.4.4 by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/36
    • Loss functions as config by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/37
    • ddp cpu not working by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/39
    • Dev shuffle data option by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/44
    • Dev dataset selected pages by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/49
    • Dev 9 metric as config by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/47
    • Fix conf mat and extend by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/51
    • Save metrics to csv by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/52
    • Check backbone header compatibility by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/53
    • abstract datamodule and resolvers by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/56
    • Dev refactoring and tests by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/57
    • Dev 34 refactoring semantic segmentation by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/58
    • Version 0.1.0 of the fw by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/59

    Full Changelog: https://github.com/DIVA-DIA/unsupervised_learning/commits/version_0.1.0
