Notebook and code to synthesize complex, high-dimensional datasets using Gretel APIs.

Related tags

Deep Learning, trainer
Overview

Gretel Trainer

This code is designed to help users successfully train synthetic models on complex datasets with high row and column counts. It works by intelligently dividing a dataset into a set of smaller datasets of correlated columns, which can be trained in parallel and then joined back together.

Get Started

Running the notebook

  1. Launch the Notebook in Google Colab or your preferred environment.
  2. Add your dataset and Gretel API key to the notebook.
  3. Generate synthetic data!

NOTE: If you are starting a dataset run from scratch, either delete the existing cache file or choose a new cache file name.
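For reference, here is a minimal usage sketch of the Trainer API outside the notebook (the dataset path is illustrative; see the notebook for the exact calls):

    from gretel_trainer import trainer

    # Train a synthetic model on a CSV (local path or URL)
    model = trainer.Trainer()
    model.train("my_dataset.csv")

    # Sample synthetic records from the trained model
    synthetic_df = model.generate()
    print(synthetic_df.head())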

TODOs / Roadmap

  • Enable additional sampling from trained models.
  • Detect and label encode random UIDs (preprocessing).
Comments
  • Benchmark route Amplify models through Trainer

    Benchmark route Amplify models through Trainer

    Top level change

    Now that Trainer has a GretelAmplify model, Benchmark uses Trainer for Amplify runs instead of the SDK.

    Refactor

    I refactored Benchmark's Gretel models and executors with the goal of centralizing this logic, making it simpler to understand:

    • which model types use Trainer (opt-in) vs. use the SDK
    • the "compatibility requirements" for different models (currently: LSTM <= 150 columns, GPTX == 1 column)

    These had been spread across a few different places (compare.py determined Trainer/SDK, gretel/sdk.py had GPTX compatibility, gretel/trainer.py had LSTM compatibility), but now it can all be found in gretel/models.py.
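    As a rough illustration, the centralized rules might look like the following sketch (the function name and shape are hypothetical, not the actual gretel/models.py API):

    def _is_compatible(model_type: str, column_count: int) -> bool:
        # Hypothetical sketch of the centralized compatibility rules
        if model_type == "lstm":
            return column_count <= 150  # LSTM supports at most 150 columns
        if model_type == "gptx":
            return column_count == 1  # GPTX handles exactly one column
        return True  # other model types have no column-count restriction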

    At first glance it would seem compatibility requirements could be defined on specific model subclasses to make things more polymorphic. However, Benchmark's Gretel model classes are really just friendly wrappers around specific model configurations (from the blueprints repo) and do not represent all possible instances of that model type running through Benchmark. Instead, we instruct users to subclass the generic GretelModel base class when they want to provide their own specific Gretel configuration. There are two reasons for this:

    1. It's a simpler instruction (always subclass this one thing)
    2. It enables us to include model types that are not yet "first class supported," such as DGAN (which we can't support in the same way we do models like Amplify/LSTM/etc. because DGAN's config includes required fields that are specifically coupled to the data source—there is no "one size fits all" blueprint).

    Small fixes

    • fix the model_slug value for Trainer's GretelACTGAN model
      • :warning: should this be changed to a list ["actgan", "ctgan"] for a little while for a smoother transition/deprecation experience??
    • zero-index custom model runs' run-identifier to match gretel model runs (which were themselves fixed to match project names here)
    opened by mikeknep 2
  • Lift gretel model compatibility to separate module

    Lift gretel model compatibility to separate module

    What's here

    Make it easier to find the "compatibility rules" for models by lifting the logic to its own module.

    Why not add this logic to the specific model classes? Wouldn't that be more polymorphic?

    The model classes (GretelLSTM, GretelCTGAN, etc.) are wrappers around specific configurations from the blueprints repo. They do not represent every possible configuration of that model type. If a user wants to run a customized LSTM config, for example, they subclass GretelModel, not GretelLSTM:

    class MyLstm(GretelModel):
        config = "/path/to/my_lstm.yml"
    

    Note: they could subclass GretelLSTM, but 1) it's easier to tell people to always subclass GretelModel regardless of model type, and 2) this ultimately treats the model configuration as the source of truth.

    If someone mistakenly created a custom Gretel model like this...

    class MyGptX(GretelGPTX):
        config = "/path/to/my_amplify.yml"
    

    ...Benchmark will treat this as an Amplify model, because basically all it does with the class instance is grab the config attribute (and the name; the results output will show the name as MyGptX).

    opened by mikeknep 1
  • Lr/artifact manifest

    Lr/artifact manifest

    Added logic for config selection and updated dictionary key to access manifest per latest internal changes.

    Note that the high-dimensionality-high-record config is non-existent at the moment, as is the manifest endpoint :)

    Items yet to be addressed:

    • turn off partitions for non-LSTM models
    opened by lipikaramaswamy 1
  • Add param to pass custom base configuration

    Add param to pass custom base configuration

    • Prefer config if present, otherwise use the model_type's default config (see the sketch after this list).
    • This does open the door a little wider to setting an invalid config that won't be known to be bad until attempting to train. That door was already slightly ajar in that one could use model_params to set keys to invalid values.
    • Not included here, but a thought: we could validate model_type earlier (even as the very first step of __init__) to fail fast, specifically before even creating a project.
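    A hedged sketch of the described behavior (the parameter names are assumptions based on the PR text, not a confirmed signature):

    from gretel_trainer import trainer

    # config, when provided, is preferred over model_type's default config
    model = trainer.Trainer(
        model_type="GretelLSTM",  # assumed existing parameter
        config="/path/to/custom_config.yml",  # new parameter from this PR
    )
    model.train("my_dataset.csv")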
    opened by mikeknep 1
  • Remove no-op elif case from runner

    Remove no-op elif case from runner

    Particularly given that we now have a third model (Amplify) supported in Trainer, we can remove this no-op elif clause so that the runner only has special logic for / awareness of LSTM (expand up in the diff for context).

    opened by mikeknep 0
  • Switch CTGAN usages to ACTGAN.

    Switch CTGAN usages to ACTGAN.

    ACTGAN is the successor of CTGAN.

    Note (1): this change is backward compatible, as all of the parameters that CTGAN supported are supported by ACTGAN as well.

    Note (2): any previously trained CTGAN models will be still usable, i.e. it will be possible to generate new records using old CTGAN models.

    opened by pimlock 0
  • Fix off-by-one difference between project name and run ID

    Fix off-by-one difference between project name and run ID

    Quick fix so that Benchmark's internal run identifier lines up with the project name in Gretel Cloud. We'll eventually have a more user-friendly and stable interface for accessing detailed run information, but until we figure out exactly how we want that to look and implement it, this should make things a little friendlier for those willing to dive into the internals: the models from project benchmark-{timestamp}-3 will correspond to comparison.results_dict["gretel-3"] (instead of "gretel-4").

    Note: I considered just using the full project name as the identifier instead of gretel-{index}, but we don't have an equivalent to project names for user custom model runs, so I figure the current [gretel|custom]-{index} approach is still best for now.

    opened by mikeknep 0
  • Configure session before starting Benchmark comparison

    Configure session before starting Benchmark comparison

    Current behavior

    When running in an environment where no Gretel credentials can be found (e.g. Colab), when Benchmark kicks off a comparison the background threads instantiating Trainer instances will prompt for an API key. This is problematic for multiple reasons, all (I believe) due to it running in multiple background threads: it prompts multiple times, doesn't accept input and/or cache properly, and ultimately crashes.

    This fix

    Benchmark itself now checks for a configured session before kicking off any real work. It prompts (api_key="prompt") if no credentials are found, validates (validate=True) the supplied API key, and caches (cache="yes") it for all the runs it manages. The configure_session calls that happen when instantiating Trainer effectively "pass through." I've tested this by installing trainer from this branch in Colab and it is now working as expected.
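    The underlying call is presumably the standard gretel_client pattern; a minimal sketch:

    from gretel_client import configure_session

    # Prompt for an API key only if no credentials are found, validate the
    # supplied key, and cache it for all subsequent runs in this environment
    configure_session(api_key="prompt", validate=True, cache="yes")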

    opened by mikeknep 0
  • Include dataset name in trainer uploads.

    Include dataset name in trainer uploads.

    Add original file name to data sources uploaded as part of trainer projects. This helps disambiguate the data sources from multiple trainer runs where previously they were always named trainer_0.csv, trainer_1.csv, etc.

    Also fixes StrategyRunner to not silently swallow all ApiExceptions when submitting a job, so errors not associated with the max job limit are still thrown and surfaced to the user.
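    A rough sketch of the submission fix (the import path and the job-limit check are assumptions; the real logic in StrategyRunner may differ):

    from gretel_client.rest.exceptions import ApiException

    def submit(job):
        try:
            job.submit()  # hypothetical submission call
        except ApiException as exc:
            if "max" not in str(exc).lower():  # assumed job-limit marker
                raise  # surface unrelated API errors to the user
            # at the max job limit: swallow and let the runner retry later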

    opened by kboyd 0
  • Auto-determine best model from training data

    Auto-determine best model from training data

    Rather than create a GretelAuto model class that would need to override or work around several _BaseConfig details (validation, max/limit values, etc.), my goal here is to establish the convention that model type is optional, and that if you don't specify one when instantiating the Trainer, you're OK with us choosing for you. This is a change from the current behavior (model type is optional but defaults to LSTM). In this case, we defer setting the trainer instance's self.model_type until we can determine the best model to use: namely, at train time, once a dataset has been provided.

    I'm a little unclear on the load (from cache) workflow, which in this branch's implementation would set the StrategyRunner's model_config to None. I think this is OK because the only methods referencing that value are part of training (train_all_partitions => train_next_partition => train_partition), and that workflow is only kicked off by the Trainer's train method, which will load in data and use it to determine and set a concrete model.

    I've also added an optional delimiter parameter to train to help support files with non-comma delimiters.
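    For example, a minimal sketch of both changes together (the file name is illustrative):

    from gretel_trainer import trainer

    # No model_type given: the trainer determines the best model at train time
    model = trainer.Trainer()
    model.train("my_dataset.tsv", delimiter="\t")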

    opened by mikeknep 0
  • Get average sqs score from across partitions

    Get average sqs score from across partitions

    A few ways we could slice and dice this; I figure there may be additional SQS info we want from runs in the future, so I decided to expose the entire List[dict] from the runner and let the trainer pluck out and calculate this first aggregate, user-friendly value. I'm open to pushing more of this down to the runner and/or transforming the SQS dictionaries into first-class types (likely dataclasses) if anyone has a strong opinion or thinks it'd be useful.
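    A minimal sketch of the aggregation (the "score" key is an assumption about the shape of each partition's SQS dict):

    from typing import List

    def average_sqs(sqs_reports: List[dict]) -> float:
        # Each dict holds one partition's SQS report
        scores = [report["score"] for report in sqs_reports]
        return sum(scores) / len(scores)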

    opened by mikeknep 0
  • Use artifact manifest for determine_best_model.

    Use artifact manifest for determine_best_model.

    Not fully tested. Waiting for new backend API to be available.

    Should revisit retry logic if we can reliably distinguish between a pending manifest (still being generated) and some other error. Or if retrying is included in the gretel_client interface.

    opened by kboyd 1
Releases (v0.5.0)
  • v0.5.0 (Nov 18, 2022)

    What's Changed

    • GretelCTGAN has been completely removed, fully replaced by its successor, GretelACTGAN
    • GretelACTGAN uses the new tabular-actgan config by default
    • Benchmark now routes Amplify models through Trainer rather than the SDK
    • Bug fix: helper to properly configure Gretel session before starting Benchmark comparison when unset
    • Bug fix: zero-index Benchmark run ID (internal) to fix off-by-one difference with project name

    Full Changelog: https://github.com/gretelai/trainer/compare/v0.4.1...v0.5.0

  • v0.4.1 (Nov 2, 2022)

    What's Changed

    • Add pip install command and Colab disclaimer to Benchmark notebook by @mikeknep in https://github.com/gretelai/trainer/pull/22
    • Include dataset name in trainer uploads. by @kboyd in https://github.com/gretelai/trainer/pull/21
    • Docs improvements by @MasonEgger (https://github.com/gretelai/trainer/pull/23 https://github.com/gretelai/trainer/pull/24 https://github.com/gretelai/trainer/pull/28 https://github.com/gretelai/trainer/pull/26)
    • Add support for Gretel Amplify by @pimlock in https://github.com/gretelai/trainer/pull/29

    New Contributors

    • @kboyd made their first contribution in https://github.com/gretelai/trainer/pull/21
    • @MasonEgger made their first contribution in https://github.com/gretelai/trainer/pull/23
    • @pimlock made their first contribution in https://github.com/gretelai/trainer/pull/29

    Full Changelog: https://github.com/gretelai/trainer/compare/v0.4.0...v0.4.1

  • v0.4.0 (Oct 6, 2022)

    What's Changed

    • Initial release of new Benchmark module :rocket: by @mikeknep in https://github.com/gretelai/trainer/pull/19
    • Create simple-conditional-generation.ipynb :notebook: by @zredlined in https://github.com/gretelai/trainer/pull/18

    Full Changelog: https://github.com/gretelai/trainer/compare/v0.3.0...v0.4.0

  • v0.3.0 (Aug 30, 2022)

  • v0.2.3 (Aug 24, 2022)

    What's Changed

    • The trainer now chooses the best model configuration based on input training data when model_type is not specified in advance at Trainer instantiation (previously defaulted to GretelLSTM)
    • train accepts an optional delimiter argument (defaults to comma when unspecified)
    • Input training data is divided more equally across row partitions
    • LSTM models generate a consistent number of records (5000) during training (previously this matched the size of the input training data)
    • Fixed trainer generate to synthesize the correct number of records when multiple row partitions are used
    • Fixed trainer get_sqs_score method

    Full Changelog: https://github.com/gretelai/trainer/compare/v0.2.2...v0.2.3

  • v0.2.2 (Aug 11, 2022)

    What's Changed

    • Update default model config by @zredlined in https://github.com/gretelai/trainer/pull/10
    • Remove project delete instruction by @drew in https://github.com/gretelai/trainer/pull/11
    • CTGAN and conditional data generation by @zredlined in https://github.com/gretelai/trainer/pull/12
    • Get average sqs score from across partitions by @mikeknep in https://github.com/gretelai/trainer/pull/14

    Full Changelog: https://github.com/gretelai/trainer/compare/v0.2.1...v0.2.2

  • v0.2.1 (Jun 16, 2022)

  • v0.2.0 (Jun 10, 2022)

  • v0.1.0 (Jun 10, 2022)

Owner
Gretel.ai
Gretel.ai Open Source Projects and Tools