Traingenerator πŸ§™ A web app to generate template code for machine learning ✨

Overview

πŸŽ‰ Traingenerator is now live! πŸŽ‰ Try it out: https://traingenerator.jrieke.com


Generate custom template code for PyTorch & sklearn, using a simple web UI built with streamlit. Traingenerator offers multiple options for preprocessing, model setup, training, and visualization (using Tensorboard or comet.ml). It exports to .py, Jupyter Notebook, or Google Colab. The perfect tool to jumpstart your next machine learning project!


For updates, follow me on Twitter, and if you like this project, please consider sponsoring ☺




Adding new templates

You can add your own template in 4 easy steps (see below), without changing any code in the app itself. Your new template will be automatically discovered by Traingenerator and shown in the sidebar. That's it! 🎈

Want to share your magic? πŸ§™ PRs are welcome! Please have a look at CONTRIBUTING.md and write on Gitter.

Some ideas for new templates: Keras/Tensorflow, Pytorch Lightning, object detection, segmentation, text classification, ...

  1. Create a folder under ./templates. The folder name should be the task that your template solves (e.g. Image classification). Optionally, you can add a framework name (e.g. Image classification_PyTorch). Both names are automatically shown in the first two dropdowns in the sidebar (see image). ✨ Tip: Copy the example template to get started more quickly (an example folder layout is sketched after this list).
  2. Add a file sidebar.py to the folder (see example). It needs to contain a method show(), which displays all template-specific streamlit components in the sidebar (i.e. everything below Task) and returns a dictionary of user inputs (a minimal sketch is shown after this list).
  3. Add a file code-template.py.jinja to the folder (see example). This Jinja2 template is used to generate the code. You can write normal Python code in it and modify it (through Jinja) based on the user inputs in the sidebar (e.g. insert a parameter value from the sidebar or show different code parts based on the user's selection); a small sketch follows after this list.
  4. Optional: Add a file test-inputs.yml to the folder (see example). This simple YAML file should define a few possible user inputs that can be used for testing. If you run pytest (see below), it will automatically pick up this file, render the code template with its values, and check that the generated code runs without errors. This file is optional – but it's required if you want to contribute your template to this repo.
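
Putting the four steps together, a new template folder might look like the sketch below (the task/framework name and comments are only illustrative):

templates/
└── Image classification_PyTorch/
    β”œβ”€β”€ sidebar.py              # step 2: template-specific streamlit widgets
    β”œβ”€β”€ code-template.py.jinja  # step 3: Jinja2 template that generates the code
    └── test-inputs.yml         # step 4 (optional): example inputs for pytest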
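
A minimal sidebar.py for step 2 could look roughly like this. The specific widgets and input keys (model, lr, gpu) are made up for illustration; the only contract described above is that show() renders the template-specific streamlit components and returns a dictionary of user inputs:

import streamlit as st


def show():
    """Shows template-specific widgets in the sidebar and returns the user inputs."""
    inputs = {}
    inputs["model"] = st.sidebar.selectbox("Which model?", ["ResNet-18", "AlexNet"])
    inputs["lr"] = st.sidebar.number_input("Learning rate", value=0.001, format="%f")
    inputs["gpu"] = st.sidebar.checkbox("Use GPU if available", value=True)
    return inputs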
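
For step 3, the Jinja2 template is ordinary Python code with Jinja placeholders and blocks that are filled from the dictionary returned by show(). A tiny sketch, reusing the hypothetical model, lr, and gpu inputs from the sidebar example above:

# code-template.py.jinja (sketch)
import torch
import torchvision.models as models

lr = {{ lr }}  # value inserted directly from the sidebar inputs

{% if gpu %}
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
{% else %}
device = torch.device("cpu")
{% endif %}

{% if model == "ResNet-18" %}
model = models.resnet18()
{% else %}
model = models.alexnet()
{% endif %}
model = model.to(device)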

Installation

Note: You only need to install Traingenerator if you want to contribute or run it locally. If you just want to use it, head to https://traingenerator.jrieke.com.

git clone https://github.com/jrieke/traingenerator.git
cd traingenerator
pip install -r requirements.txt

Optional: For the "Open in Colab" button to work, you need to set up a GitHub repo where the notebook files can be stored (Colab can only open public files if they are on GitHub). After setting up the repo, create a file .env with the following content:

GITHUB_TOKEN=<your-github-access-token>
REPO_NAME=<user/notebooks-repo>

If you don't set this up, the app will still work but the "Open in Colab" button will only show an error message.
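
For context, the button works by uploading the generated notebook to that GitHub repo and opening it in Colab via its GitHub URL. A minimal sketch of the idea (not the app's actual code; it assumes the python-dotenv and PyGithub packages, and the function and file names are made up):

import os

from dotenv import load_dotenv
from github import Github


def upload_notebook_and_get_colab_url(notebook_json: str, path: str = "generated/notebook.ipynb") -> str:
    """Pushes a notebook to the repo configured in .env and returns a Colab link to it."""
    load_dotenv()  # reads GITHUB_TOKEN and REPO_NAME from the .env file
    repo = Github(os.environ["GITHUB_TOKEN"]).get_repo(os.environ["REPO_NAME"])
    # A real implementation would use update_file if the path already exists.
    repo.create_file(path, "Add generated notebook", notebook_json, branch="main")
    # Colab can open any public notebook that lives on GitHub through this URL scheme.
    return f"https://colab.research.google.com/github/{os.environ['REPO_NAME']}/blob/main/{path}"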

Running locally

streamlit run app/main.py

Always run this from the traingenerator directory (not from the app directory); otherwise, the app will not be able to find the templates.

Deploying to Heroku

First, install the Heroku CLI and log in. To create a new deployment, run the following inside the traingenerator directory:

heroku create
git push heroku main
heroku open

To update the deployed app, commit your changes and run:

git push heroku main

Optional: If you set up a GitHub repo to enable the "Open in Colab" button (see above), you also need to run:

heroku config:set GITHUB_TOKEN=<your-github-access-token>
heroku config:set REPO_NAME=<user/notebooks-repo>

Testing

First, install pytest and required plugins via:

pip install -r requirements-dev.txt

To run all tests:

pytest ./tests

Note that this only tests the code templates (i.e. it renders them with different input values and makes sure that the code executes without error). The streamlit app itself is not tested at the moment.
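
Conceptually, each template test does something close to the following sketch (simplified; the real tests live in ./tests and the exact layout of test-inputs.yml may differ):

import jinja2
import yaml


def render_and_run(template_dir: str) -> None:
    """Renders code-template.py.jinja with the values from test-inputs.yml and executes the result."""
    with open(f"{template_dir}/test-inputs.yml") as f:
        inputs = yaml.safe_load(f)  # e.g. {"model": "ResNet-18", "lr": 0.001, "gpu": False}
    with open(f"{template_dir}/code-template.py.jinja") as f:
        code = jinja2.Template(f.read()).render(**inputs)
    exec(compile(code, "generated_code.py", "exec"), {})  # raises if the generated code errors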

You can also test an individual template by passing the name of the template dir to --template, e.g.:

pytest ./tests --template "Image classification_scikit-learn"

The mage image used in Traingenerator is from Twitter's Twemoji library and released under the Creative Commons Attribution 4.0 International Public License.
