🚀 An end-to-end ML application using PyTorch, W&B, FastAPI, Docker, Streamlit and Heroku

Creating an End-to-End ML Application w/ PyTorch

🚀 This project was created using the Made With ML boilerplate template. Check it out to start creating your own ML applications.

Overview

  • Why do we need to build end-to-end applications?
    • By building e2e applications, you ensure that your code is organized, tested, interactive and easy to scale up or assimilate into larger pipelines.
    • If you're someone in industry and are looking to showcase your work to future employers, it's no longer enough to just have code in Jupyter notebooks. ML is just another tool and you need to show that you can use it in conjunction with all the other software engineering disciplines (frontend, backend, devops, etc.). The perfect way to do this is to create end-to-end applications that utilize all these different facets.
  • What are the components of an end-to-end ML application?
    1. Basic experimentation in Jupyter notebooks.
      • We aren't going to completely dismiss notebooks because they're still a great tool for iterating quickly. Check out the notebook for our task here → notebook
    2. Moving our code from notebooks to organized scripts.
      • Once we've done some basic development (on downsized datasets), we want to move our code to scripts to reduce technical debt. We'll create functions and classes for different parts of the pipeline (data, model, train, etc.) so we can easily make them robust for different circumstances.
      • We used our own boilerplate to organize our code before moving any of the code from our notebook.
    3. Proper logging and testing for your code.
      • Log key events (preprocessing, training performance, etc.) using the built-in logging library. Also use logging to record new inputs and outputs during prediction so you can catch issues (a minimal logging sketch appears right after this list).
      • You also need to properly test your code. You'll add and update your functions and their tests over time, but it's important to at least start testing crucial pieces of your code from the beginning. These typically include sanity checks on preprocessing and modeling functions to catch issues early. There are many options for testing Python code, but we'll use pytest here.
    4. Experiment tracking.
      • We use Weights and Biases (WandB), where you can easily track all the metrics of your experiment, config files, performance details, etc. for free. Check out the Dashboards page for an overview and tutorials.
      • When you're developing your models, start with simple approaches first and then slowly add complexity. You should clearly document (README, articles and WandB reports) and save your progression from simple to more complex models so your audience can see the improvements. The ability to write well and document your thinking process is a core skill to have in research and industry.
      • WandB also has free tools for hyperparameter tuning (Sweeps) and for data/pipeline/model management (Artifacts).
    5. Robust prediction pipelines.
      • When you deploy an ML application for the real world to use, you can't just look at the softmax scores.
      • Before even doing a forward pass, we need to analyze the input and determine whether it lies within the manifold of the training data. If it's something new (or adversarial), we shouldn't send it down the ML pipeline because the results cannot be trusted.
      • During processes like preprocessing, we need to constantly observe what the model actually receives. For example, if the input has a bunch of unknown tokens, we need to flag the prediction because it may not be reliable.
      • After the forward pass we need to run checks on the model's output as well. If the predicted class has mediocre test set performance, then we need the class probability to be above some critical threshold; similarly, we can relax the threshold for classes where we do exceptionally well (a sketch of these checks appears right after this list).
    6. Wrap your model as an API.
      • Now we start to modularize larger operations (single/batch predict, get experiment details, etc.) so others can use our application without having to execute granular code. There are many options for this like Flask, Django, FastAPI, etc., but we'll use FastAPI for its ease of use and performance.
      • We can also use a Dockerfile to create a Docker image that runs our API. This is a great way to package our entire application to scale it (horizontally and vertically) depending on requirements and usage.
    7. Create an interactive frontend for your application.
      • The best way to showcase your work is to let others easily play with it. We'll be using Streamlit to very quickly create an interactive medium for our application and use Heroku to serve it (1000 hours of usage per month).
      • This is also a great skill to have because in industry you'll often need to build interactive demos for key stakeholders, and they make great additions to your documentation as well.
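
Below is a minimal sketch of the event logging described in step 3, assuming only the built-in logging module; the repo's real logger is configured via logging.json, so the handler setup, function name and threshold here are illustrative assumptions.

import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logger = logging.getLogger(__name__)

def log_prediction(text, predicted_class, probability):
    # Log each input/output pair so unreliable predictions can be traced later.
    logger.info("input=%r prediction=%s probability=%.3f", text, predicted_class, probability)
    if probability < 0.5:  # illustrative threshold
        logger.warning("low-confidence prediction for input=%r", text)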
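
And a minimal sketch of the prediction-time checks from step 5: flag inputs with too many unknown tokens and apply per-class probability thresholds. The vocabulary handling, class names and threshold values are illustrative assumptions, not the repo's actual configuration.

import numpy as np

# Per-class thresholds: stricter for weaker classes, relaxed for strong ones (illustrative).
CLASS_THRESHOLDS = {"Sports": 0.5, "World": 0.75}
DEFAULT_THRESHOLD = 0.75
UNK_RATIO_LIMIT = 0.3  # flag inputs where too many tokens are out-of-vocabulary

def validate_prediction(tokens, vocab, probabilities, classes):
    # Input check: too many unknown tokens means the prediction may not be reliable.
    unk_ratio = sum(1 for token in tokens if token not in vocab) / max(len(tokens), 1)
    if unk_ratio > UNK_RATIO_LIMIT:
        return {"reliable": False, "reason": "too many unknown tokens"}
    # Output check: require the class probability to clear its class-specific threshold.
    index = int(np.argmax(probabilities))
    predicted_class = classes[index]
    if probabilities[index] < CLASS_THRESHOLDS.get(predicted_class, DEFAULT_THRESHOLD):
        return {"reliable": False, "reason": "probability below class threshold"}
    return {"reliable": True, "class": predicted_class, "probability": float(probabilities[index])}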

Set up

virtualenv -p python3.6 venv
source venv/bin/activate
pip install -r requirements.txt
pip install torch==1.4.0

Download embeddings

python text_classification/utils.py

Training

python text_classification/train.py \
    --data-url https://raw.githubusercontent.com/madewithml/lessons/master/data/news.csv --lower --shuffle --use-glove

Endpoints

uvicorn text_classification.app:app --host 0.0.0.0 --port 5000 --reload
GOTO: http://localhost:5000/docs
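
The real endpoints live in text_classification/app.py. Purely for orientation, here is a hedged sketch of what a FastAPI /predict endpoint accepting the payload shown below could look like; the schema and handler names are assumptions, not the repo's actual code.

from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class InputText(BaseModel):
    text: str

class PredictPayload(BaseModel):
    inputs: List[InputText]

@app.post("/predict")
def predict(payload: PredictPayload):
    # Stand-in response; the real app would run the trained model here.
    return {"results": [{"input": item.text, "predicted_class": "unknown"} for item in payload.inputs]}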

Prediction

Scripts

python text_classification/predict.py --text 'The Canadian government officials proposed the new federal law.'

cURL

curl "http://localhost:5000/predict" \
    -X POST -H "Content-Type: application/json" \
    -d '{
            "inputs":[
                {
                    "text":"The Wimbledon tennis tournament starts next week!"
                },
                {
                    "text":"The Canadian government officials proposed the new federal law."
                }
            ]
        }' | json_pp

Requests

import json
import requests

headers = {
    'Content-Type': 'application/json',
}

data = {
    "experiment_id": "latest",
    "inputs": [
        {
            "text": "The Wimbledon tennis tournament starts next week!"
        },
        {
            "text": "The Canadian minister signed in the new federal law."
        }
    ]
}

response = requests.post('http://0.0.0.0:5000/predict',
                         headers=headers, data=json.dumps(data))
results = json.loads(response.text)
print(json.dumps(results, indent=2, sort_keys=False))

Streamlit

streamlit run text_classification/streamlit.py
GOTO: http://localhost:8501
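
The actual UI lives in text_classification/streamlit.py. As a rough sketch (assuming the API from the Endpoints section is running locally on port 5000), a minimal Streamlit frontend calling the prediction endpoint could look like this:

import json

import requests
import streamlit as st

st.title("Text classification")
text = st.text_input("Enter text to classify")

if st.button("Predict") and text:
    # Call the FastAPI endpoint started in the Endpoints section above.
    response = requests.post(
        "http://localhost:5000/predict",
        headers={"Content-Type": "application/json"},
        data=json.dumps({"inputs": [{"text": text}]}),
    )
    st.json(response.json())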

Tests

pytest
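
For illustration, a sanity-check test in the spirit of the ones under tests/ might look like the following; preprocess() is a hypothetical stand-in for whatever text_classification/data.py actually exposes.

# tests/test_data.py (illustrative only)
def preprocess(text, lower=True):
    # Hypothetical stand-in for the repo's real preprocessing function.
    return text.lower() if lower else text

def test_preprocess_lowercases_text():
    assert preprocess("The Wimbledon tennis tournament starts next week!") == \
        "the wimbledon tennis tournament starts next week!"

def test_preprocess_preserves_case_when_disabled():
    assert preprocess("Federal Law", lower=False) == "Federal Law"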

Docker

  1. Build image
docker build -t text-classification:latest -f Dockerfile .
  2. Run container
docker run -d -p 5000:5000 -p 6006:6006 --name text-classification text-classification:latest

Heroku

Set `WANDB_API_KEY` as an environment variable.
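
As a minimal sketch (assuming the standard wandb client), the key can then be read from the environment at startup; wandb also picks up WANDB_API_KEY automatically, so the explicit login call is optional.

import os

import wandb

# WANDB_API_KEY is read from the environment (e.g. a Heroku config var).
wandb.login(key=os.environ["WANDB_API_KEY"])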

Directory structure

text-classification/
├── datasets/                           - datasets
├── logs/                               - directory of log files
|   ├── errors/                         - error log
|   └── info/                           - info log
├── tests/                              - unit tests
├── text_classification/                - ml scripts
|   ├── app.py                          - app endpoints
|   ├── config.py                       - configuration
|   ├── data.py                         - data processing
|   ├── models.py                       - model architectures
|   ├── predict.py                      - prediction script
|   ├── streamlit.py                    - streamlit app
|   ├── train.py                        - training script
|   └── utils.py                        - load embeddings and utilities
├── wandb/                              - wandb experiment runs
├── .dockerignore                       - files to ignore on docker
├── .gitignore                          - files to ignore on git
├── CODE_OF_CONDUCT.md                  - code of conduct
├── CODEOWNERS                          - code owner assignments
├── CONTRIBUTING.md                     - contributing guidelines
├── Dockerfile                          - dockerfile to containerize app
├── LICENSE                             - license description
├── logging.json                        - logger configuration
├── Procfile                            - process script for Heroku
├── README.md                           - this README
├── requirements.txt                    - requirements
├── setup.sh                            - streamlit setup for Heroku
└── sweeps.yaml                         - hyperparameter wandb sweeps config

Overfit to small subset

python text_classification/train.py \
    --data-url https://raw.githubusercontent.com/madewithml/lessons/master/data/news.csv --lower --shuffle --data-size 0.1 --num-epochs 3

Experiments

  1. Random, unfrozen embeddings
python text_classification/train.py \
    --data-url https://raw.githubusercontent.com/madewithml/lessons/master/data/news.csv --lower --shuffle
  2. GloVe, frozen embeddings
python text_classification/train.py \
    --data-url https://raw.githubusercontent.com/madewithml/lessons/master/data/news.csv --lower --shuffle --use-glove --freeze-embeddings
  3. GloVe, unfrozen embeddings
python text_classification/train.py \
    --data-url https://raw.githubusercontent.com/madewithml/lessons/master/data/news.csv --lower --shuffle --use-glove

Next steps

End-to-end topics that will be covered in subsequent lessons.

  • Utilizing wrappers like PyTorch Lightning to structure the modeling even more while getting some very useful utility.
  • Data / model version control (Artifacts, DVC, MLFlow, etc.)
  • Experiment tracking options (MLFlow, KubeFlow, WandB, Comet, Neptune, etc.)
  • Hyperparameter tuning options (Optuna, Hyperopt, Sweeps)
  • Multi-process data loading
  • Dealing with imbalanced datasets
  • Distributed training for much larger models
  • GitHub actions for automatic testing during commits
  • Prediction fail safe techniques (input analysis, class-specific thresholds, etc.)

Helpful docker commands

• Build image

docker build -t madewithml:latest -f Dockerfile .

• Run container if using CMD ["python", "app.py"] or ENTRYPOINT [ "/bin/sh", "entrypoint.sh"]

docker run -p 5000:5000 --name madewithml madewithml:latest

• Get inside container if using CMD ["/bin/bash"]

docker run -p 5000:5000 -it madewithml /bin/bash

• Run container with mounted volume

docker run -p 5000:5000 -v $PWD:/root/madewithml/ --name madewithml madewithml:latest

• Other flags

-d: detached
-ti: interactive terminal

• Clean up

docker stop $(docker ps -a -q)     # stop all containers
docker rm $(docker ps -a -q)       # remove all containers
docker rmi $(docker images -a -q)  # remove all images