🚀 An end-to-end ML application using PyTorch, W&B, FastAPI, Docker, Streamlit and Heroku


Creating an End-to-End ML Application w/ PyTorch

🚀 This project was created using the Made With ML boilerplate template. Check it out to start creating your own ML applications.

Overview

  • Why do we need to build end-to-end applications?
    • By building e2e applications, you ensure that your code is organized, tested, interactive, and easy to scale up or integrate into larger pipelines.
    • If you're someone in industry looking to showcase your work to future employers, it's no longer enough to just have code in Jupyter notebooks. ML is just another tool, and you need to show that you can use it in conjunction with all the other software engineering disciplines (frontend, backend, devops, etc.). The perfect way to do this is to create end-to-end applications that utilize all these different facets.
  • What are the components of an end-to-end ML application?
    1. Basic experimentation in Jupyter notebooks.
      • We aren't going to completely dismiss notebooks because they're still a great tool for iterating quickly. Check out the notebook for our task here → notebook
    2. Moving our code from notebooks to organized scripts.
      • Once we've done some basic development (on downsized datasets), we want to move our code into scripts to reduce technical debt. We'll create functions and classes for different parts of the pipeline (data, model, train, etc.) so we can easily make them robust for different circumstances.
      • We used our own boilerplate to organize our code before moving any of the code from our notebook.
    3. Proper logging and testing for your code.
      • Log key events (preprocessing, training performance, etc.) using the built-in logging library. Also log inputs and outputs during prediction so you can catch issues with new data.
      • You also need to properly test your code. You will add and update your functions and their tests over time but it's important to at least start testing crucial pieces of your code from the beginning. These typically include sanity checks with preprocessing and modeling functions to catch issues early. There are many options for testing Python code but we'll use pytest here.
    4. Experiment tracking.
      • We use Weights and Biases (WandB), where you can easily track all the metrics of your experiment, config files, performance details, etc. for free. Check out the Dashboards page for an overview and tutorials.
      • When you're developing your models, start with simple approaches first and then slowly add complexity. You should clearly document (README, articles and WandB reports) and save your progression from simple to more complex models so your audience can see the improvements. The ability to write well and document your thinking process is a core skill to have in research and industry.
      • WandB also has free tools for hyperparameter tuning (Sweeps) and for data/pipeline/model management (Artifacts).
    5. Robust prediction pipelines.
      • When you actually deploy an ML application for real-world use, you can't just look at the softmax scores.
      • Before even doing any forward pass, we need to analyze the input and deem if it's within the manifold of the training data. If it's something new (or adversarial) we shouldn't send it down the ML pipeline because the results cannot be trusted.
      • During processes like preprocessing, we need to constantly observe what the model actually received. For example, if the input contains many unknown tokens, then we need to flag the prediction because it may not be reliable.
      • After the forward pass we need to test the model's output as well. If the predicted class has mediocre test-set performance, then we should require its class probability to exceed a higher threshold. Similarly, we can relax the threshold for classes where we do exceptionally well (see the sketch after this list).
    6. Wrap your model as an API.
      • Now we start to modularize larger operations (single/batch predict, get experiment details, etc.) so others can use our application without having to execute granular code. There are many options for this, like Flask, Django and FastAPI, but we'll use FastAPI for its ease of use and performance.
      • We can also use a Dockerfile to create a Docker image that runs our API. This is a great way to package our entire application to scale it (horizontally and vertically) depending on requirements and usage.
    7. Create an interactive frontend for your application.
      • The best way to showcase your work is to let others easily play with it. We'll be using Streamlit to very quickly create an interactive medium for our application and use Heroku to serve it (1000 hours of usage per month).
      • This is also a great skill to have because in industry you'll often need to build interactive demos for key stakeholders, and they're great to include in documentation as well.
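To make the class-specific thresholds from step 5 concrete, here is a minimal sketch; the class names, threshold values, and function are illustrative assumptions rather than the project's actual prediction pipeline (which lives in text_classification/predict.py).

import numpy as np

# Illustrative thresholds: stricter for classes with weaker test-set
# performance, more relaxed for classes the model predicts reliably.
PER_CLASS_THRESHOLDS = {"Sports": 0.5, "World": 0.75}

def trustworthy_prediction(probabilities, classes, default_threshold=0.6):
    """Return the predicted class, or None if its probability is below that class's threshold."""
    best = int(np.argmax(probabilities))
    predicted_class = classes[best]
    threshold = PER_CLASS_THRESHOLDS.get(predicted_class, default_threshold)
    if probabilities[best] < threshold:
        return None  # flag for review / fallback instead of trusting the model
    return predicted_class

For example, trustworthy_prediction([0.55, 0.45], ["World", "Sports"]) returns None in this sketch because "World" requires a probability of at least 0.75.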

Set up

virtualenv -p python3.6 venv
source venv/bin/activate
pip install -r requirements.txt
pip install torch==1.4.0

Download embeddings

python text_classification/utils.py

Training

python text_classification/train.py \
    --data-url https://raw.githubusercontent.com/madewithml/lessons/master/data/news.csv --lower --shuffle --use-glove

Endpoints

uvicorn text_classification.app:app --host 0.0.0.0 --port 5000 --reload
GOTO: http://localhost:5000/docs
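For reference, a stripped-down sketch of what a /predict endpoint along these lines can look like; the actual endpoints live in text_classification/app.py, and the schema and model loading below are simplified assumptions.

from typing import Dict, List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictPayload(BaseModel):
    inputs: List[Dict[str, str]]  # e.g. [{"text": "..."}]

@app.post("/predict")
def predict(payload: PredictPayload):
    # The real app loads the trained model and tokenizer from the latest
    # (or requested) experiment run before running inference.
    predictions = [{"text": item["text"], "label": "placeholder"} for item in payload.inputs]
    return {"predictions": predictions}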

Prediction

Scripts

python text_classification/predict.py --text 'The Canadian government officials proposed the new federal law.'

cURL

curl "http://localhost:5000/predict" \
    -X POST -H "Content-Type: application/json" \
    -d '{
            "inputs":[
                {
                    "text":"The Wimbledon tennis tournament starts next week!"
                },
                {
                    "text":"The Canadian government officials proposed the new federal law."
                }
            ]
        }' | json_pp

Requests

import json
import requests

headers = {
    'Content-Type': 'application/json',
}

data = {
    "experiment_id": "latest",
    "inputs": [
        {
            "text": "The Wimbledon tennis tournament starts next week!"
        },
        {
            "text": "The Canadian minister signed in the new federal law."
        }
    ]
}

response = requests.post('http://0.0.0.0:5000/predict',
                         headers=headers, data=json.dumps(data))
results = json.loads(response.text)
print(json.dumps(results, indent=2, sort_keys=False))

Streamlit

streamlit run text_classification/streamlit.py
GOTO: http://localhost:8501
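As a rough idea of the interactive layer, a hedged sketch of a Streamlit page that calls the API; the real UI is in text_classification/streamlit.py and will differ.

import json

import requests
import streamlit as st

st.title("Text classification")
text = st.text_input("Enter text to classify",
                     "The Canadian government officials proposed the new federal law.")
if st.button("Predict"):
    # Assumes the FastAPI server from the Endpoints section is running locally.
    response = requests.post("http://localhost:5000/predict",
                             headers={"Content-Type": "application/json"},
                             data=json.dumps({"inputs": [{"text": text}]}))
    st.json(response.json())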

Tests

pytest
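As a starting point for the sanity checks mentioned in the overview, a hypothetical test in the pytest style; the helper below is a stand-in, not the project's actual preprocessing function.

# tests/test_data.py (illustrative)
import pytest

def preprocess(text, lower=True):
    # Stand-in for the real preprocessing in text_classification/data.py.
    return text.strip().lower() if lower else text.strip()

@pytest.mark.parametrize("raw, expected", [
    ("  Hello World! ", "hello world!"),
    ("ALL CAPS", "all caps"),
])
def test_preprocess(raw, expected):
    assert preprocess(raw) == expected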

Docker

  1. Build image
docker build -t text-classification:latest -f Dockerfile .
  2. Run container
docker run -d -p 5000:5000 -p 6006:6006 --name text-classification text-classification:latest

Heroku

Set `WANDB_API_KEY` as an environment variable.
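For example, with the Heroku CLI installed (replace the placeholder with your own key):

heroku config:set WANDB_API_KEY=<your-api-key>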

Directory structure

text-classification/
├── datasets/                           - datasets
├── logs/                               - directory of log files
|   ├── errors/                           - error log
|   └── info/                             - info log
├── tests/                              - unit tests
├── text_classification/                - ml scripts
|   ├── app.py                            - app endpoints
|   ├── config.py                         - configuration
|   ├── data.py                           - data processing
|   ├── models.py                         - model architectures
|   ├── predict.py                        - prediction script
|   ├── streamlit.py                      - streamlit app
|   ├── train.py                          - training script
|   └── utils.py                          - load embeddings and utilities
├── wandb/                              - wandb experiment runs
├── .dockerignore                       - files to ignore on docker
├── .gitignore                          - files to ignore on git
├── CODE_OF_CONDUCT.md                  - code of conduct
├── CODEOWNERS                          - code owner assignments
├── CONTRIBUTING.md                     - contributing guidelines
├── Dockerfile                          - dockerfile to containerize app
├── LICENSE                             - license description
├── logging.json                        - logger configuration
├── Procfile                            - process script for Heroku
├── README.md                           - this README
├── requirements.txt                    - requirements
├── setup.sh                            - streamlit setup for Heroku
└── sweeps.yaml                         - hyperparameter wandb sweeps config

Overfit to a small subset

python text_classification/train.py \
    --data-url https://raw.githubusercontent.com/madewithml/lessons/master/data/news.csv --lower --shuffle --data-size 0.1 --num-epochs 3

Experiments

  1. Random, unfrozen embeddings
python text_classification/train.py \
    --data-url https://raw.githubusercontent.com/madewithml/lessons/master/data/news.csv --lower --shuffle
  2. GloVe, frozen embeddings
python text_classification/train.py \
    --data-url https://raw.githubusercontent.com/madewithml/lessons/master/data/news.csv --lower --shuffle --use-glove --freeze-embeddings
  3. GloVe, unfrozen embeddings
python text_classification/train.py \
    --data-url https://raw.githubusercontent.com/madewithml/lessons/master/data/news.csv --lower --shuffle --use-glove
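Each of these runs is tracked with Weights & Biases. As a hedged sketch of what that logging typically looks like (the project name, config keys, and metric values below are illustrative, not necessarily what train.py uses):

import wandb

wandb.init(project="text-classification", config={
    "lower": True,
    "use_glove": True,
    "freeze_embeddings": False,
    "num_epochs": 10,
})

for epoch in range(wandb.config.num_epochs):
    train_loss, val_loss = 0.45, 0.52  # placeholder values from a training loop
    wandb.log({"epoch": epoch, "train_loss": train_loss, "val_loss": val_loss})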

Next steps

End-to-end topics that will be covered in subsequent lessons.

  • Utilizing wrappers like PyTorch Lightning to structure the modeling even more while gaining useful built-in functionality.
  • Data / model version control (Artifacts, DVC, MLFlow, etc.)
  • Experiment tracking options (MLFlow, KubeFlow, WandB, Comet, Neptune, etc.)
  • Hyperparameter tuning options (Optuna, Hyperopt, Sweeps)
  • Multi-process data loading
  • Dealing with imbalanced datasets
  • Distributed training for much larger models
  • GitHub actions for automatic testing during commits
  • Prediction fail safe techniques (input analysis, class-specific thresholds, etc.)

Helpful docker commands

• Build image

docker build -t madewithml:latest -f Dockerfile .

• Run container if using CMD ["python", "app.py"] or ENTRYPOINT [ "/bin/sh", "entrypoint.sh"]

docker run -p 5000:5000 --name madewithml madewithml:latest

• Get inside container if using CMD ["/bin/bash"]

docker run -p 5000:5000 -it madewithml /bin/bash

• Run container with mounted volume

docker run -p 5000:5000 -v $PWD:/root/madewithml/ --name madewithml madewithml:latest

• Other flags

-d: detached
-ti: interactive terminal

• Clean up

docker stop $(docker ps -a -q)     # stop all containers
docker rm $(docker ps -a -q)       # remove all containers
docker rmi $(docker images -a -q)  # remove all images