A multi-tenant multi-client scalable product categorising demo stack

Overview

Better Categories 4All: A multi-tenant multi-client product categorising stack

The steps to reproduce training and inference are at the end of this file; sorry for the long explanation.


Problem scope

We want to create a full product categorization stack for multiple clients. For each client and each product, we want to find the 5 most suitable categories.

Project structure

The project is split into two layers:

  • ML layer: the Python package for training and serving models. It's a pipenv-based project; the Pipfile includes all required dependencies. The Python environment generated by pipenv is used to run training/inference and also the unit tests. The code is generic across all clients.
  • Orchestration layer: the Airflow DAGs for training and prediction. Each client has its own training DAG and its own prediction DAG. These DAGs use the Airflow BashOperator to execute training and prediction inside the pipenv environment, as sketched below.
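
Here is a minimal sketch of what one rendered per-client training DAG could look like; the DAG id, schedule, and exact bash command are illustrative assumptions, not the project's actual generated code:

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="client_1_training",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    train = BashOperator(
        task_id="train_model",
        bash_command=(
            "pipenv run python categories_classification_cli.py trainer"
            " --client_id client_1"
            " --features '[\"feature_0\", \"feature_1\"]'"
            " --model_params '{\"n_estimators\": 100}'"
            " --training_date {{ ds }}"
        ),
    )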


Why one DAG per client instead of a single DAG for all clients?

We could have a single DAG that trains all clients, with each client getting its own training task inside that DAG. I chose instead to build a separate DAG for each client. Several reasons motivated this decision:

  • In my past experience, individual clients may have problems with their data, and a DAG per client is more practical when it comes to day-to-day monitoring.
  • New clients may come and others may leave, and we would end up with a single DAG that keeps adding new tasks and losing others, which is against Airflow best practices.
  • It makes more sense to have one failed DAG and 99 other successful DAGs than a single DAG failing all the time because some random client's training fails each day.

Training

In this part, we will train a classification model for each client.

Training package

The package categories_classification includes a training function train_model. It takes the following inputs:

  • client_id: the id of the client in the training dataset
  • features: a list of feature names to use in training
  • model_params: a dict of params to be passed to the model's Python class
  • training_date: the execution date of the training, used to track the training run

The chosen model is scikit-learn's random forest implementation, sklearn.ensemble.RandomForestClassifier. For the sake of simplicity, we didn't fine-tune the model parameters, but optimal params can be set in the config.
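
As a rough sketch, a direct Python call to train_model could look like this; the values below are placeholders, not real client config:

from categories_classification import train_model

# Placeholder values for illustration; real values come from the client config.
train_model(
    client_id="client_1",
    features=["feature_0", "feature_1", "feature_2"],
    model_params={"n_estimators": 100, "max_depth": 10},
    training_date="2021-01-01",
)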

In addition to the train_model function, a CLI binary is provided so training can be run directly from the command line. The trainer command runs the training:

pipenv run python categories_classification_cli.py trainer --help

Usage: categories_classification_cli.py trainer [OPTIONS]

Options:
  --client_id TEXT      The id of the client.  [required]
  --features TEXT       The list of input features.  [required]
  --model_params TEXT   Params to be passed to model.  [required]
  --training_date TEXT  The training date.  [required]
  --help                Show this message and exit.

Data and model paths

All data is stored under a common base path retrieved from the environment variable DATA_PREFIX (default: ./data). Given a client id, training data is loaded from $DATA_PREFIX/train/client_id=<client_id>/data_train.csv.gz.
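
A minimal sketch of this loading convention, assuming pandas (the helper name is hypothetical):

import os

import pandas as pd

DATA_PREFIX = os.environ.get("DATA_PREFIX", "./data")

def load_training_data(client_id: str) -> pd.DataFrame:
    # Hive-style partitioned path: $DATA_PREFIX/train/client_id=<client_id>/data_train.csv.gz
    path = os.path.join(DATA_PREFIX, "train", f"client_id={client_id}", "data_train.csv.gz")
    return pd.read_csv(path, compression="gzip")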

Splitting data

Before training, the data is split into a training set and a test set. The training set is used to fit the model, while the test set is used to evaluate it after training. The evaluation score is logged.
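
A sketch of that split-train-evaluate flow, given a feature matrix X and labels y; the test size and random seed are assumptions:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Assumed split ratio and seed, for illustration only.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(**model_params)
model.fit(X_train, y_train)

# Evaluation score on the held-out test set, logged to the tracking run.
score = model.score(X_test, y_test)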

Model tracking and versioning

The whole training event is tracked in Mlflow as a training run. Each client has its own experiment and its own model name following the convention "<client_id>_model". The tracking process also saves metrics and model parameters in the same run's metadata.

Finally, the model is saved in the Mlflow Registry under the name "<client_id>_model". Saving the model means a new model version is created in Mlflow, as the same model may have multiple versions.
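
A minimal sketch of that tracking and registration step, assuming the mlflow.sklearn flavour (the experiment name and metric key are assumptions):

import mlflow
import mlflow.sklearn

mlflow.set_experiment(client_id)  # assumed: one experiment per client
with mlflow.start_run():
    mlflow.log_params(model_params)
    mlflow.log_metric("test_score", score)
    # Registering under "<client_id>_model" creates a new model version.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name=f"{client_id}_model",
    )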

Prediction

In this part, we will predict product categories using the previously trained model.

Prediction package

The package categories_classification includes a prediction function predict_categories. It takes the following inputs:

  • client_id: the id of the client
  • inference_date: an inference execution date, used to version the output categories

The prediction is done through Spark so that it can scale to big datasets. The prediction dataset is loaded into a Spark DataFrame. We use Mlflow to get the latest model version and load the latest model. The model is then broadcast in Spark so that it is available on the Spark workers. To apply the model to the prediction dataset, I use a Spark 3.0 experimental feature called mapInPandas. This DataFrame method maps an iterator of batches (pandas DataFrames) through a user-defined prediction function that also outputs pandas DataFrames. This is made efficient by PyArrow's data transfer between the Spark JVM and the Python pandas runtime.

Prediction function

The advantage of mapInPandas compared to a classic pandas_udf is that we can output more rows than we receive as input. Thus, for each product, we can output the 5 predicted categories with their probabilities, ranked from 0 to 4. The predicted labels are then persisted to the filesystem as a parquet dataset, as sketched below.
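
A minimal sketch of that mapInPandas step; the column names, broadcast variable, output schema, and paths are assumptions for illustration:

import numpy as np
import pandas as pd

def predict_batches(batches):
    model = broadcast_model.value  # model broadcast to the Spark workers
    for batch in batches:
        probas = model.predict_proba(batch[features])
        # Indices of the 5 most probable categories, best first.
        top5 = np.argsort(probas, axis=1)[:, ::-1][:, :5]
        rows = []
        for i, product_id in enumerate(batch["product_id"]):
            for rank, cat_idx in enumerate(top5[i]):
                rows.append({
                    "product_id": product_id,
                    "category": model.classes_[cat_idx],
                    "probability": float(probas[i, cat_idx]),
                    "rank": rank,
                })
        yield pd.DataFrame(rows)

# output_schema describes the 4 columns produced above.
predictions = products_df.mapInPandas(predict_batches, schema=output_schema)
predictions.write.parquet(output_path)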

Model version retrieval

Before loading the model, we use Mlflow to get the latest version of the model. In a production system, we would probably want to push the model to staging and verify its metrics or validate it before promoting it to production. Here, we assume we are working within a single stage: we use MlflowClient to connect to the Mlflow Registry and get the latest model version. The version is then used to build the latest model URI.
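
A sketch of that retrieval, assuming a single stage (no staging/production promotion):

import mlflow.sklearn
from mlflow.tracking import MlflowClient

client = MlflowClient()
model_name = f"{client_id}_model"

# Latest registered version, ignoring stage transitions for this demo.
latest_version = client.get_latest_versions(model_name)[0].version
model_uri = f"models:/{model_name}/{latest_version}"
model = mlflow.sklearn.load_model(model_uri)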

Reproducing training and inference

Pipenv initialization

First, check that you have pipenv installed locally; otherwise you can install it with pip install pipenv.

Then you need to initialize the pipenv environment with the following command:

make init-pipenv

This may take some time as it will install all required dependencies. Once done, you can run the linter (pylint) and the unit tests:

make lint
make unit-tests

Airflow/Mlflow initialization

You also need to initialize the local Airflow stack: this builds a custom Airflow Docker image that includes the pipenv environment, builds the Mlflow image, and initializes the Airflow database.

make init-airflow

Generate DAGs

Airflow DAGs need to be generated using the config file conf/clients_config.yaml. It is already populated for the 10 example client datasets, but you can add new clients or change the current configuration. For each client, you must include the list of features and optional model params.
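
A hypothetical shape for one client entry, just to illustrate (the actual keys and layout of conf/clients_config.yaml may differ):

clients:
  - client_id: client_1
    features:
      - feature_0
      - feature_1
      - feature_2
    model_params:
      n_estimators: 100
      max_depth: 10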

Then, you can generate DAGs using the following command:

make generate-dags

This will call the script scripts/generate_dags.py, which will (see the sketch after this list):

  • load the training and inference DAG templates from dags_templates (they are Jinja2 templates)
  • load the conf from conf/clients_config.yaml
  • render a DAG for each client and each template
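
A minimal sketch of that rendering loop, assuming the config shape shown earlier; the actual scripts/generate_dags.py may differ:

import yaml
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("dags_templates"))
with open("conf/clients_config.yaml") as conf_file:
    config = yaml.safe_load(conf_file)

for template_name in env.list_templates():
    template = env.get_template(template_name)
    for client in config["clients"]:
        # One rendered DAG file per (client, template) pair.
        dag_source = template.render(**client)
        output_name = "dags/" + client["client_id"] + "_" + template_name.replace(".j2", ".py")
        with open(output_name, "w") as dag_file:
            dag_file.write(dag_source)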

Start local Airflow

You can start the local Airflow stack with the following command:

make start-airflow

Once all services have started, you can go to your browser and visit:

  • the Airflow UI at http://localhost:8080
  • the Mlflow UI at http://localhost:5000

Run training and inference

In Airflow, all DAGs are disabled by default. To run training for a client, enable its training DAG; it will immediately trigger the training.

Once the model is in Mlflow, you can enable the inference DAG, and it will immediately trigger a prediction.

Inspect results

To inspect the results, you can run a local Jupyter server with:

make run-jupyter

Then open the notebook inspect_inference_result.ipynb and run it to check the prediction output.
