TrainingDashboard

A no-BS, dead-simple training visualizer for tf-keras

Overview

Plot inter-epoch and intra-epoch loss and metrics within a Jupyter notebook with a simple callback. Features:

  • Plots the training loss and a training metric, updated at the end of each batch
  • Plots training and validation losses, updated at the end of each epoch
  • For each metric, plots training and validation values, updated at the end of each epoch
  • Tabulates losses and metrics (both train and validation) and highlights the highest and lowest values in each column

Why should I use this over TensorBoard?
This is way simpler to use.

What about livelossplot?
AFAIK, livelossplot does not support intra-epoch loss/metric plotting. Also, TrainingDashboard uses bqplot for plotting, which supports much richer interactive elements such as tooltips (currently a TODO). On the other hand, livelossplot is a much more mature project, so use it if it better fits your use case.

Installation

TrainingDashboard can be installed from PyPI with the following command:

pip install training-dashboard

Alternatively, you can clone this repository and run the following command from the root directory:

pip install .
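
The dashboard renders through bqplot (which builds on ipywidgets) inside Jupyter, so the plots only appear where Jupyter widgets work. On recent Jupyter versions the widget extensions are enabled automatically; on older classic-notebook setups you may need to enable them yourself. A hedged sketch, assuming such an older setup (and that bqplot was not already pulled in as a dependency):

pip install ipywidgets bqplot
jupyter nbextension enable --py widgetsnbextension
jupyter nbextension enable --py bqplot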

Usage

TrainingDashboard is a tf-keras callback and should be used as such. It takes the following optional arguments:

  • validation (bool): whether validation data is being used or not
  • min_loss (float): the minimum possible value of the loss function, to fix the lower bound of the y-axis
  • max_loss (float): the maximum possible value of the loss function, to fix the upper bound of the y-axis
  • metrics (list): list of metrics that should be considered for plotting
  • min_metric_dict (dict): dictionary mapping each (or a subset) of the metrics to their minimum possible value, to fix the lower bound of the y-axis
  • max_metric_dict (dict): dictionary mapping each (or a subset) of the metrics to their maximum possible value, to fix the upper bound of the y-axis
  • batch_step (int): step size for plotting the results within each epoch. If the time to process each batch is very small, plotting at each step may cause the training to slow down significantly. In such cases, it is advisable to skip a few batches between each update.

A minimal example:

from training_dashboard import TrainingDashboard
model.fit(X,
          Y,
          epochs=10,
          callbacks=[TrainingDashboard()])

or, a more elaborate example:

from training_dashboard import TrainingDashboard
dashboard = TrainingDashboard(validation=True,  # because we are using validation data and want to track its metrics
                              min_loss=0,  # we want the loss axes to be fixed on the lower end
                              metrics=["accuracy", "auc"],  # metrics that we want plotted
                              batch_step=10,  # plot every 10th batch
                              min_metric_dict={"accuracy": 0, "auc": 0},  # minimum possible value for metrics used
                              max_metric_dict={"accuracy": 1, "auc": 1})  # maximum possible value for metrics used
model.fit(x_train,
          y_train,
          batch_size=512,
          epochs=25,
          verbose=1,
          validation_split=0.2,
          callbacks=[dashboard])
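
The names passed to metrics are assumed to match the metric names that tf-keras reports in its training logs (with validation values prefixed by val_), i.e. the names set when compiling the model. As an illustrative sketch only (the optimizer, loss, and metric objects below are placeholder assumptions, not part of this project), the elaborate example above would pair with a compile call along these lines:

from tensorflow import keras

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              # "accuracy" and an AUC metric named "auc" produce the log keys
              # that the dashboard above is asked to plot
              metrics=["accuracy", keras.metrics.AUC(name="auc")])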

For a more detailed example, check mnist_example.ipynb inside the examples folder.

Support

Reach out to me at one of the following places!

Twitter: @vibhuagrawal
Email: vibhu[dot]agrawal14[at]gmail

License

The project is distributed under the MIT License.
