TrainingDashboard

A no-BS, dead-simple training visualizer for tf-keras

Overview

Plot inter-epoch and intra-epoch losses and metrics within a Jupyter notebook with a simple callback. Features:

  • Plots the training loss and a training metric, updated at the end of each batch
  • Plots training and validation losses, updated at the end of each epoch
  • For each metric, plots training and validation values, updated at the end of each epoch
  • Tabulates losses and metrics (both train and validation) and highlights the highest and lowest values in each column

Why should I use this over TensorBoard?
This is way simpler to use.

What about livelossplot?
AFAIK, livelossplot does not support intra-epoch loss/metric plotting. Also, TrainingDashboard uses bqplot for plotting, which supports richer interactive elements such as tooltips (currently a TODO). On the other hand, livelossplot is a much more mature project, so use it if it better fits your use case.

Installation

TrainingDashboard can be installed from PyPI with the following command:

pip install training-dashboard

Alternatively, you can clone this repository and run the following command from the root directory:

pip install .
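
If you plan to modify the source, a standard pip editable install (a general pip feature, not specific to this project) keeps the installed package in sync with your local checkout:

pip install -e .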

Usage

TrainingDashboard is a tf-keras callback and should be used as such. It takes the following optional arguments:

  • validation (bool): whether validation data is being used or not
  • min_loss (float): the minimum possible value of the loss function, to fix the lower bound of the y-axis
  • max_loss (float): the maximum possible value of the loss function, to fix the upper bound of the y-axis
  • metrics (list): list of metrics that should be considered for plotting
  • min_metric_dict (dict): dictionary mapping each (or a subset) of the metrics to their minimum possible value, to fix the lower bound of the y-axis
  • max_metric_dict (dict): dictionary mapping each (or a subset) of the metrics to their maximum possible value, to fix the upper bound of the y-axis
  • batch_step (int): step size for plotting the results within each epoch. If the time to process each batch is very small, plotting at every batch may slow training down significantly; in such cases, it is advisable to skip a few batches between updates. For example, with 60,000 training samples and batch_size=512 (about 118 batches per epoch), batch_step=10 updates the plots roughly 12 times per epoch.

Basic usage:

from training_dashboard import TrainingDashboard
model.fit(X,
          Y,
          epochs=10,
          callbacks=[TrainingDashboard()])

or, a more elaborate example:

from training_dashboard import TrainingDashboard
dashboard = TrainingDashboard(validation=True, # because we are using validation data and want to track its metrics
                              min_loss=0, # we want the loss axes to be fixed on the lower end
                              metrics=["accuracy", "auc"], # metrics that we want plotted
                              batch_step=10, # plot every 10th batch
                              min_metric_dict={"accuracy": 0, "auc": 0}, # minimum possible value for metrics used
                              max_metric_dict={"accuracy": 1, "auc": 1}) # maximum possible value for metrics used
model.fit(x_train,
          y_train,
          batch_size=512,
          epochs=25,
          verbose=1,
          validation_split=0.2,
          callbacks=[dashboard])
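
The snippets above assume a compiled model already exists and pass validation data through validation_split. For reference, here is a self-contained sketch that passes validation data explicitly instead; the toy data, model architecture, and hyperparameters are illustrative assumptions, not part of the library:

import numpy as np
import tensorflow as tf
from training_dashboard import TrainingDashboard

# Toy data standing in for a real dataset (illustrative only)
x_train = np.random.rand(10000, 784).astype("float32")
y_train = np.random.randint(0, 10, size=(10000,))
x_val = np.random.rand(2000, 784).astype("float32")
y_val = np.random.randint(0, 10, size=(2000,))

# A small fully-connected classifier, just to have something to train
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

dashboard = TrainingDashboard(validation=True,
                              min_loss=0,
                              metrics=["accuracy"],
                              min_metric_dict={"accuracy": 0},
                              max_metric_dict={"accuracy": 1},
                              batch_step=10)

model.fit(x_train,
          y_train,
          batch_size=512,
          epochs=5,
          validation_data=(x_val, y_val), # explicit validation data instead of validation_split
          callbacks=[dashboard])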

For a more detailed example, check mnist_example.ipynb inside the examples folder.

Support

Reach out to me at one of the following places!

Twitter: @vibhuagrawal
Email: vibhu[dot]agrawal14[at]gmail

License

This project is distributed under the MIT License.

Owner
Vibhu Agrawal