Keras Model Implementation Walkthrough

Overview

This guide has a companion blog post.

The Keras Model class is one of the centerpieces of the framework. It encapsulates metric tracking, callbacks, distribution, training loops, support for various input types, and a wide variety of other training-related behavior. As a result, the Model class contains a large volume of code that can be intimidating to sift through.

This guide walks through a simplified model implementation to help you understand what Model does under the hood. After following along with this guide, you will understand how the keras.Model class achieves the behavior listed above.

The SimplifiedModel can also serve as a starting point for those looking to implement custom models or training loops. A forkable template using the SimplifiedModel class can be found on my GitHub at https://github.com/lukewood/ModelWalkthrough.

For the sake of brevity, the class written in this guide subclasses keras.layers.Layer to leverage some helper functions, such as Keras' __call__ implementation.
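
As a quick demonstration of what Layer's __call__ provides: it builds the layer's weights on first use before delegating to call(). This snippet is purely illustrative and not part of SimplifiedModel:

import tensorflow as tf
import tensorflow.keras as keras

layer = keras.layers.Dense(1)
# __call__ infers the input dimension (10), creates the kernel and bias,
# then invokes call() on the inputs.
y = layer(tf.zeros((2, 10)))
print(layer.built)  # True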

Core Implementation

Let's create a basic implementation that supports compile(), fit(), predict(), and evaluate() before we introduce distribution strategies, metric tracking, callbacks, and other features.

Note that the SimplifiedModel class serves as a stand-in for keras.Model but subclasses keras.layers.Layer, overriding the call() method to produce predictions.

import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.optimizers as optimizers


def scale_loss_for_distribution(loss_value):
  """Scales an extra loss by the number of replicas, mirroring Keras'
  internal losses_utils.scale_loss_for_distribution helper."""
  num_replicas = tf.distribute.get_strategy().num_replicas_in_sync
  if num_replicas > 1:
    loss_value *= (1. / num_replicas)
  return loss_value


class SimplifiedModel(keras.layers.Layer):
  """SimplifiedModel is a stripped down barebones version of keras.Model."""

  def __init__(self, *args, **kwargs):
    super(SimplifiedModel, self).__init__(*args, **kwargs)
    self.dense = keras.layers.Dense(1)
    self.distribute_strategy = tf.distribute.get_strategy()
  
  def compile(self,
              optimizer='rmsprop',
              loss=None,
              metrics=None,
              loss_weights=None):
    # Loss and metrics must be created within the strategy scope.
    with self.distribute_strategy.scope():
      self.optimizer = optimizers.get(optimizer)
      self.loss = loss
      metrics = metrics or []
      self.metrics_list = metrics if isinstance(metrics, list) else [metrics]

  def call(self, inputs):
    return self.dense(inputs)

  def predict_step(self, x):
    return self(x, training=False)

  def train_step(self, data):
    x, y = data
    # Run forward pass.
    with tf.GradientTape() as tape:
      y_pred = self(x, training=True)
      loss = self.loss(y, y_pred)
      for extra_loss in self.losses:
        loss += scale_loss_for_distribution(extra_loss)

    self.optimizer.minimize(loss, self.trainable_variables, tape=tape)

    return_metrics = {'loss': loss}
    for metric in self.metrics_list:
      metric.update_state(y, y_pred, None)
      result = metric.result()
      if isinstance(result, dict):
        return_metrics.update(result)
      else:
        return_metrics[metric.name] = result
    return return_metrics

  def fit(self, dataset: tf.data.Dataset, epochs=1, verbose=1):
    """This simplified version of fit only accepts a TensorFlow dataset.

    Args:
      dataset: tf.data.Dataset, must yield a tuple of inputs and a one-hot
        encoded vector containing labels
      epochs: number of passes to perform over the dataset
      verbose: verbosity of logging during fit
    """
    for epoch in range(epochs):
      for batch in dataset:
        metrics = self.train_step(batch)
        metric_str = ', '.join(
            [f'{metric_name}: {val}' for metric_name, val in metrics.items()])
        # Minimal progress logging implementation
        print(f'\repoch: ({epoch+1}/{epochs}), {metric_str}', end='')
    print()

  def test_step(self, x, y):
    y_pred = self(x, training=False)
    loss = self.loss(y, y_pred)
    for extra_loss in self.losses:
      loss += scale_loss_for_distribution(extra_loss)

    return_metrics = {
        'loss': loss,
    }

    for metric in self.metrics_list:
      metric.update_state(y, y_pred, None)
      result = metric.result()
      if isinstance(result, dict):
        return_metrics.update(result)
      else:
        return_metrics[metric.name] = result
    return return_metrics

  def evaluate(self, dataset):
    metrics_aggregate = []
    for xs, ys in dataset:
      # Reset metric state so each batch's results are independent.
      self.reset_metrics()
      metrics_aggregate.append(self.test_step(xs, ys))

    if not metrics_aggregate:
      raise ValueError('dataset must contain at least one batch of samples.  '
                       f'Received: {dataset}')

    result = {}
    for k in metrics_aggregate[0]:
      result[k] = 0.

    for metric_iter in metrics_aggregate:
      for k, v in metric_iter.items():
        result[k] += v / len(metrics_aggregate)
    return result

  def predict(self, dataset):
    result = []
    for xs in dataset:
      result.append(self(xs, training=False))
    return tf.concat(result, axis=0)

  def reset_metrics(self):
    for metric in self.metrics_list:
      metric.reset_state()

When you first construct a model, it exists in an uncompiled state: the optimizer, compiled metrics, and loss have not yet been created. The compile() method creates the optimizer, stores the loss function, and keeps a reference to the list of metrics. All of these are used later during training.
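
One detail worth noting: optimizers.get() accepts either a string identifier or an optimizer instance, which is what lets compile('sgd', ...) work. A quick demonstration:

import tensorflow.keras.optimizers as optimizers

opt = optimizers.get('rmsprop')              # string -> RMSprop with default settings
opt = optimizers.get(optimizers.Adam(1e-3))  # instances pass through unchanged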

train_step(), predict_step(), and test_step() each contain the logic to perform a single step of their corresponding methods: fit(), predict(), and evaluate(), respectively. Note that while predict_step() simply invokes call(), train_step() and test_step() also compute the loss and track metrics.
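
To make this concrete, here is a hypothetical direct invocation of train_step() on a single batch, assuming a model that has already been compiled and built as in the usage example below:

batch = (tf.zeros((8, 10)), tf.ones((8, 1)))
metrics = model.train_step(batch)
print(metrics)  # {'loss': <tf.Tensor ...>, 'mean_absolute_percentage_error': <tf.Tensor ...>}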

The simplified model class expects a tf.data.Dataset as an input to fit(), predict(), and evaluate(). The model offloads the batching behavior to the dataset. To use the model, you'd do something like this:

import numpy as np
import tensorflow.keras.losses as losses

model = SimplifiedModel()

x, y = np.zeros((1000, 10)), np.ones((1000, 1))
ds = tf.data.Dataset.from_tensor_slices((x, y))
# Our model expects the dataset to be batched
ds = ds.batch(10)

ds_test = tf.data.Dataset.from_tensor_slices((x))
ds_test = ds_test.batch(10)

model.compile('sgd', 
              loss=losses.MeanSquaredError(reduction=tf.keras.losses.Reduction.SUM), 
              metrics=[tf.keras.metrics.MeanAbsolutePercentageError()]
)
model.build(input_shape=(None, 10))
metrics = model.evaluate(ds)
print('Metrics Before Fit:', metrics)

model.fit(ds, epochs=10, verbose=2)

metrics = model.evaluate(ds)
print('Metrics After Fit:', metrics)
Metrics Before Fit: {'loss': ..., 'mean_absolute_percentage_error': ...}
epoch: (10/10), loss: 1.4210854715202004e-13, mean_absolute_percentage_error: 0.5994006395339966
Metrics After Fit: {'loss': ..., 'mean_absolute_percentage_error': ...}

This model class implements our expected behavior, but it's missing some critical logic that keras.Model implements.

Perhaps most notably, this model does not support distribution.

Batched Execution, Compiled train_step()

Currently our train_step() calls execute eagerly, one at a time: train_step() is not a compiled tf.function, and the model does not work with TensorFlow distribution strategies. In this section we will implement all of these performance enhancements and net significant performance gains.

First, we begin by wrapping train_step() in a compiled function:

class SimplifiedModel(keras.layers.Layer):
  # ...
  def make_train_function(self):
    if self.train_function:
      return self.train_function

    def step_function(model, iterator):

      def run_step(data):
        outputs = model.train_step(data)
        model._train_counter.assign_add(1)  # pylint: disable=protected-access
        return outputs

      data = next(iterator)
      outputs = model.distribute_strategy.run(run_step, args=(data,))
      return model.distribute_strategy.unwrap(outputs)[0]

    def train_function(iterator):
      """Runs a training execution with multiple steps."""
      # Autograph cannot infer the return type of undeclared non-Tensor
      # variables from inside loops. The limitations documentation explains this
      # https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/limitations.md
      names = [m.name for m in self.metrics_list] + ['loss']
      outputs = dict.fromkeys(names, 0.)
      for _ in tf.range(self.steps_per_execution):
        outputs = step_function(self, iterator)
      return outputs

    train_function = tf.function(train_function, experimental_relax_shapes=True)

    # A separate function is needed to prevent self-referential
    # infinitely-recursive closures
    cluster_train_function = None
    if self._cluster_coordinator:
      # pylint: disable=g-long-lambda
      cluster_train_function = lambda it: self._cluster_coordinator.schedule(
          train_function, args=(it,))

    self.train_function = cluster_train_function or train_function
    return self.train_function

This function also runs train_step() through the model's provided distribution strategy: strategy.run() dispatches the step to each replica, and unwrap() extracts a single replica's results for logging.
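
If you haven't used distribution strategies before, strategy.run() executes a function once per replica and returns per-replica values. A minimal sketch, using the default (single-replica) strategy:

strategy = tf.distribute.get_strategy()  # default no-op strategy

def add_one(x):
  return x + 1

# Runs add_one on every replica; with the default strategy there is one replica.
per_replica = strategy.run(add_one, args=(tf.constant(1),))
print(strategy.experimental_local_results(per_replica))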

Next, we need to update our fit() method to utilize this new train function:

  def fit(self, dataset: tf.data.Dataset, epochs=1, verbose=1,
          steps_per_epoch=sys.maxsize):
    """This simplified version of fit only accepts a TensorFlow dataset.

    Args:
      dataset: tf.data.Dataset, must yield a tuple of inputs and a one-hot
        encoded vector containing labels
      epochs: number of passes to perform over the dataset
      verbose: verbosity of logging during fit
      steps_per_epoch: number of steps that counts as an epoch, useful with
        endless datasets.  When using a finite dataset, leave as sys.maxsize.
    """
    if self.distribute_strategy._should_use_with_coordinator:  # pylint: disable=protected-access
      self._cluster_coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(
          self.distribute_strategy)
      dataset = self._cluster_coordinator.create_per_worker_dataset(dataset)

    self.make_train_function()
    for epoch in range(epochs):
      iterator = iter(dataset)
      for step in range(0, steps_per_epoch, self.steps_per_execution):
        try:
          # returns {'loss': loss, 'metric1': val1, ...}
          metrics = self.train_function(iterator)
          metric_str = ', '.join(
              [f'{metric_name}: {val}' for metric_name, val in metrics.items()])
          print(f'\repoch: ({epoch+1}/{epochs}), {metric_str}', end='')
        except tf.errors.OutOfRangeError:
          break
      print()

Next, we will implement computation batching. By batching computation, we reduce the number of round trips between the Python host and the device executing the computation. In the Keras Model class this is controlled by the steps_per_execution parameter passed to the compile() method.

First, we need to update compile() to include this new parameter:

  def compile(self,
              optimizer='rmsprop',
              loss=None,
              metrics=None,
              loss_weights=None,
              weighted_metrics=None,
              steps_per_execution=1):
    # We need to compile the loss and metrics within the strategy scope
    with self.distribute_strategy.scope():
      self.steps_per_execution = steps_per_execution
      ...
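
To see why this matters, recall that every call into a tf.function carries fixed Python and dispatch overhead. Below is a toy sketch (not part of Keras or SimplifiedModel) showing how executing k steps inside one traced function amortizes that overhead k-fold:

@tf.function
def run_k_steps(iterator, k):
  for _ in tf.range(k):
    # Stand-in for a real train step; all k steps run inside one graph call.
    _ = next(iterator)

it = iter(tf.data.Dataset.range(100).batch(10))
run_k_steps(it, tf.constant(5))  # one Python call, five in-graph steps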

Keras Callbacks

Keras callbacks are objects that perform actions at various stages of training. There is a large library of existing callbacks to handle things like:

  • Write TensorBoard logs after every batch of training to monitor your metrics
  • Periodically save your model to disk
  • Do early stopping
  • Get a view on internal states and statistics of a model during training
  • ...and more

You can read more about callbacks here: https://keras.io/api/callbacks/
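
Callbacks are simply objects with hook methods that fit() invokes at well-defined points. As an illustrative example (not a built-in), a minimal custom callback might look like:

class PrintBatchCallback(tf.keras.callbacks.Callback):
  """Toy callback that logs every finished training batch."""

  def on_train_batch_end(self, batch, logs=None):
    print(f'finished batch {batch}')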

Let's integrate Keras callbacks into our SimplifiedModel class.

In order to do so, we will need to add a callbacks parameter to our fit() method. Additionally, we will wrap the callbacks in a Keras CallbackList, which dispatches each hook to every callback and adds History and progress-bar callbacks for us:

  def fit(
      self,
      dataset,
      epochs=1,
      verbose=1,
      steps_per_epoch=sys.maxsize,  # default to max to iterate entire dataset
      callbacks=None):
    """This simplified version of fit only accepts a TensorFlow dataset.

    Args:
      dataset: tf.data.Dataset, must yield a tuple of inputs and a one hot
        encoded vector containing labels
      epochs: number of passes to perform over the dataset
      verbose: verbosity of logging during fit
      steps_per_epoch: number of steps that counts as an epoch, useful with
        endless datasets.  When using a finite dataset, leave as sys.maxsize.
      callbacks: list of Keras callbacks
    """
    callbacks = tf.keras.callbacks.CallbackList(
        callbacks,
        add_history=True,
        add_progbar=verbose != 0,
        model=self,
        verbose=verbose,
        epochs=epochs)

    dataset = self.distribute_strategy.experimental_distribute_dataset(dataset)

    if self.distribute_strategy._should_use_with_coordinator:  # pylint: disable=protected-access
      self._cluster_coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(
          self.distribute_strategy)
      dataset = self._cluster_coordinator.create_per_worker_dataset(dataset)

    self.make_train_function()
    self._train_counter.assign(0)
    callbacks.on_train_begin()
    for epoch in range(epochs):
      iterator = iter(dataset)
      callbacks.on_epoch_begin(epoch)
      for step in range(0, steps_per_epoch, self.steps_per_execution):
        callbacks.on_train_batch_begin(step)
        try:
          # returns {'loss': loss, 'metric1': val1, ...}
          unused_metrics = self.train_function(iterator)
        except tf.errors.OutOfRangeError:
          break
        callbacks.on_train_batch_end(step)
      callbacks.on_epoch_end(epoch, None)
    callbacks.on_train_end()

We can now pass any Keras callbacks to fit() and have it behave as expected. Additionally, we now get the Keras progress bar when running fit.
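
For example, using the illustrative PrintBatchCallback defined earlier:

model.fit(ds, epochs=2, callbacks=[PrintBatchCallback()])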

Final Usage, Recap

The final code for the SimplifiedModel class is available below:

"""SimplifiedModel is a barebones Keras model class.

The intended use of this class is for end users to fork this class and replace
`compile()`, `fit()` and `predict()` with their own logic.
"""
import sys

import tensorflow as tf
import tensorflow.keras.optimizers as optimizers


def scale_loss_for_distribution(loss_value):
  """Scales an extra loss by the number of replicas, mirroring Keras'
  internal losses_utils.scale_loss_for_distribution helper."""
  num_replicas = tf.distribute.get_strategy().num_replicas_in_sync
  if num_replicas > 1:
    loss_value *= (1. / num_replicas)
  return loss_value


class SimplifiedModel(tf.keras.layers.Layer):
  """SimplifiedModel is a stripped down barebones version of keras.Model."""
  
  def __init__(self, *args, **kwargs):
    super(SimplifiedModel, self).__init__(*args, **kwargs)
    self.dense = tf.keras.layers.Dense(1)
    self.distribute_strategy = tf.distribute.get_strategy()
    agg = tf.VariableAggregation.ONLY_FIRST_REPLICA
    self._train_counter = tf.Variable(0, dtype='int64', aggregation=agg, trainable=False)
    self._cluster_coordinator = None
    if self.distribute_strategy._should_use_with_coordinator:
      self._cluster_coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(
          self.distribute_strategy)

  def compile(self,
              optimizer='rmsprop',
              loss=None,
              metrics=None,
              loss_weights=None,
              weighted_metrics=None,
              steps_per_execution=1):
    # We need to compile the loss and metrics within the strategy scope
    with self.distribute_strategy.scope():
      self.optimizer = optimizers.get(optimizer)
      self.loss = loss
      metrics = metrics or []
      self.metrics_list = metrics if isinstance(metrics, list) else [metrics]
      self.steps_per_execution = steps_per_execution
      self.train_function = None
      self._is_compiled = True

  def call(self, inputs):
    return self.dense(inputs)

  def predict_step(self, x):
    return self(x, training=False)

  def train_step(self, data):
    x, y = data
    # Run forward pass.
    with tf.GradientTape() as tape:
      y_pred = self(x, training=True)
      loss = self.loss(y, y_pred)
      for extra_loss in self.losses:
        loss += scale_loss_for_distribution(extra_loss)

    # Run backwards pass.
    self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
    # Collect metrics to return
    return_metrics = {'loss': loss}
    for metric in self.metrics_list:
      metric.update_state(y, y_pred, None)
      result = metric.result()
      if isinstance(result, dict):
        return_metrics.update(result)
      else:
        return_metrics[metric.name] = result
    return return_metrics

  def fit(
      self,
      dataset,
      epochs=1,
      verbose=1,
      steps_per_epoch=sys.maxsize,  # default to max to iterate entire dataset
      callbacks=None):
    """This simplified version of fit only accepts a TensorFlow dataset.

    Args:
      dataset: tf.data.Dataset, must yield a tuple of inputs and a one hot
        encoded vector containing labels
      epochs: number of passes to perform over the dataset
      verbose: verbosity of logging during fit
      steps_per_epoch: number of steps that counts as an epoch, useful with
        endless datasets.  When using a finite dataset, leave as sys.maxsize.
      callbacks: list of Keras callbacks
    """
    callbacks = tf.keras.callbacks.CallbackList(
        callbacks,
        add_history=True,
        add_progbar=verbose != 0,
        model=self,
        verbose=verbose,
        epochs=epochs)

    dataset = self.distribute_strategy.experimental_distribute_dataset(dataset)

    if self.distribute_strategy._should_use_with_coordinator:  # pylint: disable=protected-access
      self._cluster_coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(
          self.distribute_strategy)
      dataset = self._cluster_coordinator.create_per_worker_dataset(dataset)

    self.make_train_function()
    self._train_counter.assign(0)
    callbacks.on_train_begin()
    for epoch in range(epochs):
      iterator = iter(dataset)
      callbacks.on_epoch_begin(epoch)
      for step in range(0, steps_per_epoch, self.steps_per_execution):
        callbacks.on_train_batch_begin(step)
        try:
          # returns {'loss': loss, 'metric1': val1, ...}
          unused_metrics = self.train_function(iterator)
        except tf.errors.OutOfRangeError:
          break
        callbacks.on_train_batch_end(step)
      callbacks.on_epoch_end(epoch, None)
    callbacks.on_train_end()

  def make_train_function(self):
    if self.train_function:
      return self.train_function

    def step_function(model, iterator):

      def run_step(data):
        outputs = model.train_step(data)
        model._train_counter.assign_add(1)  # pylint: disable=protected-access
        return outputs

      data = next(iterator)
      outputs = model.distribute_strategy.run(run_step, args=(data,))
      return model.distribute_strategy.unwrap(outputs)[0]

    def train_function(iterator):
      """Runs a training execution with multiple steps."""
      # Autograph cannot infer the return type of undeclared non-Tensor
      # variables from inside loops. The limitations documentation explains this
      # https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/limitations.md
      names = [m.name for m in self.metrics_list] + ['loss']
      outputs = dict.fromkeys(names, 0.)
      for _ in tf.range(self.steps_per_execution):
        outputs = step_function(self, iterator)
      return outputs

    train_function = tf.function(train_function, experimental_relax_shapes=True)

    # A separate function is needed to prevent self-referential
    # infinitely-recursive closures
    cluster_train_function = None
    if self._cluster_coordinator:
      # pylint: disable=g-long-lambda
      cluster_train_function = lambda it: self._cluster_coordinator.schedule(
          train_function, args=(it,))

    self.train_function = cluster_train_function or train_function
    return self.train_function

  def test_step(self, x, y):
    y_pred = self(x, training=False)
    loss = self.loss(y, y_pred)
    for extra_loss in self.losses:
      loss += scale_loss_for_distribution(extra_loss)

    return_metrics = {
        'loss': loss,
    }

    for metric in self.metrics_list:
      metric.update_state(y, y_pred, None)
      result = metric.result()
      if isinstance(result, dict):
        return_metrics.update(result)
      else:
        return_metrics[metric.name] = result
    return return_metrics

  def evaluate(self, dataset):
    metrics_aggregate = []
    for xs, ys in dataset:
      # Reset metric state so each batch's results are independent.
      self.reset_metrics()
      metrics_aggregate.append(self.test_step(xs, ys))

    if not metrics_aggregate:
      raise ValueError('dataset must contain at least one batch of samples.  '
                       f'Received: {dataset}')

    result = {}
    for k in metrics_aggregate[0]:
      result[k] = 0.

    for metric_iter in metrics_aggregate:
      for k, v in metric_iter.items():
        result[k] += v / len(metrics_aggregate)
    return result

  def predict(self, dataset):
    result = []
    for xs in dataset:
      result.append(self(xs, training=False))
    return tf.concat(result, axis=0)

  def reset_metrics(self):
    for metric in self.metrics_list:
      metric.reset_state()

Below is an example use of the SimplifiedModel class:

import numpy as np
import tensorflow.keras.losses as losses

try:
  strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
  print("Using OneDeviceStrategy")
except Exception:  # fall back to the default strategy if no GPU is available
  strategy = tf.distribute.get_strategy()
  print("Using", strategy)

# Make sure to create all `tf.Variable`s under the `scope`.
with strategy.scope():
  model = SimplifiedModel()

  x, y = np.zeros((1000, 10)), np.ones((1000, 1))
  ds = tf.data.Dataset.from_tensor_slices((x, y))
  # Our model expects the dataset to be batched
  ds = ds.batch(10)

  ds_test = tf.data.Dataset.from_tensor_slices((x))
  ds_test = ds_test.batch(10)

  model.compile('sgd',
    loss=losses.MeanSquaredError(reduction=tf.keras.losses.Reduction.SUM),
    metrics=[tf.keras.metrics.MeanAbsolutePercentageError()],
    steps_per_execution=5
  )
  metrics = model.evaluate(ds)
  print('Metrics Before Fit:', metrics)

  model.fit(ds, epochs=10, verbose=2)

  # we can go ahead and predict some values
  y_pred = model.predict(ds_test)
  print('Predictions Shape:', y_pred.shape)

  metrics = model.evaluate(ds)
  print('Metrics After:', metrics)
Using OneDeviceStrategy
Metrics Before Fit: {'loss': ..., 'mean_absolute_percentage_error': ...}
Epoch 1/10
101/101 - 1s - 538ms/epoch - 5ms/step
Epoch 2/10
101/101 - 0s - 214ms/epoch - 2ms/step
Epoch 3/10
101/101 - 0s - 234ms/epoch - 2ms/step
Epoch 4/10
101/101 - 0s - 223ms/epoch - 2ms/step
Epoch 5/10
101/101 - 0s - 221ms/epoch - 2ms/step
Epoch 6/10
101/101 - 0s - 223ms/epoch - 2ms/step
Epoch 7/10
101/101 - 0s - 220ms/epoch - 2ms/step
Epoch 8/10
101/101 - 0s - 213ms/epoch - 2ms/step
Epoch 9/10
101/101 - 0s - 248ms/epoch - 2ms/step
Epoch 10/10
101/101 - 0s - 251ms/epoch - 2ms/step
Predictions Shape: (1000, 1)
Metrics After: {'loss': ..., 'mean_absolute_percentage_error': ...}

%%timeit
model.fit(ds, epochs=10, verbose=2)
Epoch 1/10
101/101 - 0s - 482ms/epoch - 5ms/step
Epoch 2/10
101/101 - 0s - 276ms/epoch - 3ms/step
Epoch 3/10
101/101 - 0s - 255ms/epoch - 3ms/step
Epoch 4/10
101/101 - 0s - 224ms/epoch - 2ms/step
Epoch 5/10
101/101 - 0s - 234ms/epoch - 2ms/step
Epoch 6/10
101/101 - 0s - 221ms/epoch - 2ms/step
Epoch 7/10
101/101 - 0s - 249ms/epoch - 2ms/step
Epoch 8/10
101/101 - 0s - 244ms/epoch - 2ms/step
Epoch 9/10
101/101 - 0s - 240ms/epoch - 2ms/step
Epoch 10/10
101/101 - 0s - 245ms/epoch - 2ms/step
...
1 loop, best of 5: 3.02 s per loop

As you can see, model.fit() now runs significantly faster than it did before implementing our performance enhancements.

Conclusion

The keras.Model class encapsulates a large set of functionality related to training. As a result, the lifecycle of the keras.Model class is significantly complex.

The SimplifiedModel class we implemented shows how the core functionality of keras.Model works while remaining terse and readable. While the SimplifiedModel class is missing a large portion of the true keras.Model class's functionality, it is still a useful starting point for implementing custom training loops that work with TensorFlow distribution strategies.

A forkable template using the SimplifiedModel class is available at https://github.com/lukewood/ModelWalkthrough.
