Pretty Tensor - Fluent Neural Networks in TensorFlow

Overview

Pretty Tensor provides a high-level builder API for TensorFlow: thin wrappers around Tensors that let you easily build multi-layer neural networks.

Pretty Tensor provides a set of objects that behave like Tensors but also support a chainable object syntax for quickly defining neural networks and other layered architectures in TensorFlow.

result = (pretty_tensor.wrap(input_data, m)
          .flatten()
          .fully_connected(200, activation_fn=tf.nn.relu)
          .fully_connected(10, activation_fn=None)
          .softmax(labels, name=softmax_name))

See Available Operations for full documentation of the PrettyTensor object and all available operations, or check out the complete documentation.

See the tutorial directory for samples: tutorial/

Installation

The easiest installation is just to use pip:

  1. Follow the instructions at tensorflow.org
  2. pip install prettytensor

Note: head is tested against the TensorFlow nightly builds, and the pip release is tested against the TensorFlow release.

Quick start

Imports

import numpy as np
import prettytensor as pt
import tensorflow as tf

Set up your input

my_inputs = ...  # numpy array of shape (BATCHES, BATCH_SIZE, DATA_SIZE)
my_labels = ...  # numpy array of shape (BATCHES, BATCH_SIZE, CLASSES)
input_tensor = tf.placeholder(np.float32, shape=(BATCH_SIZE, DATA_SIZE))
label_tensor = tf.placeholder(np.float32, shape=(BATCH_SIZE, CLASSES))
pretty_input = pt.wrap(input_tensor)

Define your model

softmax, loss = (pretty_input
                 .fully_connected(100)
                 .softmax_classifier(CLASSES, labels=label_tensor))

Train and evaluate

accuracy = softmax.evaluate_classifier(label_tensor)

optimizer = tf.train.GradientDescentOptimizer(0.1)  # learning rate
train_op = pt.apply_optimizer(optimizer, losses=[loss])

init_op = tf.initialize_all_variables()

with tf.Session() as sess:
    sess.run(init_op)
    for inp, label in zip(my_inputs, my_labels):
        _, accuracy_value = sess.run([train_op, accuracy],
                                     {input_tensor: inp, label_tensor: label})
        print('Accuracy: %g' % accuracy_value)

Features

Thin

Full power of TensorFlow is easy to use

Pretty Tensors can be used (almost) everywhere that a tensor can. Just call pt.wrap to make a tensor pretty.

You can also add any existing TensorFlow function to the chain using apply. apply passes the current Tensor as the first argument and takes all other arguments as normal.

Note: because apply is so generic, Pretty Tensor doesn't try to wrap the world.
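For instance, the sketch below chains a standard TensorFlow op into a Pretty Tensor; tf.nn.dropout takes the tensor first and keep_prob second, and 0.5 here is just an illustrative value.

# The wrapped tensor becomes tf.nn.dropout's first argument; the
# remaining arguments (keep_prob=0.5) are passed through as normal.
dropped = pt.wrap(input_tensor).apply(tf.nn.dropout, 0.5)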

Plays well with other libraries

It also uses standard TensorFlow idioms, so it plays well with other libraries; this means that you can use it for a single layer or throughout your model. Just make sure to run the update_ops on each training step (see with_update_ops).
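If you manage the training op yourself, one plain-TensorFlow pattern (a sketch using standard collections, not a Pretty Tensor-specific API) is to group the collected update ops with the train op so both run every step:

# Assumes update ops are registered in the standard collection.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
train_step = tf.group(train_op, *update_ops)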

Terse

You've already seen how a Pretty Tensor is chainable, and you may have noticed that it takes care of handling the input shape. One other feature worth noting is defaults. Using defaults, you can specify reused values in a single place without having to repeat yourself.

with pt.defaults_scope(activation_fn=tf.nn.relu):
  hidden_output2 = (pretty_images.flatten()
                    .fully_connected(100)
                    .fully_connected(100))

Check out the documentation to see all supported defaults.
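A scoped default can also be overridden at any individual call; a minimal sketch (reusing the pretty_images input from above, with a hypothetical 10-class output layer):

with pt.defaults_scope(activation_fn=tf.nn.relu):
  logits = (pretty_images.flatten()
            .fully_connected(100)  # picks up the scoped ReLU default
            .fully_connected(10, activation_fn=None))  # explicit argument wins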

Code matches model

Sequential mode lets you break model construction across lines and provides the subdivide syntactic sugar that makes it easy to define and understand complex structures like an inception module:

with pt.defaults_scope(activation_fn=tf.nn.relu):
  seq = pretty_input.sequential()
  with seq.subdivide(4) as towers:
    towers[0].conv2d(1, 64)
    towers[1].conv2d(1, 112).conv2d(3, 224)
    towers[2].conv2d(1, 32).conv2d(5, 64)
    towers[3].max_pool(2, 3).conv2d(1, 32)

Inception module showing branch and rejoin
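When the with block exits, the four tower outputs are rejoined (concatenated along depth) back into seq, so the chain can continue as a single layer, matching the branch-and-rejoin structure in the figure.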

Templates provide guaranteed parameter reuse and make unrolling recurrent networks easy:

output = []
s = tf.zeros([BATCH, 256 * 2])

A = (pt.template('x')
     .lstm_cell(num_units=256, state=pt.UnboundVariable('state')))

for x in pretty_input_array:
  h, s = A.construct(x=x, state=s)
  output.append(h)

There are also some convenient shorthands for LSTMs and GRUs:

pretty_input_array.sequence_lstm(num_units=256)

Unrolled RNN

Extensible

You can call any existing operation by using apply; it will simply substitute the current tensor as the first argument.

pretty_input.apply(tf.mul, 5)

You can also create new operations. There are two supported registration mechanisms for adding your own functions. @Register() allows you to create a method on PrettyTensor that operates on the Tensors and returns either a loss or a new value. Name scoping and variable scoping are handled by the framework.

The following method adds the leaky_relu method to every Pretty Tensor:

@pt.Register
def leaky_relu(input_pt):
  return tf.select(tf.greater(input_pt, 0.0), input_pt, 0.01 * input_pt)

@RegisterCompoundOp() is like adding a macro; it is designed to group together common sets of operations.
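For instance, a sketch of a compound op (fc_with_dropout is a hypothetical name, and this assumes the registered function receives the input PrettyTensor as its first argument, as @Register does):

@pt.RegisterCompoundOp()
def fc_with_dropout(input_layer, size, keep_prob):
  # Group two existing operations into one reusable method.
  return input_layer.fully_connected(size).apply(tf.nn.dropout, keep_prob)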

Safe variable reuse

Within a graph, you can reuse variables by using templates. A template is just like a regular graph except that some variables are left unbound.
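A minimal sketch of reuse (train_input and test_input are hypothetical placeholders): constructing the same template twice binds different inputs to a single shared set of weights.

# Both constructions share one set of fully_connected parameters.
shared = pt.template('x').fully_connected(100)
train_out = shared.construct(x=train_input)
test_out = shared.construct(x=test_input)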

See the PrettyTensor class for more details.

Accessing Variables

Pretty Tensor uses the standard graph collections from TensorFlow to store variables. These can be accessed using tf.get_collection(key) with the following keys:

  • tf.GraphKeys.VARIABLES: all variables that should be saved (including some statistics).
  • tf.GraphKeys.TRAINABLE_VARIABLES: all variables that can be trained (including those defined before a stop_gradients call). These are what would typically be called the parameters of the model in ML parlance.
  • pt.GraphKeys.TEST_VARIABLES: variables used to evaluate a model. These are typically not saved and are reset by the LocalRunner.evaluate method to get a fresh evaluation.
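
For example, fetching every trainable parameter is just a standard collection lookup:

# All trainable parameters of the model (weights, biases, etc.).
params = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)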

Authors

Eider Moore (eiderman)

with key contributions from:

  • Hubert Eichner
  • Oliver Lange
  • Sagar Jain (sagarjn)