ML models and internal tensors 3D visualizer

Overview


The free Zetane Viewer is a tool to help understand and accelerate discovery in machine learning and artificial neural networks. It can be used to open the AI black box by visualizing and understanding a model's architecture and internal data (feature maps, weights, biases, and layer output tensors). Think of it as neuroimaging, or brain imaging, for artificial neural networks and machine learning algorithms.

You can also launch your own Zetane workspace directly from your existing scripts or notebooks via a few commands using the Zetane Python API.
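For instance, here is a minimal sketch of that pattern based on the getting-started guide; the object and method names used below (Context, model, onnx, update) are assumptions that may differ between API versions, so treat docs.zetane.com as the authority:

    # A hedged sketch: the names below follow the pattern in the
    # getting-started guide but are assumptions, not a verbatim copy
    # of the Zetane API. See https://docs.zetane.com/getting_started.html
    import zetane as ztn

    zctx = ztn.Context()          # launch and connect to the Zetane engine
    zmodel = zctx.model()         # create a model object in the workspace
    zmodel.onnx("my_model.onnx")  # point it at an ONNX file on disk
    zmodel.update()               # render the architecture in the viewer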



Zetane Viewer

Installation

You can install the free Zetane viewer for Windows, Linux and Mac, and explore ZTN and ONNX files.

Download for Windows

Download for Linux

Download for Mac

Tutorial

In this video, we will show you how to load a Zetane or ONNX model, navigate the model and view different tensors:

Below are step-by-step instructions for loading and inspecting a model in the Zetane viewer:

  • How to load a model

The viewer supports both .onnx and .ztn files. The ZTN files were generated from the Keras and PyTorch scripts shared in this Git repository. After launching the viewer, to load a Zetane model, simply click “Load Zetane Model” in the DATA I/O menu. To load an ONNX model, click “Import ONNX Model” in the same menu. Below you can access the ZTN files for a few models to load. You can also access ONNX files from the ONNX Model Zoo.
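If a file refuses to load, it can help to sanity-check it first with the official onnx Python package; this is a generic check, independent of Zetane:

    import onnx

    model = onnx.load("model.onnx")  # the file you plan to import
    onnx.checker.check_model(model)  # raises an exception if the file is malformed
    print(model.opset_import)        # the ONNX opset the runtime must support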


When a model is displayed in the Zetane engine, any component of the model can be accessed in a few clicks.

At the highest level is the model architecture, which is composed of interconnected nodes and tensors. Each node represents an operator of the computational graph. Typically, an input tensor is passed to the model and, as it moves through the nodes, is transformed into intermediate tensors until it reaches the model's output tensor. In the Zetane engine, data flows from left to right.
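The same operator-and-tensor structure can be inspected outside the viewer by walking an ONNX graph with the onnx package; each node lists the tensors it consumes and produces:

    import onnx

    graph = onnx.load("model.onnx").graph
    for node in graph.node[:5]:  # the first few operators of the computational graph
        print(node.op_type, list(node.input), "->", list(node.output))
    print("graph inputs: ", [i.name for i in graph.input])
    print("graph outputs:", [o.name for o in graph.output])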


  • How to navigate

You can navigate the model viewer window by right-clicking and dragging to explore the space, and by using the scroll wheel to zoom in and out. Here is the complete list of navigation instructions. You can change the behavior of the mouse wheel (either zoom or navigate) via the Mouse Zoom toggle in the top menu.


  • Loading custom model inputs

After loading a model, you may want to run inference on your own inputs. Zetane supports loading .npy, .npz, .png, .jpg, .pb (protobuf), .tiff, and .hdr files that match the input dimensions of the model. The Zetane engine will attempt to intelligently resize the loaded file (if possible) in order to send the data to the model. After loading and running the input, you can explore in detail how your model interpreted the input data.
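For example, to prepare a .npy input for a model that declares a [1, 1, 64, 64] grayscale input (the emotion-ferplus model discussed in the comments below declares exactly this shape), a few lines of NumPy and Pillow are enough; adjust the size and channel layout to your own model:

    import numpy as np
    from PIL import Image

    # Resize a grayscale image to the model's declared input shape.
    img = Image.open("face.jpg").convert("L").resize((64, 64))
    x = np.asarray(img, dtype=np.float32)[None, None, :, :]  # shape (1, 1, 64, 64)
    np.save("input.npy", x)  # load this file from the viewer's input node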


  • How to inspect different layers and feature maps

For each layer, you can view all the feature maps and filters by clicking “Show Feature Maps” on the node. You can inspect the inputs, outputs, weights, and biases using the tensor view bar.


  • Tensor view bar

By clicking the associated button, you can visualize the inputs, outputs, weights, and biases (if applicable) of each individual layer. You can also investigate each tensor's shape, type, mean, and standard deviation.


Statistics about the tensor's values and their distribution are shown in the histogram in the top panel, along with the tensor's name and shape. The tensor and its values are represented in the middle panel, and the bottom section contains tensor visualization parameters and a refresh button that lets you refresh the tensor. This is useful when the input or the weights change in real time.
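These statistics are easy to reproduce offline for a tensor saved as .npy, which is handy for cross-checking what the panel reports:

    import numpy as np

    t = np.load("tensor.npy")  # e.g. an exported feature map or weight tensor
    print("shape:", t.shape, "dtype:", t.dtype)
    print("mean:", t.mean(), "std:", t.std())
    hist, edges = np.histogram(t, bins=50)  # analogous to the top-panel histogram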


  • Styles of tensor visualization

Tensors can be inspected in different ways, including 3D view and 2D view with and without actual values.




The available tensor views:

  • N-dimensional tensor projected in 3D space
  • N-dimensional tensor projected in 2D space
  • Tensor values, with a color representation of each value based on the gradient shown on the x-axis of the distribution histogram
  • Tensor values only
  • Feature-map view, when the tensor has three dimensions

Models

We have generated a few ZTN models for inspecting their architecture and internal tensors in the viewer. We have also provided the code used to generate these models.
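The ZTN files come from the scripts in this repository; to produce a plain ONNX file you can open in the viewer, a standard PyTorch export is enough. A minimal sketch, using a torchvision ResNet-18 as an arbitrary example:

    import torch
    import torchvision.models as models

    # Export any torch.nn.Module to ONNX for inspection in the viewer.
    model = models.resnet18(weights=None).eval()
    dummy = torch.randn(1, 3, 224, 224)  # sample input used for tracing
    torch.onnx.export(model, dummy, "resnet18.onnx",
                      input_names=["input"], output_names=["output"])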

Image Classification

Object Detection

Image Segmentation

Body, Face and Gesture Analysis

Image Manipulation

XAI

Classic Machine Learning




Comments
  • BUG: Viewer crashes when loading any model


    I've tried loading multiple models including emotion-ferplus (both onnx and ztn formats) but they always immediately crash the viewer.

    OS: Ubuntu 20.04
    Zetane: 1.3.2
    Dump:

    LoadUniverse(): ZTN_REQUIRE_LOGIN = 1 
    online = 0 
    ================== ExposeIRnodes: ================== 
    @@@ ExposeIRnodes() n_IR_outputs = 51. 
     <- [Parameter1367_reshape1. 
     <- [Minus340_Output_0. 
     <- [Block352_Output_0. 
     <- [Convolution362_Output_0. 
     <- [Plus364_Output_0. 
     <- [ReLU366_Output_0. 
     <- [Convolution380_Output_0. 
     <- [Plus382_Output_0. 
     <- [ReLU384_Output_0. 
     <- [Pooling398_Output_0. 
     <- [Dropout408_Output_0. 
     <- [Convolution418_Output_0. 
     <- [Plus420_Output_0. 
     <- [ReLU422_Output_0. 
     <- [Convolution436_Output_0. 
     <- [Plus438_Output_0. 
     <- [ReLU440_Output_0. 
     <- [Pooling454_Output_0. 
     <- [Dropout464_Output_0. 
     <- [Convolution474_Output_0. 
     <- [Plus476_Output_0. 
     <- [ReLU478_Output_0. 
     <- [Convolution492_Output_0. 
     <- [Plus494_Output_0. 
     <- [ReLU496_Output_0. 
     <- [Convolution510_Output_0. 
     <- [Plus512_Output_0. 
     <- [ReLU514_Output_0. 
     <- [Pooling528_Output_0. 
     <- [Dropout538_Output_0. 
     <- [Convolution548_Output_0. 
     <- [Plus550_Output_0. 
     <- [ReLU552_Output_0. 
     <- [Convolution566_Output_0. 
     <- [Plus568_Output_0. 
     <- [ReLU570_Output_0. 
     <- [Convolution584_Output_0. 
     <- [Plus586_Output_0. 
     <- [ReLU588_Output_0. 
     <- [Pooling602_Output_0. 
     <- [Dropout612_Output_0. 
     <- [Dropout612_Output_0_reshape0. 
     <- [Times622_Output_0. 
     <- [Plus624_Output_0. 
     <- [ReLU636_Output_0. 
     <- [Dropout646_Output_0. 
     <- [Times656_Output_0. 
     <- [Plus658_Output_0. 
     <- [ReLU670_Output_0. 
     <- [Dropout680_Output_0. 
     <- [Times690_Output_0. 
    node_name = Node_0000000000_Times622_reshape1_Reshape. 
     -> [Parameter1367_reshape1. 
    node_name = Node_0000000001_Minus340_Sub. 
     -> [Minus340_Output_0. 
    node_name = Node_0000000002_Block352_Div. 
     -> [Block352_Output_0. 
    node_name = Node_0000000003_Convolution362_Conv. 
     -> [Convolution362_Output_0. 
    node_name = Node_0000000004_Plus364_Add. 
     -> [Plus364_Output_0. 
    node_name = Node_0000000005_ReLU366_Relu. 
     -> [ReLU366_Output_0. 
    node_name = Node_0000000006_Convolution380_Conv. 
     -> [Convolution380_Output_0. 
    node_name = Node_0000000007_Plus382_Add. 
     -> [Plus382_Output_0. 
    node_name = Node_0000000008_ReLU384_Relu. 
     -> [ReLU384_Output_0. 
    node_name = Node_0000000009_Pooling398_MaxPool. 
     -> [Pooling398_Output_0. 
    node_name = Node_0000000010_Dropout408_Dropout. 
     -> [Dropout408_Output_0. 
    node_name = Node_0000000011_Convolution418_Conv. 
     -> [Convolution418_Output_0. 
    node_name = Node_0000000012_Plus420_Add. 
     -> [Plus420_Output_0. 
    node_name = Node_0000000013_ReLU422_Relu. 
     -> [ReLU422_Output_0. 
    node_name = Node_0000000014_Convolution436_Conv. 
     -> [Convolution436_Output_0. 
    node_name = Node_0000000015_Plus438_Add. 
     -> [Plus438_Output_0. 
    node_name = Node_0000000016_ReLU440_Relu. 
     -> [ReLU440_Output_0. 
    node_name = Node_0000000017_Pooling454_MaxPool. 
     -> [Pooling454_Output_0. 
    node_name = Node_0000000018_Dropout464_Dropout. 
     -> [Dropout464_Output_0. 
    node_name = Node_0000000019_Convolution474_Conv. 
     -> [Convolution474_Output_0. 
    node_name = Node_0000000020_Plus476_Add. 
     -> [Plus476_Output_0. 
    node_name = Node_0000000021_ReLU478_Relu. 
     -> [ReLU478_Output_0. 
    node_name = Node_0000000022_Convolution492_Conv. 
     -> [Convolution492_Output_0. 
    node_name = Node_0000000023_Plus494_Add. 
     -> [Plus494_Output_0. 
    node_name = Node_0000000024_ReLU496_Relu. 
     -> [ReLU496_Output_0. 
    node_name = Node_0000000025_Convolution510_Conv. 
     -> [Convolution510_Output_0. 
    node_name = Node_0000000026_Plus512_Add. 
     -> [Plus512_Output_0. 
    node_name = Node_0000000027_ReLU514_Relu. 
     -> [ReLU514_Output_0. 
    node_name = Node_0000000028_Pooling528_MaxPool. 
     -> [Pooling528_Output_0. 
    node_name = Node_0000000029_Dropout538_Dropout. 
     -> [Dropout538_Output_0. 
    node_name = Node_0000000030_Convolution548_Conv. 
     -> [Convolution548_Output_0. 
    node_name = Node_0000000031_Plus550_Add. 
     -> [Plus550_Output_0. 
    node_name = Node_0000000032_ReLU552_Relu. 
     -> [ReLU552_Output_0. 
    node_name = Node_0000000033_Convolution566_Conv. 
     -> [Convolution566_Output_0. 
    node_name = Node_0000000034_Plus568_Add. 
     -> [Plus568_Output_0. 
    node_name = Node_0000000035_ReLU570_Relu. 
     -> [ReLU570_Output_0. 
    node_name = Node_0000000036_Convolution584_Conv. 
     -> [Convolution584_Output_0. 
    node_name = Node_0000000037_Plus586_Add. 
     -> [Plus586_Output_0. 
    node_name = Node_0000000038_ReLU588_Relu. 
     -> [ReLU588_Output_0. 
    node_name = Node_0000000039_Pooling602_MaxPool. 
     -> [Pooling602_Output_0. 
    node_name = Node_0000000040_Dropout612_Dropout. 
     -> [Dropout612_Output_0. 
    node_name = Node_0000000041_Times622_reshape0_Reshape. 
     -> [Dropout612_Output_0_reshape0. 
    node_name = Node_0000000042_Times622_MatMul. 
     -> [Times622_Output_0. 
    node_name = Node_0000000043_Plus624_Add. 
     -> [Plus624_Output_0. 
    node_name = Node_0000000044_ReLU636_Relu. 
     -> [ReLU636_Output_0. 
    node_name = Node_0000000045_Dropout646_Dropout. 
     -> [Dropout646_Output_0. 
    node_name = Node_0000000046_Times656_MatMul. 
     -> [Times656_Output_0. 
    node_name = Node_0000000047_Plus658_Add. 
     -> [Plus658_Output_0. 
    node_name = Node_0000000048_ReLU670_Relu. 
     -> [ReLU670_Output_0. 
    node_name = Node_0000000049_Dropout680_Dropout. 
     -> [Dropout680_Output_0. 
    node_name = Node_0000000050_Times690_MatMul. 
     -> [Times690_Output_0. 
    node_name = Node_0000000051_Plus692_Add. 
    @@@ ExposeIRnodes() [Outputs] = 1 --> 52. 
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
    ***************** ValidateIRnodes: ***************** 
    ====================================  
    input_dims = [ 1, 1, 64, 64, ]. 
    --> input 0[Input3]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ ]. 
    --> input 1[Constant339]: Type 1; [0 dims] tensor  
    --------------- 
    input_dims = [ ]. 
    --> input 2[Constant343]: Type 1; [0 dims] tensor  
    --------------- 
    input_dims = [ 64, 1, 3, 3, ]. 
    --> input 3[Parameter3]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 64, 1, 1, ]. 
    --> input 4[Parameter4]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 64, 64, 3, 3, ]. 
    --> input 5[Parameter23]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 64, 1, 1, ]. 
    --> input 6[Parameter24]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 128, 64, 3, 3, ]. 
    --> input 7[Parameter63]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 128, 1, 1, ]. 
    --> input 8[Parameter64]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 128, 128, 3, 3, ]. 
    --> input 9[Parameter83]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 128, 1, 1, ]. 
    --> input 10[Parameter84]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 128, 3, 3, ]. 
    --> input 11[Parameter575]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 12[Parameter576]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 13[Parameter595]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 14[Parameter596]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 15[Parameter615]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 16[Parameter616]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 17[Parameter655]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 18[Parameter656]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 19[Parameter675]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 20[Parameter676]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 21[Parameter695]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 22[Parameter696]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 2, ]. 
    --> input 23[Dropout612_Output_0_reshape0_shape]: Type 7; [1 dims] tensor  
    --------------- 
    input_dims = [ 256, 4, 4, 1024, ]. 
    --> input 24[Parameter1367]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 2, ]. 
    --> input 25[Parameter1367_reshape1_shape]: Type 7; [1 dims] tensor  
    --------------- 
    input_dims = [ 1024, ]. 
    --> input 26[Parameter1368]: Type 1; [1 dims] tensor  
    --------------- 
    input_dims = [ 1024, 1024, ]. 
    --> input 27[Parameter1403]: Type 1; [2 dims] tensor  
    --------------- 
    input_dims = [ 1024, ]. 
    --> input 28[Parameter1404]: Type 1; [1 dims] tensor  
    --------------- 
    input_dims = [ 1024, 8, ]. 
    --> input 29[Parameter1693]: Type 1; [2 dims] tensor  
    --------------- 
    input_dims = [ 8, ]. 
    --> input 30[Parameter1694]: Type 1; [1 dims] tensor  
    --------------- 
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
    output_dims = [ 1, 8, ]. 
    --> output 0/1[Plus692_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 4096, 1024, ]. 
    --> output 1/1[Parameter1367_reshape1]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1, 64, 64, ]. 
    --> output 2/1[Minus340_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 1, 64, 64, ]. 
    --> output 3/1[Block352_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 4/1[Convolution362_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 5/1[Plus364_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 6/1[ReLU366_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 7/1[Convolution380_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 8/1[Plus382_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 9/1[ReLU384_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 32, 32, ]. 
    --> output 10/1[Pooling398_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 32, 32, ]. 
    --> output 11/1[Dropout408_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 12/1[Convolution418_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 13/1[Plus420_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 14/1[ReLU422_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 15/1[Convolution436_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 16/1[Plus438_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 17/1[ReLU440_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 16, 16, ]. 
    --> output 18/1[Pooling454_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 16, 16, ]. 
    --> output 19/1[Dropout464_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 20/1[Convolution474_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 21/1[Plus476_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 22/1[ReLU478_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 23/1[Convolution492_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 24/1[Plus494_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 25/1[ReLU496_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 26/1[Convolution510_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 27/1[Plus512_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 28/1[ReLU514_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 29/1[Pooling528_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 30/1[Dropout538_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 31/1[Convolution548_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 32/1[Plus550_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 33/1[ReLU552_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 34/1[Convolution566_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 35/1[Plus568_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 36/1[ReLU570_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 37/1[Convolution584_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 38/1[Plus586_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 39/1[ReLU588_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 4, 4, ]. 
    --> output 40/1[Pooling602_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 4, 4, ]. 
    --> output 41/1[Dropout612_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 4096, ]. 
    --> output 42/1[Dropout612_Output_0_reshape0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 43/1[Times622_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 44/1[Plus624_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 45/1[ReLU636_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 46/1[Dropout646_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 47/1[Times656_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 48/1[Plus658_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 49/1[ReLU670_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 50/1[Dropout680_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 8, ]. 
    --> output 51/1[Times690_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 
    ValidateIRnodes() 52 --> 52=52=52 valid output tensors  
    --------------- 
    ----------------- ValidateIRnodes. ----------------- 
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 
    *** ExposeIRnodes: 1 --> 52 Outputs. 
    *** type: [FLOAT] ~?= STRING 
    TVZ10()  input_dims = [ 1, 1, 64, 64, ]. 
    --> input [0]: Type FLOAT; [4 dims] tensor  
    --------------- 
    Warning: Could not load "/opt/zetane/lib/graphviz/libgvplugin_pango.so.6" - file not found
    terminate called after throwing an instance of 'std::invalid_argument'
      what():  stod
    /usr/bin/zetane: line 26: 37102 Aborted                 (core dumped) ./Zetane --server
    
    
    opened by paulgavrikov 4
  • Free Trial not Available


    After clicking "upgrade 2 pro", I arrive at your pricing page. Clicking on "free trial" redirects me to the documentation, which instructs me to click the button "upgrade 2 pro". Now I'm stuck in an infinite loop and unhappy about it.

    I'd like to successfully exit this loop and try your product. Any Tips?

    opened by Whadup 3
  • Installed the .deb on Ubuntu 20.04, but loading an input JPG crashes the viewer; how can I get a log to find the cause?

    (base) [email protected]:~/Downloads$ sudo dpkg -i Zetane-1.7.0.deb
    (Reading database ... 330200 files and directories currently installed.)
    Preparing to unpack Zetane-1.7.0.deb ...
    Unpacking zetane (1.7.0) over (1.7.0) ...
    Setting up zetane (1.7.0) ...
    Processing triggers for gnome-menus (3.36.0-1ubuntu1) ...
    Processing triggers for desktop-file-utils (0.24-1ubuntu3) ...
    Processing triggers for mime-support (3.64ubuntu1) ...
    Processing triggers for hicolor-icon-theme (0.17-2) ...

    opened by mathpopo 2
  • Engine is not launched after running the example 'hello world' code

    By following the guide at https://docs.zetane.com/getting_started.html#installation, I created a script to run the 'hello world' code. However, the engine launched but did not show anything.

    OS: Windows 10.0
    Zetane: 1.7.0

    Console output:

    Dialing Zetane... Did not connect!
    Dialing Zetane... Did not connect!
    Dialing Zetane... Did not connect!
    running process: /usr/bin/zetane --server 127.0.0.1 --port 4004
    Dialing Zetane... Did not connect!
    Dialing Zetane... Did not connect!
    Dialing Zetane... Connected to Zetane Engine!

    opened by wftubby 0
  • Engine is not launched after running the example 'hello world' code

    By following the guide at https://docs.zetane.com/getting_started.html#installation, I created a script to run the 'hello world' code. However, the engine was not launched and it kept printing "Dialing Zetane".

    OS: Ubuntu 18.04
    Zetane: 1.7.0

    Console output:

    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    running process: /usr/bin/zetane --server 127.0.0.1 --port 4004
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    
    
    opened by akzing-hz 6
Releases (v1.7.4)
  • v1.7.4 (Jun 1, 2022)

    Viewer Engine

    • Added support for ONNX 1.10.2
    • Added support for ONNX Runtime 1.10.0
    • Added support for Keras/TensorFlow 2.9.1
    • Improved progress notifications when loading Keras models
    • Fixed a crash caused by nested Keras models
    • Reduced Tensor viewer memory usage
    • Dropped support for Ubuntu 16.04 LTS. See the up-to-date Minimum Requirements.
    • Deprecated support for macOS 10.14 Mojave

    API

    • Added the Zetane API context manager to automate view updates and cleanup, resulting in less verbose code.
    • Added support for Python 3.9
    • Dropped support for Python 3.6
    • Fixed protobuf dependency versioning
    Source code(tar.gz)
    Source code(zip)
    Zetane-1.7.4.deb(273.45 MB)
    Zetane-1.7.4.dmg(312.91 MB)
    Zetane-1.7.4.msi(300.01 MB)
  • 1.7.0 (Nov 15, 2021)

  • 1.6.2 (Sep 22, 2021)

    • Added output blocks for models to prevent navigating past the end of the model graph
    • Added a Top-K output view for tensors that match certain shapes, e.g. (1, N); classification models now have more human-understandable output
    • Updated to onnxruntime 1.8.1 to support the latest ONNX opset
    • Improved autodetection of input shapes so more inputs pass inference without shape errors
    • Fixed RAM overuse
    • Fixed Mesh API issues
    Source code(tar.gz)
    Source code(zip)
    Zetane-1.6.2.deb(347.23 MB)
    Zetane-1.6.2.dmg(326.94 MB)
    Zetane-1.6.2.msi(301.44 MB)
  • 1.5.0 (Jun 16, 2021)

  • 1.4.0 (May 26, 2021)

  • 1.3.0 (Apr 21, 2021)

    • When ONNX models are loaded, an inference pass with sample data is run by default. That means all tensors / feature maps / weights / biases should be viewable immediately after input load. Please let us know if there are models that don't succeed at this initial pass so we can fix them!
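    Conceptually, that default pass resembles running the model once through ONNX Runtime with random data shaped to the declared input. A generic sketch of such a pass (not the engine's actual code):

        import numpy as np
        import onnxruntime as ort

        sess = ort.InferenceSession("model.onnx")
        inp = sess.get_inputs()[0]  # declared input name, shape, and type
        # Replace symbolic/dynamic dimensions with 1 to build a concrete shape.
        shape = [d if isinstance(d, int) else 1 for d in inp.shape]
        sample = np.random.rand(*shape).astype(np.float32)
        outputs = sess.run(None, {inp.name: sample})  # one full inference pass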

    (PRO) User input nodes are now attached to the model architecture diagram. When using Zetane Viewer Pro ($15/month) you can load custom inputs and send them through the model. Currently supported formats are .npy, .npz, .pb, and the majority of image formats (jpg, png, tiff, hdr, pic).

    (PRO) When user inputs are misshapen, the engine will display an error about the model's shape expectation. Note that this feature is also usable by free users, just without the error popup: the input node will load the user input and show its dimensions before attempting to run inference with the model.

    (PRO) Any errors during model inference will also appear in the UI; the shape error above is one example. Individual graph operations may fail at any point during the inference pass, so the engine will attempt to populate the graph outputs up to the point of the error, effectively giving a stack trace of the model run.

    As always, we welcome feedback, bug reports, and any suggestions you might have.

    Source code(tar.gz)
    Source code(zip)
    Zetane-1.3.0.deb(395.53 MB)
    Zetane-1.3.0.dmg(451.85 MB)
    Zetane-1.3.0.msi(452.15 MB)
  • 1.2.0 (Apr 5, 2021)

    • Shape mismatch errors for running model inference are shown in the UI, describing the expected input and the given input. (PRO)
    • Changed default UI interaction with a mouse wheel to zoom by default, right click to drag the UI.
    • Panels now scroll or move on hover, not just after being selected.
    • Tensor viewer displays the original shape from file or API, without reordering the dimensions to fit the view panel.
    • User notification for version upgrade now appears in the UI.
    • Mac / Linux now run in API mode by default.
    • Added a new ZTN snapshot for XAI features.
    • User inputs now show above the Model Explorer panel's input node.
    • A number of bug fixes and performance improvements
    Source code(tar.gz)
    Source code(zip)
    Zetane-1.2.0.deb(373.68 MB)
    Zetane-1.2.0.dmg(451.16 MB)
    Zetane-1.2.0.msi(438.49 MB)
  • 1.1.4 (Feb 22, 2021)

Owner
Zetane Systems