Visualizer for neural network, deep learning, and machine learning models

Overview

Netron is a viewer for neural network, deep learning, and machine learning models.

Netron supports ONNX (.onnx, .pb, .pbtxt), Keras (.h5, .keras), TensorFlow Lite (.tflite), Caffe (.caffemodel, .prototxt), Darknet (.cfg), Core ML (.mlmodel), MNN (.mnn), MXNet (.model, -symbol.json), ncnn (.param), PaddlePaddle (.zip, __model__), Caffe2 (predict_net.pb), Barracuda (.nn), Tengine (.tmfile), TNN (.tnnproto), RKNN (.rknn), MindSpore Lite (.ms), UFF (.uff).

Netron has experimental support for TensorFlow (.pb, .meta, .pbtxt, .ckpt, .index), PyTorch (.pt, .pth), TorchScript (.pt, .pth), OpenVINO (.xml), Torch (.t7), Arm NN (.armnn), BigDL (.bigdl, .model), Chainer (.npz, .h5), CNTK (.model, .cntk), Deeplearning4j (.zip), MediaPipe (.pbtxt), ML.NET (.zip), scikit-learn (.pkl), TensorFlow.js (model.json, .pb).

Install

macOS: Download the .dmg file or run brew install netron

Linux: Download the .AppImage file or run snap install netron

Windows: Download the .exe installer or run winget install netron

Browser: Start the browser version.

Python Server: Run pip install netron and netron [FILE] or netron.start('[FILE]').
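
For example, the Python package can be used from a script as well as from the command line; a minimal sketch, assuming a local model file named model.onnx:

    import netron

    # serves the viewer for the given file and opens it in the default browser
    netron.start('model.onnx')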

Models

Sample model files to download or open using the browser version:

Comments
  • Windows app not closing properly

    Windows app not closing properly

    After the latest update, Netron remains open, consuming memory and CPU, after I close the program. I must close it through Task Manager each time. I am on Windows 10.

    no repro 
    opened by idenc 22
  • TorchScript: ValueError: not enough values to unpack

    TorchScript: ValueError: not enough values to unpack

    • Netron app and version: web app 5.5.9?
    • OS and browser version: Manjaro GNOME on firefox 97.0.1

    Steps to Reproduce:

    1. use torch.broadcast_tensors
    2. export with torch.trace(...).save()
    3. open in netron.app

    I have also gotten an Unsupported function 'torch.broadcast_tensors' error, but have been unable to reproduce it because of this current error. Most likely, the fix for the following repro will cover both bugs.

    Please attach or link model files to reproduce the issue if necessary.

    image

    Repro:

    import torch
    
    class Test(torch.nn.Module):
        def forward(self, a, b):
            a, b = torch.broadcast_tensors(a, b)
            assert a.shape == b.shape == (3, 5)
            return a + b
    
    torch.jit.trace(
        Test(),
        (torch.ones(3, 1), torch.ones(1, 5)),
    ).save("foobar.pt")
    

    Zipped foobar.pt: foobar.zip

    help wanted bug 
    opened by pbsds 15
  • OpenVINO support

    OpenVINO support

    • [x] 1. Opening rm_lstm4f.xml results in TypeError (#192)
    • [x] 2. dot files are not opened any more - need to fix it (#192)
    • [x] 3. add preflight check for invalid xml and dot content
    • [x] 6. Add test files to ./test/models.json (#195) (#211)
    • [x] 9. Add support for the version 3 of IR (#196)
    • [x] 10. Category color support (#203)
    • [x] 11. -metadata.json for coloring, documentation and attribute default filtering (#203).
    • [x] 5. Filter attribute defaults based on -metadata.json to show fewer attributes in the graph
    • [ ] 7. Show weight tensors
    • [x] 8. Graph inputs and outputs should be exposed as Graph.inputs and Graph.outputs
    • [x] 12. Move to DOMParser
    • [x] 13. Remove dot support
    feature 
    opened by lutzroeder 15
  • RangeError: Maximum call stack size exceeded

    RangeError: Maximum call stack size exceeded

    • Netron app and version: 4.4.8 App and Browser
    • OS and browser version: Windows 10 + Chrome Version 84.0.4147.135

    Steps to Reproduce:

    EfficientDet-d0.zip

    Please attach or link model files to reproduce the issue if necessary.

    help wanted no repro bug 
    opened by ryusaeba 14
  • Debugging Tensorflow Lite Model

    Debugging Tensorflow Lite Model

    Hi there,

    First off, just wanted to say thanks for creating such a great tool - Netron is very useful.

    I'm having an issue that likely stems from Tensorflow, rather than from Netron, but thought you might have some insights. In my flow, I use TF 1.15 to go from .ckpt --> frozen .pb --> .tflite. Normally it works reasonably smoothly, but a recent run shows an issue with the .tflite file: it is created without errors, it runs, but it performs poorly. Opening it with Netron shows that the activation functions (relu6 in this case) have been removed for every layer. Opening the equivalent .pb file in Netron shows the relu6 functions are present.
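
    For reference, a minimal sketch of the frozen .pb to .tflite step under TF 1.15; the file name and the input/output tensor names here are hypothetical placeholders, and the actual flow described above may differ:

    # minimal sketch of the frozen .pb -> .tflite conversion step (tensorflow==1.15)
    # 'frozen.pb', 'input' and 'output' are hypothetical placeholder names
    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_frozen_graph(
        graph_def_file='frozen.pb',
        input_arrays=['input'],
        output_arrays=['output'],
    )
    tflite_model = converter.convert()
    with open('model.tflite', 'wb') as f:
        f.write(tflite_model)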

    Have you seen any cases in which Netron struggled with a TF Lite model (perhaps it can open, but isn't displaying correctly)? Also, how did you figure out the format for .tflite files (perhaps knowing this would allow me to debug it more deeply)?

    Thanks in advance.

    no repro 
    opened by mm7721 12
  • add armnn serialized format support

    add armnn serialized format support

    Here's a patch to support the Arm NN serialized format (experimental).

    armnn-schema.js is compiled from ArmnnSchema.fbs, which is included in the Arm NN serializer.

    see also:

    armnn: https://github.com/ARM-software/armnn

    As mentioned in #363, I will check the items below:

    • [x] Add sample files to test/models.json and run node test/test.js armnn
    • [x] Add tools/armnn script and sync, schema to automate regenerating armnn-schema.js
    • [x] Add tools/armnn script to run as part of ./Makefile
    • [x] Run make lint
    opened by Tee0125 12
  • TorchScript: Argument names to match runtime

    TorchScript: Argument names to match runtime

    Hi, I have some questions about the node names in a .pt model saved by TorchScript. I use Netron to view my .pt model exported with torch.jit.save(), but the node names do not match the real names resolved through the TorchScript interface. It looks like the names in the .pt file are arranged numerically from smallest to largest, but this is clearly not the case when they are parsed through TorchScript's interface. How can this be resolved? Thanks a lot! Looking forward to your reply.
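
    One way to compare the two is to print the TorchScript graph directly and look at the value names a viewer has to work with; a minimal sketch, assuming a hypothetical saved module model.pt:

    # minimal sketch: inspect the node and value names stored in a TorchScript file
    # 'model.pt' is a hypothetical placeholder path
    import torch

    module = torch.jit.load('model.pt')
    print(module.graph)  # the IR, including the %value names shown by viewers
    for node in module.graph.nodes():
        print(node.kind(), [output.debugName() for output in node.outputs()])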

    help wanted 
    opened by daodaoawaker 11
  • Support torch.fx IR visualization using netron

    Support torch.fx IR visualization using netron

    torch.fx is a library in PyTorch 1.8 that allows Python-to-Python model transformations. It works by symbolically tracing the PyTorch model into a graph (fx.GraphModule), which can be transformed and finally exported back to code, or used as an nn.Module directly. Currently there is no mechanism to import this graph IR into Netron. An indirect path is to export to ONNX for visualization, which is not as useful when debugging transformations that potentially break ONNX exportability. It seems valuable to visualize the traced graph directly in Netron.
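
    For context, a minimal sketch of the tracing step that produces the fx graph IR in question; the module here is a hypothetical example:

    # minimal sketch: symbolically trace a module into a torch.fx GraphModule,
    # the IR that this feature request asks Netron to visualize
    import torch
    import torch.fx

    class Example(torch.nn.Module):
        def forward(self, x):
            return torch.relu(x) + 1

    traced = torch.fx.symbolic_trace(Example())
    print(traced.graph)  # the graph IR (nodes, targets, arguments)
    print(traced.code)   # the Python code regenerated from the graph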

    feature help wanted no repro 
    opened by sjain-stanford 11
  • TorchScript unsupported functions after update

    TorchScript unsupported functions after update

    I have a lot of basic model files saved in TorchScript, and I was able to open them a few weeks ago. However, I cannot open many of them after updating Netron to v3.9.1. Many common functions are now unsupported, e.g. torch.constant_pad_nd, torch.bmm, torch.avg_pool3d.

    opened by lujq96 11
  • OpenVINO IR v10 LSTM support

    OpenVINO IR v10 LSTM support

    • Netron app and version: 4.4.4
    • OS and browser version: Windows 10 64bit

    Steps to Reproduce:

    1. Open OpenVINO IR XML file in netron

    Please attach or link model files to reproduce the issue if necessary.

    I cannot share the proprietary model that shows dozens of disconnected nodes, but the one linked below does show disconnected subgraphs after conversion to OpenVINO IR. Note that the IR generated using the --generate_deprecated_IR_V7 option displays correctly.

    https://github.com/ARM-software/ML-KWS-for-MCU/blob/master/Pretrained_models/Basic_LSTM/Basic_LSTM_S.pb

    Convert using:

    python 'C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo.py' --input_model .\Basic_LSTM_S.pb --input=Reshape:0 --input_shape=[1,490] --output=Output-Layer/add

    This results in the following disconnected graph display:

    image

    no repro external bug 
    opened by mdeisher 10
  • Full support for scikit-learn (joblib)

    Full support for scikit-learn (joblib)

    For recoverable estimator persistence, scikit-learn recommends using joblib (instead of pickle). Side note: it is possible to export trained models to ONNX or PMML, but the estimators are not recoverable from those formats. For more information, refer to the scikit-learn documentation on model persistence.
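
    For reference, a minimal sketch of the joblib round trip that produces such files, using a hypothetical estimator and file name:

    # minimal sketch: persist and recover a scikit-learn estimator with joblib
    # the estimator choice and 'model.joblib' are hypothetical examples
    import joblib
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    joblib.dump(clf, 'model.joblib')        # the kind of file Netron would need to read
    restored = joblib.load('model.joblib')  # fully recoverable estimator
    print(restored.predict(X[:5]))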

    bug 
    opened by fkromer 9
  • Export full size image

    Export full size image

    I have an ONNX file successfully exported from mmsegmentation (Swin Transformer), a huge model (975.4 MB). I managed to open it in Netron; however, when I try to export it and preview it at full size, it is blurred.

    Is there any way I can fix this? Thanks.

    no repro bug 
    opened by adrianodac 0
  • TorchScript: torch.jit.mobile.serialization support

    TorchScript: torch.jit.mobile.serialization support

    Export PyTorch model to FlatBuffers file:

    import torch
    import torchvision
    model = torchvision.models.resnet34(weights=torchvision.models.ResNet34_Weights.DEFAULT)
    torch.jit.save_jit_module_to_flatbuffer(torch.jit.script(model), 'resnet34.ff')
    

    Sample files: scriptmodule.ff.zip squeezenet1_1_traced.ff.zip

    feature 
    opened by lutzroeder 0
  • MegEngine: fix some bugs

    MegEngine: fix some bugs

    Fixes some bugs in MegEngine C++ model (.mge) visualization:

    1. show the shapes of intermediate tensors;
    2. fix scope matching of the model identifier (mgv2) to account for possible leading information;

    Please help review, thanks!

    opened by Ysllllll 0
  • TorchScript server

    TorchScript server

    import torch
    import torchvision
    import torch.utils.tensorboard
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn()
    script = torch.jit.script(model)
    script.save('fasterrcnn_resnet50_fpn.pt')
    with torch.utils.tensorboard.SummaryWriter('log') as writer:
        writer.add_graph(script, ())
    

    fasterrcnn_resnet50_fpn.pt.zip

    feature 
    opened by lutzroeder 0