Visualizer for neural network, deep learning, and machine learning models

Overview

Netron is a viewer for neural network, deep learning and machine learning models.

Netron supports ONNX (.onnx, .pb, .pbtxt), Keras (.h5, .keras), TensorFlow Lite (.tflite), Caffe (.caffemodel, .prototxt), Darknet (.cfg), Core ML (.mlmodel), MNN (.mnn), MXNet (.model, -symbol.json), ncnn (.param), PaddlePaddle (.zip, __model__), Caffe2 (predict_net.pb), Barracuda (.nn), Tengine (.tmfile), TNN (.tnnproto), RKNN (.rknn), MindSpore Lite (.ms), UFF (.uff).

Netron has experimental support for TensorFlow (.pb, .meta, .pbtxt, .ckpt, .index), PyTorch (.pt, .pth), TorchScript (.pt, .pth), OpenVINO (.xml), Torch (.t7), Arm NN (.armnn), BigDL (.bigdl, .model), Chainer (.npz, .h5), CNTK (.model, .cntk), Deeplearning4j (.zip), MediaPipe (.pbtxt), ML.NET (.zip), scikit-learn (.pkl), TensorFlow.js (model.json, .pb).

Install

macOS: Download the .dmg file or run brew install netron

Linux: Download the .AppImage file or run snap install netron

Windows: Download the .exe installer or run winget install netron

Browser: Start the browser version.

Python Server: Run pip install netron and netron [FILE] or netron.start('[FILE]').
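
For example, a minimal sketch of using the Python server from a script (the file name model.onnx is illustrative):

    import netron

    # Serve a local model file and open the viewer in the browser.
    netron.start('model.onnx')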

Models

Sample model files to download or open using the browser version:

Comments
  • Windows app not closing properly

    After the latest update, Netron remains open, taking up memory and CPU, after I close the program. I must close it through Task Manager each time. I am on Windows 10.

    no repro 
    opened by idenc 22
  • TorchScript: ValueError: not enough values to unpack

    • Netron app and version: web app 5.5.9?
    • OS and browser version: Manjaro GNOME on Firefox 97.0.1

    Steps to Reproduce:

    1. use torch.broadcast_tensors
    2. export with torch.jit.trace(...).save()
    3. open in netron.app

    I have also gotten an Unsupported function 'torch.broadcast_tensors' error, but have been unable to reproduce it due to this current error. Most likely, the fix for the following repro will cover both bugs.

    Please attach or link model files to reproduce the issue if necessary.

    (screenshot of the error attached)

    Repro:

    import torch
    
    class Test(torch.nn.Module):
        def forward(self, a, b):
            a, b = torch.broadcast_tensors(a, b)
            assert a.shape == b.shape == (3, 5)
            return a + b
    
    torch.jit.trace(
        Test(),
        (torch.ones(3, 1), torch.ones(1, 5)),
    ).save("foobar.pt")
    

    Zipped foobar.pt: foobar.zip

    help wanted bug 
    opened by pbsds 15
  • OpenVINO support

    • [x] 1. Opening rm_lstm4f.xml results in TypeError (#192)
    • [x] 2. dot files are not opened any more - need to fix it (#192)
    • [x] 3. add preflight check for invalid xml and dot content
    • [x] 6. Add test files to ./test/models.json (#195) (#211)
    • [x] 9. Add support for the version 3 of IR (#196)
    • [x] 10. Category color support (#203)
    • [x] 11. -metadata.json for coloring, documentation and attribute default filtering (#203).
    • [x] 5. Filter attribute defaults based on -metadata.json to show fewer attributes in the graph
    • [ ] 7. Show weight tensors
    • [x] 8. Graph inputs and outputs should be exposed as Graph.inputs and Graph.outputs
    • [x] 12. Move to DOMParser
    • [x] 13. Remove dot support
    feature 
    opened by lutzroeder 15
  • RangeError: Maximum call stack size exceeded

    • Netron app and version: 4.4.8 App and Browser
    • OS and browser version: Windows 10 + Chrome Version 84.0.4147.135

    Steps to Reproduce:

    EfficientDet-d0.zip

    Please attach or link model files to reproduce the issue if necessary.

    help wanted no repro bug 
    opened by ryusaeba 14
  • Debugging Tensorflow Lite Model

    Hi there,

    First off, just wanted to say thanks for creating such a great tool - Netron is very useful.

    I'm having an issue that likely stems from TensorFlow rather than from Netron, but thought you might have some insights. In my flow, I use TF 1.15 to go from .ckpt --> frozen .pb --> .tflite. Normally it works reasonably smoothly, but a recent run shows an issue with the .tflite file: it is created without errors and it runs, but it performs poorly. Opening it in Netron shows that the activation functions (relu6 in this case) have been removed from every layer. Opening the equivalent .pb file in Netron shows the relu6 functions are present.
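
    For reference, a minimal sketch of the frozen .pb to .tflite step under TF 1.15 (the file and tensor names are placeholders for the actual model):

    import tensorflow as tf  # assumes TF 1.x

    # Convert a frozen GraphDef to TFLite; tensor names are placeholders.
    converter = tf.lite.TFLiteConverter.from_frozen_graph(
        graph_def_file='frozen_model.pb',
        input_arrays=['input'],
        output_arrays=['output'])
    tflite_model = converter.convert()
    with open('model.tflite', 'wb') as f:
        f.write(tflite_model)

    On the file format itself: .tflite files are FlatBuffers, so they can be inspected against the TensorFlow Lite FlatBuffers schema in the TensorFlow source tree, which may help with deeper debugging.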

    Have you seen any cases in which Netron struggled with a TF Lite model (perhaps it can open the file but doesn't display it correctly)? Also, how did you figure out the format of .tflite files (knowing this might allow me to debug more deeply)?

    Thanks in advance.

    no repro 
    opened by mm7721 12
  • add armnn serialized format support

    Here's a patch to support the Arm NN format (experimental).

    armnn-schema.js is compiled from ArmnnSchema.fbs, which is included in the Arm NN serializer.

    see also:

    armnn: https://github.com/ARM-software/armnn

    As mentioned in #363, I will check the items below:

    • [x] Add sample files to test/models.json and run node test/test.js armnn
    • [x] Add tools/armnn script with sync and schema steps to automate regenerating armnn-schema.js
    • [x] Add tools/armnn script to run as part of ./Makefile
    • [x] Run make lint
    opened by Tee0125 12
  • TorchScript: Argument names to match runtime

    Hi, I have a question about the node names in a .pt model saved with TorchScript. I use Netron to view my .pt model exported by torch.jit.save(), but the node names don't match the real names resolved through the TorchScript interface. It looks like the names in the .pt file are arranged numerically from smallest to largest, but this is clearly not the case when they are parsed through TorchScript's interface. I wonder how this situation can be resolved, thanks a lot! Looking forward to your reply.
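
    For comparison, a minimal sketch of listing the node names that the TorchScript interface itself reports (the file name model.pt is illustrative):

    import torch

    # Load the saved TorchScript module and print each graph node's kind
    # and the debug names of its outputs.
    module = torch.jit.load('model.pt')
    for node in module.graph.nodes():
        print(node.kind(), [value.debugName() for value in node.outputs()])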

    help wanted 
    opened by daodaoawaker 11
  • Support torch.fx IR visualization using netron

    torch.fx is a library in PyTorch 1.8 that allows Python-to-Python model transformations. It works by symbolically tracing the PyTorch model into a graph (fx.GraphModule), which can be transformed and finally exported back to code, or used directly as an nn.Module. Currently there is no mechanism to import this graph IR into Netron. An indirect path is to export to ONNX for visualization, which is not as useful when debugging transformations that potentially break ONNX exportability. It seems valuable to visualize the traced graph directly in Netron.
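
    For illustration, a minimal sketch of tracing a module into an fx.GraphModule and printing the graph IR (the module is a made-up example):

    import torch
    import torch.fx

    class AddMul(torch.nn.Module):
        def forward(self, x, y):
            return (x + y) * y

    # Symbolically trace the module into an fx.GraphModule.
    gm = torch.fx.symbolic_trace(AddMul())
    print(gm.graph)  # the graph IR that a Netron importer would need to read
    print(gm.code)   # the Python code regenerated from the graph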

    feature help wanted no repro 
    opened by sjain-stanford 11
  • TorchScript unsupported functions after update

    I have a lot of basic model files saved in TorchScript, and they could be opened a few weeks ago. However, I cannot open many of them after updating Netron to v3.9.1. Many common functions are no longer supported, e.g. torch.constant_pad_nd, torch.bmm, torch.avg_pool3d.

    opened by lujq96 11
  • OpenVINO IR v10 LSTM support

    • Netron app and version: 4.4.4
    • OS and browser version: Windows 10 64bit

    Steps to Reproduce:

    1. Open OpenVINO IR XML file in netron

    Please attach or link model files to reproduce the issue if necessary.

    I cannot share the proprietary model that shows dozens of disconnected nodes, but the one linked below does show disconnected subgraphs after conversion to OpenVINO IR. Note that the IR generated using the --generate_deprecated_IR_V7 option displays correctly.

    https://github.com/ARM-software/ML-KWS-for-MCU/blob/master/Pretrained_models/Basic_LSTM/Basic_LSTM_S.pb

    Convert using:

    python 'C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo.py' --input_model .\Basic_LSTM_S.pb --input=Reshape:0 --input_shape=[1,490] --output=Output-Layer/add

    This results in the following disconnected graph display:

    (screenshot of the disconnected graph display attached)

    no repro external bug 
    opened by mdeisher 10
  • Full support for scikit-learn (joblib)

    For recoverable estimator persistence, scikit-learn recommends using joblib (instead of pickle). Side note: it is possible to export trained models to ONNX or PMML, but those estimators are not recoverable. For more info refer to here.
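
    For example, a minimal sketch of joblib persistence with a recoverable estimator (the estimator and file name are illustrative):

    from joblib import dump, load
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    # Fit a small estimator and persist it with joblib.
    X, y = load_iris(return_X_y=True)
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    dump(clf, 'model.joblib')

    # Unlike an ONNX or PMML export, the loaded object is the estimator itself.
    restored = load('model.joblib')
    print(restored.predict(X[:2]))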

    bug 
    opened by fkromer 9
  • Export full size image

    I have an ONNX file successfully exported from mmsegmentation (Swin Transformer), a huge model (975.4 MB). I managed to open it in Netron; however, when I try to export it as an image and preview it at full size, it is blurred.

    Is there any way I can fix this? Thanks

    no repro bug 
    opened by adrianodac 0
  • TorchScript: torch.jit.mobile.serialization support

    Export PyTorch model to FlatBuffers file:

    import torch
    import torchvision
    model = torchvision.models.resnet34(weights=torchvision.models.ResNet34_Weights.DEFAULT)
    torch.jit.save_jit_module_to_flatbuffer(torch.jit.script(model), 'resnet34.ff')
    

    Sample files: scriptmodule.ff.zip squeezenet1_1_traced.ff.zip

    feature 
    opened by lutzroeder 0
  • MegEngine: fix some bugs

    Fix some bugs in MegEngine C++ model (.mge) visualization:

    1. Show the shapes of intermediate tensors.
    2. Fix matching of the model identifier (mgv2) when leading information precedes it.

    Please help review, thanks!

    opened by Ysllllll 0
  • TorchScript server

    import torch
    import torchvision
    import torch.utils.tensorboard
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn()
    script = torch.jit.script(model)
    script.save('fasterrcnn_resnet50_fpn.pt')
    with torch.utils.tensorboard.SummaryWriter('log') as writer:
        writer.add_graph(script, ())
    

    fasterrcnn_resnet50_fpn.pt.zip

    feature 
    opened by lutzroeder 0