netron | neural network, deep learning | Machine Learning library

 by lutzroeder | JavaScript | Version: 7.5.0 | License: MIT

kandi X-RAY | netron Summary

netron is a JavaScript library typically used in Artificial Intelligence, Machine Learning, Deep Learning, PyTorch, and TensorFlow applications. netron has no vulnerabilities, has a permissive license, and has medium support. However, netron has 13 bugs. You can install it using 'npm i netron' or download it from GitHub or npm.

Netron is a viewer for neural network, deep learning and machine learning models. Netron supports ONNX, TensorFlow Lite, Caffe, Keras, Darknet, PaddlePaddle, ncnn, MNN, Core ML, RKNN, MXNet, MindSpore Lite, TNN, Barracuda, Tengine, CNTK, TensorFlow.js, Caffe2 and UFF. Netron has experimental support for PyTorch, TensorFlow, TorchScript, OpenVINO, Torch, Vitis AI, Arm NN, BigDL, Chainer, Deeplearning4j, MediaPipe, ML.NET and scikit-learn.

            kandi-support Support

              netron has a medium active ecosystem.
              It has 23143 star(s) with 2489 fork(s). There are 290 watchers for this library.
              There were 10 major release(s) in the last 6 months.
              There are 26 open issues and 914 have been closed. On average issues are closed in 101 days. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of netron is 7.5.0.

            kandi-Quality Quality

              netron has 13 bugs (2 blocker, 0 critical, 5 major, 6 minor) and 49 code smells.

            kandi-Security Security

              netron has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              netron code analysis shows 0 unresolved vulnerabilities.
              There are 3 security hotspots that need review.

            kandi-License License

              netron is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              netron releases are available to install and integrate.
              Deployable package is available in npm.
              Installation instructions are available. Examples and code snippets are not available.

            Top functions reviewed by kandi - BETA

            kandi has reviewed netron and discovered the below as its top functions. This is intended to give you an instant insight into the functionality netron implements, and to help you decide if it suits your requirements.
            • Merge a single entry
            • Handle a writable entry

            netron Key Features

            No Key Features are available at this moment for netron.

            netron Examples and Code Snippets

            How to create graph in fragment?
            Java · 52 lines of code · License: Strong Copyleft (CC BY-SA 4.0)
            public View onCreateView(LayoutInflater inflater, ViewGroup container,
                                 Bundle savedInstanceState) 
                View view = inflater.inflate(, container, false);
                GraphView graph = (GraphView) view.findViewByI
            How to add data from firebase to graphview dynamically?
            Java · 17 lines of code · License: Strong Copyleft (CC BY-SA 4.0)
            new DataPoint(pointValue, pointValue1), true, 200
            GraphView graph = findViewById(;
            LineGraphSeries series = new LineGraphSeries<>(new DataPoint[]{
                new DataPoint(0, 1),
                new DataPoint(1, 5),
            Change text color in Graphview in android
            Java · 55 lines of code · License: Strong Copyleft (CC BY-SA 4.0)

            Community Discussions


            PyTorch to ONNX export, ATen operators not supported, onnxruntime hangs out
            Asked 2022-Mar-03 at 14:05

            I want to export a roberta-base based language model to the ONNX format. The model uses RoBERTa embeddings and performs a text classification task.



            Answered 2022-Mar-01 at 20:25

            Have you tried to export after defining the operator for onnx? Something along the lines of the following code by Huawei.

            On another note, when loading a model you can technically override anything you want: set a specific layer to an instance of a modified class that inherits from the original, so its behavior (inputs and outputs) stays the same but its execution changes. You can use this to save the model with the problematic operators replaced, convert it to ONNX, and fine-tune it in that form (or even in PyTorch).

            This generally seems best solved by the ONNX team, so a long-term solution might be to post a request for that specific operator on the GitHub issues page (though that will probably be slow).
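            The linked snippet is not reproduced above, but the general pattern with torch.onnx looks roughly like the following sketch. Note that `aten::my_op` and `com.example::MyOp` are placeholder names for whichever operator actually fails in your export:

```python
import torch
from torch.onnx import register_custom_op_symbolic

def my_op_symbolic(g, input):
    # Emit a single node in a custom ONNX domain; whatever runtime loads
    # the exported file must then provide an implementation for it.
    return g.op("com.example::MyOp", input)

# Register the symbolic function so torch.onnx.export can translate the op.
register_custom_op_symbolic("aten::my_op", my_op_symbolic, opset_version=13)
```

            After registration, torch.onnx.export will emit the custom-domain node instead of failing on the unsupported ATen operator.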



            ONNX model checker fails while ONNX runtime works fine when `tf.function` is used to decorate member function with loop
            Asked 2021-Dec-28 at 21:20

            When a TensorFlow model contains a tf.function-decorated function with a for loop in it, the tf->onnx conversion yields warnings:



            Answered 2021-Dec-28 at 21:20

            The problem is in the way you specified the shape of accumm_var.

            In the input signature you have tf.TensorSpec(shape=None, dtype=tf.float32). Reading the code, I see that you are passing a scalar tensor. A scalar tensor is a 0-dimensional tensor, so you should use shape=[] instead of shape=None.

            It runs here without warnings after annotating extra_function with:
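            The annotation itself is not shown above; a minimal sketch of the fix (the loop body here is illustrative) would be:

```python
import tensorflow as tf

# shape=[] declares a 0-D (scalar) tensor; shape=None would leave the rank
# unknown, which is what trips up the tf->onnx converter.
@tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.float32)])
def extra_function(accumm_var):
    # Illustrative body: a loop that accumulates into the scalar.
    for _ in tf.range(3):
        accumm_var += 1.0
    return accumm_var

print(float(extra_function(tf.constant(0.0))))  # 3.0
```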



            Why is it that when viewing the architecture in Netron, the normalization layer that goes right after the convolutional layer is not shown?
            Asked 2021-Dec-08 at 10:15

            I am testing some changes on a convolutional neural network architecture. I tried adding a BatchNorm layer right after the conv layer and then an activation layer. Then I swapped the activation layer with the BatchNorm layer.



            Answered 2021-Dec-08 at 10:15

            The fact that you cannot see Batch Norm when it follows a convolution operation has to do with Batch Norm folding: a convolution followed by Batch Norm can be replaced with a single convolution that has different weights.

            I assume that for Netron visualization you first convert to the ONNX format.

            In PyTorch, Batch Norm Folding optimization is performed by default by torch.onnx.export, where eval mode is default one. You can disable this behavior by converting to ONNX in train mode:

            torch.onnx.export(..., training=TrainingMode.TRAINING,...)

            Then you should be able to see all Batch Norm layers in your graph.
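            The folding can also be checked numerically. Here is a small sketch, using a per-channel 1×1 "convolution" (i.e. y = w*x + b) so the algebra stays short:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5)

w, b = 1.5, 0.2                  # conv weight and bias
gamma, beta = 2.0, -0.1          # BN scale and shift
mean, var, eps = 0.3, 0.8, 1e-5  # BN running statistics (eval mode)

# Convolution followed by Batch Norm
y_bn = gamma * ((w * x + b) - mean) / np.sqrt(var + eps) + beta

# Folded convolution: one conv with rescaled weight and bias
scale = gamma / np.sqrt(var + eps)
w_f, b_f = w * scale, (b - mean) * scale + beta
y_folded = w_f * x + b_f

assert np.allclose(y_bn, y_folded)  # identical outputs, single conv node
```

            This is why the exported eval-mode graph shows one convolution node where the source model had a conv plus a BatchNorm.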



            How to get Tflite model output in c++?
            Asked 2021-Nov-28 at 17:53

            I have a tflite model for mask detection with a sigmoid layer that outputs values between 0[mask] and 1[no_mask]

            I examined the input and output node using netron and here's what I got:

            I tested the model for inference in python and it works great.



            Answered 2021-Nov-28 at 17:53

            The code now works fine with these changes:



            How to create a TensorFloat for a shape that has unknown component?
            Asked 2021-Jun-07 at 16:08

            I have followed this example to bind input and output to a ONNX model.



            Answered 2021-Jun-07 at 16:08

            When creating a tensor that will be used in conjunction with a model input feature defined with free dimensions (i.e., "unk_518"), you need to specify the actual concrete dimension of the tensor.

            In your case it looks like you are using SqueezeNet. The first parameter of SqueezeNet corresponds to the batch dimension of the input, and so refers to the number of images you wish to bind and run inference on.

            Replace the "unk_518" with the batch size that you wish to run inference on:



            This model is not supported: Input tensor 0 does not have a name
            Asked 2021-Apr-12 at 21:43

            On Android Studio, I'm not able to view the model metadata, even though I had added the metadata in Python manually. I get the error:

            This model is not supported: input tensor 0 does not have a name.

            My attempts at fixing:

            I added the layer name to the tensorflow input layer, using:



            Answered 2021-Apr-12 at 21:43

            I fixed it by copying one line from the documentation: I needed to set the name property on the TensorMetadataT object to "image". This does not come from the model layer name; it needs to be added manually.



            CUSTOM : Operation is working on an unsupported data type EDGETPU
            Asked 2021-Mar-30 at 06:00

            I am trying to retrain a custom object detector model for the Coral USB Accelerator, following the Coral AI tutorials from this link;

            After retraining the ssd_mobilenet_v2 model, I converted it to an Edge TPU model with the Edge TPU compiler. The compiler results are:

            Operator            Count   Status
            CUSTOM              1       Operation is working on an unsupported data type
            ADD                 10      Mapped to Edge TPU
            LOGISTIC            1       Mapped to Edge TPU
            CONCATENATION       2       Mapped to Edge TPU
            RESHAPE             13      Mapped to Edge TPU
            CONV_2D             55      Mapped to Edge TPU
            DEPTHWISE_CONV_2D   17      Mapped to Edge TPU

            And visualized in netron;

            The "CUSTOM" operator is not mapped. All other operations are mapped and run on the TPU, but "CUSTOM" runs on the CPU. I saw the same operator in ssd_mobilenet_v1.

            How can I convert all operators to Edge TPU models? What is the custom operator? (You can find the supported operators here.)



            Answered 2021-Mar-30 at 06:00

            This is the correct output for an SSD model. The TFLite_Detection_PostProcess is the custom op that is not run on the Edge TPU. If you run netron on one of our default SSD models, you'll see that the PostProcess runs on the CPU in that case as well.

            In the case of your model, every part of the model has been successfully converted. The last stage (which takes the model output and converts it to various usable outputs) is a custom implementation in TFLite that is already optimized for speed, but it is generic compute, not TFLite ops that the Edge TPU accelerates.



            Tensorflow Quantization - Failed to parse the model: pybind11::init(): factory function returned nullptr
            Asked 2021-Mar-25 at 15:45

            I'm working on a TensorFlow model to be deployed on an embedded system. For this purpose, I need to quantize the model to int8. The model is composed of three distinct models:

            1. CNN as a feature extractor
            2. TCN for temporal prediction
            3. FC/Dense as the last classifier.

            I implemented the TCN starting from this post with some modifications. In essence, the TCN is just a set of 1D convolutions (with some 0-padding) plus an add operation.



            Answered 2021-Mar-25 at 15:45

            As suggested by @JaesungChung, the problem seems to be solved using tf-nightly (I tested on 2.5.0-dev20210325).

            It's possible to obtain the same effect in 2.4.0 using a workaround: transform the Conv1D into a Conv2D with a height of 1 and a flat (1, kernel_size) kernel.
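            A minimal Keras sketch of that workaround (the layer sizes below are arbitrary): a dummy height axis of 1 is added, the Conv2D uses a flat (1, k) kernel, and the dummy axis is removed again:

```python
import tensorflow as tf

k, filters = 3, 8
inp = tf.keras.Input(shape=(100, 16))             # (time, channels)
x = tf.keras.layers.Reshape((1, 100, 16))(inp)    # add a height-1 axis
x = tf.keras.layers.Conv2D(filters, (1, k), padding="same")(x)
out = tf.keras.layers.Reshape((100, filters))(x)  # drop the dummy axis
model = tf.keras.Model(inp, out)
print(model.output_shape)  # (None, 100, 8)
```

            The (1, k) kernel slides only along the time axis, so the computation matches a Conv1D with kernel_size=k while using only the Conv2D op that the quantizer supports.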



            Finding dynamic tensors in a tflite model
            Asked 2021-Mar-18 at 01:24

            I am currently experiencing the following error when loading a tflite model using the C API:

            ERROR: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors.

            The tflite model can be found here. It is a tflite conversion of the LEAF model.

            The input and output tensors seem to have static sizes upon inspection. I have inspected the model with Netron and cannot find any dynamic tensors, though I may have overlooked something. Is there a way to see which tensors specifically are causing the issue with their dynamic sizes?



            Answered 2021-Mar-18 at 01:24

            Even though there are no dynamic-size tensors in the graph, the graph above has a control flow op, the While op. Currently, graphs with control flow ops are treated as dynamic graphs, and such graphs are not supported by the hardware-acceleration delegates, which only allow static graph structures.
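            To answer the original question of locating the offending tensors: get_tensor_details() exposes a shape_signature field in which dynamic dimensions are marked with -1. A sketch, built around a tiny in-memory model so it is self-contained (substitute your own file via model_path=...):

```python
import tensorflow as tf

# Tiny stand-in model converted to TFLite in memory.
inp = tf.keras.Input(shape=(4,))
model = tf.keras.Model(inp, tf.keras.layers.Dense(2)(inp))
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
dynamic = [t["name"] for t in interpreter.get_tensor_details()
           if -1 in t.get("shape_signature", t["shape"])]
print(dynamic)  # names of tensors with at least one dynamic dimension
```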



            while running netron on colab, getting this "OSError: [Errno 98] Address already in use" error
            Asked 2021-Feb-13 at 04:37

            I'm using Netron for visualizing the model on Colab, as shown in line 11 of this notebook. When I run the following script to view the model,



            Answered 2021-Feb-13 at 04:37

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network


            No vulnerabilities reported

            Install netron

          • macOS: Download the .dmg file or run brew install netron.
          • Linux: Download the .AppImage file or run snap install netron.
          • Windows: Download the .exe installer or run winget install -s winget netron.
          • Browser: Start the browser version.
          • Python server: Run pip install netron, then netron [FILE] or netron.start('[FILE]').


            For any new features, suggestions, or bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            Find more information at:

          • PyPI

            pip install netron

          • CLONE
          • CLI

            gh repo clone lutzroeder/netron

