
netron | neural network, deep learning | Machine Learning library

 by   lutzroeder JavaScript Version: v5.7.1 License: MIT


kandi X-RAY | netron Summary

netron is a JavaScript library typically used in Artificial Intelligence, Machine Learning, Deep Learning, PyTorch, and TensorFlow applications. netron has no vulnerabilities, has a Permissive License, and has medium support; however, it has 13 bugs. You can install it using 'npm i graphview' or download it from GitHub or npm.
Netron is a viewer for neural network, deep learning and machine learning models. Netron supports ONNX, TensorFlow Lite, Caffe, Keras, Darknet, PaddlePaddle, ncnn, MNN, Core ML, RKNN, MXNet, MindSpore Lite, TNN, Barracuda, Tengine, CNTK, TensorFlow.js, Caffe2 and UFF. Netron has experimental support for PyTorch, TensorFlow, TorchScript, OpenVINO, Torch, Vitis AI, Arm NN, BigDL, Chainer, Deeplearning4j, MediaPipe, ML.NET and scikit-learn.

Support

  • netron has a medium active ecosystem.
  • It has 18,318 stars and 2,126 forks. There are 263 watchers for this library.
  • There were 2 major releases in the last 12 months.
  • There are 17 open issues and 767 have been closed. On average, issues are closed in 9 days. There is 1 open pull request and 0 closed requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of netron is v5.7.1.

Quality

  • netron has 13 bugs (2 blocker, 0 critical, 5 major, 6 minor) and 49 code smells.

Security

  • Neither netron nor its dependent libraries have any reported vulnerabilities.
  • netron code analysis shows 0 unresolved vulnerabilities.
  • There are 3 security hotspots that need review.

License

  • netron is licensed under the MIT License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • netron releases are available to install and integrate.
  • A deployable package is available on npm.
  • Installation instructions are available. Examples and code snippets are not available.
Top functions reviewed by kandi - BETA

kandi has reviewed netron and discovered the below as its top functions. This is intended to give you an instant insight into the functionality netron implements, and to help you decide if it suits your requirements.

  • Merge a single entry
    • Handle a writable entry


      netron Key Features

      Visualizer for neural network, deep learning, and machine learning models

      netron Examples and Code Snippets


ONNX model checker fails while ONNX runtime works fine when `tf.function` is used to decorate member function with loop

tf.function(
    input_signature=[
        tf.TensorSpec(shape=[None, None], dtype=tf.int32),
        tf.TensorSpec(shape=[None, None], dtype=tf.float32),
        tf.TensorSpec(shape=[], dtype=tf.float32),
    ])
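For context, a minimal sketch of decorating a member function with this signature so it traces with fixed dtypes and shapes; the class and method names (TextModel, score) and the toy loop body are illustrative, not from the original question:

import tensorflow as tf

class TextModel(tf.Module):
    @tf.function(input_signature=[
        tf.TensorSpec(shape=[None, None], dtype=tf.int32),
        tf.TensorSpec(shape=[None, None], dtype=tf.float32),
        tf.TensorSpec(shape=[], dtype=tf.float32),
    ])
    def score(self, token_ids, features, scale):
        # Toy body with a loop, mirroring the "member function with loop" setup.
        total = tf.zeros_like(features)
        for _ in tf.range(2):
            total += tf.cast(token_ids, tf.float32) * features * scale
        return total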
      

How to get TFLite model output in C++?

memcpy(input, img.data, 32*32*sizeof(float));

input = inputImg.ptr<float>(0);

float* output = interpreter->typed_output_tensor<float>(0);
      

      How to create a TensorFloat for a shape that has unknown component?

      int64_t batch_size = 1;
      std::vector<int64_t> shape({ batch_size, 224, 224, 3 }); // Note: this doesn't compile since the first component is a string!
      binding.Bind(L"Image:0", TensorFloat::Create(shape));
      

      This model is not supported: Input tensor 0 does not have a name

      input_meta.name = "image_input"
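For context, the line above comes from the TFLite metadata workflow; a hedged sketch (assuming the tflite_support package, with the surrounding metadata-writer code omitted) looks roughly like this:

from tflite_support import metadata_schema_py_generated as _metadata_fb

# Name the first input tensor so tools that require named tensors accept the model.
input_meta = _metadata_fb.TensorMetadataT()
input_meta.name = "image_input"
# The populated metadata must then be packed back into the .tflite file,
# e.g. with tflite_support.metadata.MetadataPopulator (not shown here).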
      

Process output data from YOLOv5 TFLite

def classFilter(classdata):
    classes = []  # create a list
    for i in range(classdata.shape[0]):         # loop through all predictions
        classes.append(classdata[i].argmax())   # get the best classification location
    return classes  # return classes (int)

def YOLOdetect(output_data):  # input = interpreter output data; returns boxes(xyxy), classes, scores
    output_data = output_data[0]                # x(1, 25200, 7) to x(25200, 7)
    boxes = np.squeeze(output_data[..., :4])    # boxes  [25200, 4]
    scores = np.squeeze(output_data[..., 4:5])  # confidences  [25200, 1]
    classes = classFilter(output_data[..., 5:]) # get classes
    # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
    x, y, w, h = boxes[..., 0], boxes[..., 1], boxes[..., 2], boxes[..., 3]  # xywh
    xyxy = [x - w / 2, y - h / 2, x + w / 2, y + h / 2]  # xywh to xyxy   [4, 25200]

    return xyxy, classes, scores  # output is boxes(x,y,x,y), classes(int), scores(float) [predictions length]

"""Output data"""
output_data = interpreter.get_tensor(output_details[0]['index'])  # get tensor  x(1, 25200, 7)
xyxy, classes, scores = YOLOdetect(output_data)  # boxes(x,y,x,y), classes(int), scores(float) [25200]

for i in range(len(scores)):
    if (scores[i] > 0.1) and (scores[i] <= 1.0):
        H = frame.shape[0]
        W = frame.shape[1]
        xmin = int(max(1, (xyxy[0][i] * W)))
        ymin = int(max(1, (xyxy[1][i] * H)))
        xmax = int(min(W, (xyxy[2][i] * W)))  # clamp x to image width (W), not height
        ymax = int(min(H, (xyxy[3][i] * H)))  # clamp y to image height (H), not width

        cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (10, 255, 0), 2)
        ...
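A hedged note on the names used above: `interpreter`, `output_details`, and `frame` are assumed to come from standard TFLite and OpenCV setup, roughly as follows (the model and image paths are illustrative):

import cv2
import numpy as np
import tensorflow as tf

# Standard TFLite interpreter setup (see the fuller example later on this page).
interpreter = tf.lite.Interpreter(model_path="yolov5.tflite")
interpreter.allocate_tensors()
output_details = interpreter.get_output_details()
frame = cv2.imread("image.jpg")  # illustrative input frame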
      

      Kernel and Recurrent Kernel in Keras LSTMs

1 * 4 * units = kernel

units * (4 * units) = recurrent kernel
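To make the formulas concrete, a small sketch that prints the weight shapes of a Keras LSTM (the layer sizes here are arbitrary):

import tensorflow as tf

units = 8
layer = tf.keras.layers.LSTM(units)
layer.build(input_shape=(None, 10, 1))  # input_dim = 1, matching "1 * 4 * units"
kernel, recurrent_kernel, bias = layer.get_weights()
print(kernel.shape)            # (1, 32)  == input_dim x (4 * units)
print(recurrent_kernel.shape)  # (8, 32)  == units x (4 * units)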
      

How to know input/output layer names and sizes for a PyTorch model?

import torch
import torch.onnx

model = torch.load('model_final.pth')
model.eval()
print('Finished loading model!')
print(model)
device = torch.device("cpu" if args.cpu else "cuda")  # `args` is assumed to come from the caller's argument parser
model = model.to(device)

# ------------------------ export -----------------------------
output_onnx = 'super_resolution.onnx'
print("==> Exporting model to ONNX format at '{}'".format(output_onnx))
input_names = ["input0"]
output_names = ["output0", "output1"]
inputs = torch.randn(1, 3, 1080, 1920).to(device)

torch.onnx.export(model, inputs, output_onnx, export_params=True, verbose=False,
                  input_names=input_names, output_names=output_names)
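Once the file exists, the input/output names and sizes can also be read back from the ONNX graph itself; a brief sketch assuming the `onnx` package and the `super_resolution.onnx` file produced above:

import onnx

m = onnx.load('super_resolution.onnx')
for inp in m.graph.input:
    # Each dimension is either a symbolic name (dim_param) or a fixed size (dim_value).
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    print('input:', inp.name, dims)
for out in m.graph.output:
    print('output:', out.name)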
      

How to pass an image to a TFLite model in Android

ImageProcessor imageProcessor =
        new ImageProcessor.Builder()
            .add(new ResizeWithCropOrPadOp(cropSize, cropSize))
            .add(new ResizeOp(imageSizeX, imageSizeY, ResizeMethod.NEAREST_NEIGHBOR))
            .add(new Rot90Op(numRotation))
            .add(getPreprocessNormalizeOp())
            .build();
return imageProcessor.process(inputImageBuffer);

tflite.run(inputImageBuffer.getBuffer(), outputProbabilityBuffer.getBuffer().rewind());
      

      Failed to run the tflite model on Interpreter due to Internal Error

      import numpy as np
      import tensorflow as tf
      
      # Load the TFLite model and allocate tensors.
      interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
      interpreter.allocate_tensors()
      
      # Get input and output tensors.
      input_details = interpreter.get_input_details()
      output_details = interpreter.get_output_details()
      
      # Test the model on random input data.
      input_shape = input_details[0]['shape']
      input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
      interpreter.set_tensor(input_details[0]['index'], input_data)
      
      interpreter.invoke()
      
      # The function `get_tensor()` returns a copy of the tensor data.
      # Use `tensor()` in order to get a pointer to the tensor.
      output_data = interpreter.get_tensor(output_details[0]['index'])
      print(output_data)
      

How to get weights in TFLite using the C++ API?

      TfLiteContext* context; // You would usually have access to this already.
      TfLiteNode* node;       // <obtain this from the graph>;
      
      for (int i = 0; i < node->inputs->size; ++i) {
        TfLiteTensor* input_tensor = GetInput(context, node, i);
      
        // Determine if this is a weight tensor.
        // Usually the weights will be memory-mapped read-only tensor
        // directly baked in the TFLite model (flatbuffer).
        if (input_tensor->allocation_type == kTfLiteMmapRo) {
          // Read the values from input_tensor, based on its type.
          // For example, if you have float weights,
          const float* weights = GetTensorData<float>(input_tensor);
      
          // <read the weight values...>
        }
      }
      


      Community Discussions

      Trending Discussions on netron
• PyTorch to ONNX export, ATen operators not supported, onnxruntime hangs out
• ONNX model checker fails while ONNX runtime works fine when `tf.function` is used to decorate member function with loop
• Why is it that when viewing the architecture in Netron, the normalization layer that goes right after the convolutional layer is not shown?
• How to get TFLite model output in C++?
• How to create a TensorFloat for a shape that has unknown component?
• This model is not supported: Input tensor 0 does not have a name
• CUSTOM : Operation is working on an unsupported data type EDGETPU
• Tensorflow Quantization - Failed to parse the model: pybind11::init(): factory function returned nullptr
• Finding dynamic tensors in a tflite model
• While running netron on colab, getting this "OSError: [Errno 98] Address already in use" error

      QUESTION

      PyTorch to ONNX export, ATen operators not supported, onnxruntime hangs out

      Asked 2022-Mar-03 at 14:05

I want to export a roberta-base based language model to ONNX format. The model uses RoBERTa embeddings and performs a text classification task.

import os
from torch import nn
import torch.onnx
import onnx
import onnxruntime
import torch
import transformers
      

From the logs:

      17: pytorch: 1.10.2+cu113
      18: CUDA: False
      21: device: cpu
      26: onnxruntime: 1.10.0
      27: onnx: 1.11.0
      

      PyTorch export

      batch_size = 3
      model_input = {
          'input_ids': torch.empty(batch_size, 256, dtype=torch.int).random_(32000),
          'attention_mask': torch.empty(batch_size, 256, dtype=torch.int).random_(2),
          'seq_len':  torch.empty(batch_size, 1, dtype=torch.int).random_(256)
      }
      model_file_path = os.path.join("checkpoints", 'model.onnx')
      
torch.onnx.export(da_inference.model,      # model being run
                  model_input,             # model input (or a tuple for multiple inputs)
                  model_file_path,         # where to save the model (can be a file or file-like object)
                  export_params=True,      # store the trained parameter weights inside the model file
                  opset_version=11,        # the ONNX version to export the model to
                  operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
                  do_constant_folding=True,  # whether to execute constant folding for optimization
                  input_names=['input_ids', 'attention_mask', 'seq_len'],  # the model's input names
                  output_names=['output'],   # the model's output names
                  dynamic_axes={'input_ids': {0: 'batch_size'},
                                'attention_mask': {0: 'batch_size'},
                                'seq_len': {0: 'batch_size'},
                                'output': {0: 'batch_size'}},
                  verbose=True)
      

I know there may be problems converting some operators from ATen (a Tensor Library for C++11) if they are included in the model architecture (see "PyTorch Model Export to ONNX Failed Due to ATen").

Export succeeds if I set the parameter operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK, which means 'leave ATen operators as-is if they are not supported in ONNX'.

The PyTorch export function gives me the following warnings:

      Warning: Unsupported operator ATen. No schema registered for this operator.
      Warning: Shape inference does not support models with experimental operators: ATen
      
      

It looks like the only ATen operators in the model that are not converted to ONNX are the ones involving the LayerNorm.weight and LayerNorm.bias parameters (I have several layers like that):

       %1266 : Float(3, 256, 768, strides=[196608, 768, 1], requires_grad=0, device=cpu) = 
      onnx::ATen[cudnn_enable=1, eps=1.0000000000000001e-05, normalized_shape=[768], operator="layer_norm"]
      (%1265, %model.utterance_rnn.base.encoder.layer.11.output.LayerNorm.weight,
       %model.utterance_rnn.base.encoder.layer.11.output.LayerNorm.bias)
      # /opt/conda/lib/python3.9/site-packages/torch/nn/functional.py:2347:0
      

Then the model check passes OK:

      model = onnx.load(model_file_path)
      # Check that the model is well formed
      onnx.checker.check_model(model)
      # Print a human readable representation of the graph
      print(onnx.helper.printable_graph(model.graph))
      

I can also visualize the computation graph using Netron.

The ATen node which presumably causes the problem is in the center of the screenshot of the computation graph.

But when I try to perform inference using the exported ONNX model, it stalls with no logs or stdout. So this code will hang the system:

import os
from typing import List

import onnxruntime
from onnxruntime import InferenceSession

model_file_path = os.path.join("checkpoints", "model.onnx")
sess_options = onnxruntime.SessionOptions()
sess_options.log_severity_level = 0
ort_providers: List[str] = ["CUDAExecutionProvider"] if use_gpu else ['CPUExecutionProvider']  # use_gpu is set elsewhere
session = InferenceSession(model_file_path, providers=ort_providers, sess_options=sess_options)
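For completeness, a hedged sketch of the inference call that then stalls for the asker, with random inputs shaped like the export example above (the feed names match the exported input_names):

import numpy as np

outputs = session.run(None, {
    'input_ids': np.random.randint(0, 32000, (3, 256)).astype(np.int32),
    'attention_mask': np.random.randint(0, 2, (3, 256)).astype(np.int32),
    'seq_len': np.random.randint(0, 256, (3, 1)).astype(np.int32),
})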
      

Are there any suggestions to overcome this problem? From the official documentation I see that torch.onnx models exported this way are probably runnable only by Caffe2.

These layers are not inside the frozen base RoBERTa model; they are additional layers that I added myself. Is it possible to substitute the offending layers with similar ones and retrain the model?

Or is Caffe2 the best choice here, and onnxruntime will not do the inference?

Update: I retrained the model on the basis of BERT cased embeddings, but the problem persists. The same ATen operators are not converted to ONNX. It looks like the LayerNorm.weight and LayerNorm.bias parameters exist only in the layers I added on top of BERT. So, what are your suggestions for changing these layers and enabling ONNX export?

      ANSWER

      Answered 2022-Mar-01 at 20:25

Have you tried to export after defining the operator for ONNX? Something along the lines of the following code by Huawei.

On another note, when loading a model you can technically override anything you want. Setting a specific layer to an instance of a modified class that inherits from the original keeps the same behavior (inputs and outputs), while its execution can be changed. You can try to use this to save the model with the problematic operators changed, convert it to ONNX, and fine-tune it in that form (or even in PyTorch).
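A minimal sketch of that override idea, assuming the offending ops are the nn.LayerNorm layers mentioned in the question; the class and helper names here (ExportableLayerNorm, swap_layernorms) are illustrative, not from the answer:

import torch
from torch import nn

class ExportableLayerNorm(nn.LayerNorm):
    # Same inputs/outputs as nn.LayerNorm, but computed from primitive ops
    # so the ONNX exporter does not have to fall back to an ATen node.
    def forward(self, x):
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, keepdim=True, unbiased=False)
        return (x - mean) / torch.sqrt(var + self.eps) * self.weight + self.bias

def swap_layernorms(module):
    # Recursively replace every nn.LayerNorm with the export-friendly subclass,
    # copying the trained weight and bias so behavior is unchanged.
    for name, child in module.named_children():
        if isinstance(child, nn.LayerNorm):
            ln = ExportableLayerNorm(child.normalized_shape, eps=child.eps)
            ln.load_state_dict(child.state_dict())
            setattr(module, name, ln)
        else:
            swap_layernorms(child)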

This generally seems best solved by the ONNX team, so a long-term solution might be to post a request for that specific operator on the GitHub issues page (but that will probably be slow).

      Source https://stackoverflow.com/questions/71220867

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

      Vulnerabilities

      No vulnerabilities reported

      Install netron

• macOS: Download the .dmg file or run brew install netron.
• Linux: Download the .AppImage file or run snap install netron.
• Windows: Download the .exe installer or run winget install -s winget netron.
• Browser: Start the browser version.
• Python Server: Run pip install netron and netron [FILE] or netron.start('[FILE]').
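As a quick check after installing the Python package, a minimal sketch that serves a local model file in the viewer (the file name is illustrative):

import netron

# Serves the viewer for model.onnx and opens it in the default browser.
netron.start('model.onnx')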

      Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check for and ask questions on the community page Stack Overflow.

© 2022 Open Weaver Inc.