edgetpu | Coral issue tracker (and legacy Edge TPU API source) | Computer Vision library

by google-coral | C++ | Version: frogfish | License: Apache-2.0

kandi X-RAY | edgetpu Summary

edgetpu is a C++ library typically used in Artificial Intelligence, Computer Vision applications. edgetpu has no bugs, it has no vulnerabilities, it has a Permissive License and it has low support. You can download it from GitHub.

Coral issue tracker (and legacy Edge TPU API source)

Support

edgetpu has a low-activity ecosystem.
              It has 363 star(s) with 116 fork(s). There are 35 watchers for this library.
              It had no major release in the last 6 months.
There are 103 open issues and 656 have been closed. On average, issues are closed in 45 days. There are 5 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
The latest version of edgetpu is frogfish.

Quality

              edgetpu has no bugs reported.

Security

              edgetpu has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

License

              edgetpu is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

              edgetpu releases are not available. You will need to build from source code and install.
              Installation instructions are not available. Examples and code snippets are available.

            Top functions reviewed by kandi - BETA

            kandi's functional review helps you automatically verify the functionalities of the libraries and avoid rework.
Currently covering the most popular Java, JavaScript and Python libraries.

            edgetpu Key Features

            No Key Features are available at this moment for edgetpu.

            edgetpu Examples and Code Snippets

Load a delegate from a library.
Python · 45 lines of code · License: Non-SPDX (Apache License 2.0)
def load_delegate(library, options=None):
  """Returns loaded Delegate object.

  Example usage:

  ```
  import tensorflow as tf

  try:
    delegate = tf.lite.experimental.load_delegate('delegate.so')
  except ValueError:
    # Fallback to CPU
    delegate = None
  ```
  """
  # (Remainder of the 45-line implementation is truncated in this snippet.)

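For Coral devices specifically, a minimal usage sketch with the tflite_runtime package follows; the model path is a placeholder, and the delegate library name is the usual one on Linux (macOS uses libedgetpu.1.dylib, Windows uses edgetpu.dll):

import tflite_runtime.interpreter as tflite

# Load the Edge TPU delegate and attach it to an Edge-TPU-compiled model.
delegate = tflite.load_delegate('libedgetpu.so.1')
interpreter = tflite.Interpreter(
    model_path='model_edgetpu.tflite',  # placeholder: an *_edgetpu.tflite model
    experimental_delegates=[delegate])
interpreter.allocate_tensors()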

            Community Discussions

            QUESTION

            How to use GPU in Docker to retrain an object detection model?
            Asked 2022-Mar-13 at 03:25

I've been following this tutorial from Google Coral on retraining an object detection model in Docker, and it explicitly states that this is for CPU training only, which is very slow.

Is there an easy way to port this Docker container to utilize the GPU (an Nvidia GTX 1080)? I have installed nvidia-docker2 and successfully gotten my GPU passed into other containers, and as far as I know also this one, using the --gpus all flag. The nvidia-smi command works from within my container, so I am almost certain that my GPU has been passed through successfully; however, it is not used when training the model.

            CUDA version is 11.4 according to nvidia-smi, both inside and outside of the container, and I am using Ubuntu 20.04.

            ...

            ANSWER

            Answered 2022-Feb-15 at 20:58

You may try the solution provided here with Docker (https://github.com/google-coral/tutorials/issues/5#issuecomment-821860067), or use the GPU-based Colab tutorial from the google-coral tutorials (https://github.com/google-coral/tutorials) to retrain an object detection model.
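One quick sanity check, assuming TensorFlow is installed inside the container, is whether TensorFlow itself (not just nvidia-smi) can see the GPU:

import tensorflow as tf

# If this prints an empty list, TensorFlow cannot use the GPU even though
# nvidia-smi works, which usually points to a CUDA/cuDNN/TensorFlow mismatch
# inside the container image.
print(tf.config.list_physical_devices('GPU'))
print(tf.test.is_built_with_cuda())  # False means this TF build has no GPU support at all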

            Source https://stackoverflow.com/questions/70747752

            QUESTION

            Output tensor from tflite interpreter is squeezed
            Asked 2022-Feb-04 at 14:57

I'm trying to get a YOLOv5s model to run on a Coral Edge TPU. I've followed the instructions in the YOLOv5 repository for conversion from the yolov5s.pt model to the yolov5s-int8_edgetpu.tflite model.

The pycoral repository provides a detect_image.py script; after cloning it and using their model, the script executes with no errors.

            If I run the same script with my yolov5s-int8_edgetpu.tflite model I get this error:

            ...

            ANSWER

            Answered 2022-Feb-04 at 14:57

            Since the Yolov5s model has a different input file than the EfficientDet, the output tensor will be different. The trick here is understanding how to process this output tensor.

            Fortunately, Ultralytics/Yolov5 held an export competition where the goal was to execute Yolov5 models on EdgeTPU devices.

This guy Josh won the Coral Dev Board section. He wrote a Python library to process these wonky tensor outputs from YOLOv5s models. Here is the repo. The real processing of the output tensor is done in his non-max-suppression code.

            I've forked his repo and added the ability to execute/process these Yolov5s models on desktops.

            Thanks so much Josh!
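For reference, a generic non-max-suppression sketch in plain NumPy (not Josh's library; it assumes boxes in [x1, y1, x2, y2] format with per-box confidence scores):

import numpy as np

def non_max_suppression(boxes, scores, iou_threshold=0.45, conf_threshold=0.25):
    """Keep the highest-scoring boxes, dropping overlapping lower-scoring ones."""
    keep_mask = scores >= conf_threshold
    boxes, scores = boxes[keep_mask], scores[keep_mask]
    order = scores.argsort()[::-1]        # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the highest-scoring box with the remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_threshold]  # keep only boxes with low overlap
    return boxes[keep], scores[keep]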

            Source https://stackoverflow.com/questions/70947940

            QUESTION

            "IndexError: index 10 is out of bounds for axis 0 with size 10" when running inference on Coral Dev Board
            Asked 2021-Dec-28 at 21:28

            I'm trying to run a quantized and Edge-TPU-compiled Tensorflow object detection model on a Coral Dev Board.

            My Code:

            ...

            ANSWER

            Answered 2021-Dec-28 at 21:28

It seems to be a bug in the PyCoral API. To solve the issue, I replaced the last line of the "detect.py" file (in my case located in "/usr/lib/python3/dist-packages/pycoral/adapters/detect.py") with this updated line:

            return [make(i) for i in range(len(scores)) if scores[i] >= score_threshold]
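For context, the patched line sits inside PyCoral's detection adapter; a minimal usage sketch that exercises it (model and image paths are placeholders) looks roughly like this:

from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter
from PIL import Image

# Placeholder paths: an Edge-TPU-compiled model and a test image.
interpreter = make_interpreter('model_edgetpu.tflite')
interpreter.allocate_tensors()

image = Image.open('test.jpg').convert('RGB').resize(common.input_size(interpreter), Image.LANCZOS)
common.set_input(interpreter, image)
interpreter.invoke()

# This call runs adapters/detect.py, where the patched list comprehension lives.
objs = detect.get_objects(interpreter, score_threshold=0.5)
for obj in objs:
    print(obj.id, obj.score, obj.bbox)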

            Source https://stackoverflow.com/questions/70511953

            QUESTION

            Specific bounding box color
            Asked 2021-Nov-13 at 00:45

Can someone help me modify this existing code to use a different color for each kind of bounding box I want to detect? For example: if a person is detected, the bounding box should be red; if an animal or pet is detected, it should be green; and any other object should be blue. I've been exploring for a week with no luck modifying it; if anyone can explain or help, it would be much appreciated. Thanks!

            ...

            ANSWER

            Answered 2021-Nov-11 at 05:45

Basically what you want to do is make a dict where the key is the class and the value is a color in the same format used here:

            cv2.rectangle(image, (xmin,ymin), (xmax,ymax), (10, 255, 0), 2)

            Replace (10, 255, 0) with something like color_dict[classes[i]] and then you will be able to get a different color for each class.
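A minimal sketch of that idea (the class ids and BGR colors below are placeholders for whatever label map the detection script uses):

import cv2

# Hypothetical class-id -> BGR color map; adjust the ids to match your label file.
COLOR_DICT = {0: (0, 0, 255), 17: (0, 255, 0)}   # e.g. person -> red, dog -> green
DEFAULT_COLOR = (255, 0, 0)                       # anything else -> blue

def draw_box(image, class_id, xmin, ymin, xmax, ymax):
    """Draw one bounding box, colored by class, with a fallback for unknown classes."""
    color = COLOR_DICT.get(class_id, DEFAULT_COLOR)
    cv2.rectangle(image, (xmin, ymin), (xmax, ymax), color, 2)
    return image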

            Source https://stackoverflow.com/questions/69914551

            QUESTION

            Image classification using tensorflow lite without Google Coral USB
            Asked 2021-Jul-16 at 09:20

I am trying to evaluate Raspberry Pi performance with and without a Google Coral Edge TPU USB device for an image classification task on a video file. I have managed to evaluate the performance using the Edge TPU USB device already. However, when I try running TensorFlow Lite code to run inference, it gives me an error that tells me I need to plug in the device:

            ...

            ANSWER

            Answered 2021-Jul-16 at 09:20

I recently came across this for a thesis supervision. We tested face detection on a Raspberry Pi 4 with the Coral USB accelerator and without it (inference on the Pi's CPU). Are you using the same model file for both? If so, that is the problem. You need to use the bare tflite model for CPU inference and the TPU-compiled model for inference with the TPU. You can take a look at this repo, where you can find the code I mentioned before (it's not well documented but it's working; look at the inference CPU and inference CORAL files).
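A minimal sketch of the two setups with tflite_runtime (model paths are placeholders; the key point is that the CPU path uses the plain .tflite file with no delegate, while the TPU path uses the *_edgetpu.tflite file plus the libedgetpu delegate):

from tflite_runtime.interpreter import Interpreter, load_delegate

# CPU-only inference: plain (non-Edge-TPU-compiled) model, no delegate.
cpu_interpreter = Interpreter(model_path='model.tflite')
cpu_interpreter.allocate_tensors()

# Edge TPU inference: Edge-TPU-compiled model plus the libedgetpu delegate.
tpu_interpreter = Interpreter(
    model_path='model_edgetpu.tflite',
    experimental_delegates=[load_delegate('libedgetpu.so.1')])
tpu_interpreter.allocate_tensors()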

            Source https://stackoverflow.com/questions/68402726

            QUESTION

            Converting SSD object detection model to TFLite and quantize it from float to uint8 for EdgeTPU
            Asked 2021-May-21 at 10:11

I am having problems converting an SSD object detection model into a uint8 TFLite model for the EdgeTPU.

I have been searching different forums, Stack Overflow threads, and GitHub issues, and as far as I know I am following the right steps. Something must be wrong in my Jupyter notebook, since I can't achieve my goal.

I am sharing my steps, explained in a Jupyter notebook; I think it will be clearer.

            ...

            ANSWER

            Answered 2021-May-04 at 08:17

The process, as @JaesungChung answered, is correct.

My problem was in the application that was running the .tflite model. I quantized my model output to uint8, so I had to rescale the obtained values to get the right results.

For example, I had 10 objects because I was requesting all detected objects with a score above 0.5. My results were not scaled, so a detected object's score could well be 104. I had to rescale that number by dividing by 255.

The same happened when graphing my results, so I had to divide those numbers by 255 and multiply by the height and width.
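A small sketch of that rescaling (the raw values and image size below are made-up examples):

import numpy as np

raw_scores = np.array([104, 230, 37], dtype=np.uint8)   # quantized uint8 scores
scores = raw_scores / 255.0                             # 104 -> ~0.41, back in [0, 1]

raw_boxes = np.array([[64, 32, 192, 160]], dtype=np.uint8)  # [ymin, xmin, ymax, xmax], quantized
height, width = 480, 640                                    # original image size (placeholder)
boxes = raw_boxes / 255.0 * np.array([height, width, height, width])  # back to pixel coordinates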

            Source https://stackoverflow.com/questions/67330342

            QUESTION

            tflite_runtime get Illegal instruction on raspberry pi
            Asked 2021-Apr-03 at 23:18

After installing tflite_runtime on a Raspberry Pi using the following commands,

            ...

            ANSWER

            Answered 2021-Apr-03 at 23:18

The prebuilt tflite_runtime packages from the above site do not cover the armv6 architecture yet.

            Alternatively, you can choose some other options.

            (1) Install the TensorFlow pip package.

TensorFlow Lite features are part of the TensorFlow package, and the prebuilt TensorFlow pip packages support armv6. See https://www.tensorflow.org/install/pip

            (2) Build your own tflite_runtime through Bazel or CMake.

If you only need tflite_runtime, it is possible to build it yourself. The following document describes the differences between Bazel and CMake and how to build tflite_runtime with them:

            https://www.tensorflow.org/lite/guide/build_arm
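A quick way to confirm you are on armv6 (and therefore outside the coverage of the prebuilt wheels) is to check the machine architecture from Python:

import platform

# Raspberry Pi Zero / Zero W report 'armv6l'; Pi 3/4 on a 32-bit OS report 'armv7l'.
print(platform.machine())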

            Source https://stackoverflow.com/questions/66932204

            QUESTION

            CUSTOM : Operation is working on an unsupported data type EDGETPU
            Asked 2021-Mar-30 at 06:00

I am trying to retrain a custom object detector model for the Coral USB accelerator, following the Coral AI tutorial at this link: https://coral.ai/docs/edgetpu/retrain-detection/#requirements

After retraining the ssd_mobilenet_v2 model, I converted it with the Edge TPU Compiler. The compiler results are:

Operator             Count  Status
CUSTOM               1      Operation is working on an unsupported data type
ADD                  10     Mapped to Edge TPU
LOGISTIC             1      Mapped to Edge TPU
CONCATENATION        2      Mapped to Edge TPU
RESHAPE              13     Mapped to Edge TPU
CONV_2D              55     Mapped to Edge TPU
DEPTHWISE_CONV_2D    17     Mapped to Edge TPU

And visualized it with Netron:

            "Custom" operator not mapped. All operations are mapped and worked on tpu but "custom" is working on cpu. I saw same operator in ssd_mobilenet_v1

How can I convert all operators to Edge TPU? What is the custom operator? (You can find the supported operators here: https://coral.ai/docs/edgetpu/models-intro/#supported-operations)

            ...

            ANSWER

            Answered 2021-Mar-30 at 06:00

This is the correct output for an SSD model. The TFLite_Detection_PostProcess is the custom op that is not run on the EdgeTPU. If you run Netron on one of our default SSD models from https://coral.ai/models/, you'll see the PostProcess runs on the CPU in that case as well.

In the case of your model, every part of the model has been successfully converted. The last stage (which takes the model output and converts it to various usable outputs) is a custom implementation in TFLite that is already optimized for speed but is generic compute, not TFLite ops that the EdgeTPU accelerates.

            Source https://stackoverflow.com/questions/66653597

            QUESTION

            Coral Edge TPU USB Accelerator driver fails to install
            Asked 2021-Jan-21 at 21:35

            I'm trying to install and test my Coral Edge TPU. I'm following the instructions here: https://coral.ai/docs/accelerator/get-started/

The first step is to install drivers from the Coral website, but I'm getting the following errors. I've tried running with and without admin privileges, and uninstalling and installing again, but I get the same errors.

            Has anyone else run into this issue? I'm on Windows 10.

            ...

            ANSWER

            Answered 2021-Jan-18 at 17:04

This is a bug in the Coral software. According to this thread (https://github.com/google-coral/edgetpu/issues/260), they messed up a few things in the newest version (at the time, 2.5.0). Starting with a fresh virtual environment and using release 2.1.0 and the corresponding driver for Python 3.7 (3.8 and 3.9 are not supported as of 2.1.0) fixed the issue.

            From that thread:

            For now, I suggest rolling back to the older driver: https://dl.google.com/coral/edgetpu_api/edgetpu_runtime_20200728.zip

And you also need to remove your current tflite_runtime and downgrade it to an older version (make sure to choose the wheel for the right Python version):

pip3 install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp36-cp36m-win_amd64.whl
pip3 install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-win_amd64.whl

Apologies, we are working to get this fixed ASAP.

            Source https://stackoverflow.com/questions/65605537

            QUESTION

            Cannot get USB to work on Raspberry-Pi Zero with Coral Edge TPU
            Asked 2020-Dec-15 at 12:17

I have cross-compiled libedgetpu for the Raspberry Pi Zero and I can run a minimal C++ program. However, it does not detect any TPUs. (The Coral TPU is connected to the Pi Zero via a USB port.)

            ...

            ANSWER

            Answered 2020-Dec-15 at 12:17

I figured out the answer, and it was pretty simple: the library has to be built as a shared library.

            Source https://stackoverflow.com/questions/65252662

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install edgetpu

            You can download it from GitHub.

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check for and ask them on Stack Overflow.
            CLONE
          • HTTPS

            https://github.com/google-coral/edgetpu.git

          • CLI

            gh repo clone google-coral/edgetpu

• SSH

            git@github.com:google-coral/edgetpu.git
