gluon-cv | Installation | Documentation | Tutorials | Machine Learning library

by dmlc | Python Version: v0.10.0 | License: Apache-2.0

kandi X-RAY | gluon-cv Summary

gluon-cv is a Python library typically used in Telecommunications, Media, Entertainment, Artificial Intelligence, Machine Learning, Deep Learning, PyTorch, and TensorFlow applications. gluon-cv has no bugs, no vulnerabilities, a build file available, a Permissive License, and medium support. You can install it with 'pip install gluoncv' or download it from GitHub or PyPI.

GluonCV provides implementations of state-of-the-art (SOTA) deep learning models in computer vision; see its Installation, Documentation, and Tutorials pages for details.
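As a quick illustration of that model-zoo API, here is a minimal sketch (the model name 'ssd_512_resnet50_v1_voc' and the image path are illustrative placeholders, not a prescribed choice) that loads a pretrained detector and runs it on one image:

# Minimal sketch: load a pretrained SSD detector from the GluonCV model zoo and
# run it on a single image. Model name and image path are placeholders.
from gluoncv import model_zoo, data, utils

net = model_zoo.get_model('ssd_512_resnet50_v1_voc', pretrained=True)

# load_test resizes/normalizes the image and also returns the resized image for plotting
x, img = data.transforms.presets.ssd.load_test('example.jpg', short=512)

class_ids, scores, bboxes = net(x)
utils.viz.plot_bbox(img, bboxes[0], scores[0], class_ids[0], class_names=net.classes)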

            kandi-support Support

              gluon-cv has a medium active ecosystem.
It has 5552 stars, 1199 forks, and 154 watchers.
              It had no major release in the last 12 months.
There are 42 open issues and 778 closed issues; on average, issues are closed in 93 days. There are 18 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
The latest version of gluon-cv is v0.10.0.

            kandi-Quality Quality

              gluon-cv has 0 bugs and 0 code smells.

            kandi-Security Security

Neither gluon-cv nor its dependent libraries have any reported vulnerabilities.
              gluon-cv code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              gluon-cv is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              gluon-cv releases are available to install and integrate.
A deployable package is available on PyPI.
A build file is available, so you can build the component from source.
              Installation instructions, examples and code snippets are available.
              gluon-cv saves you 52114 person hours of effort in developing the same functionality from scratch.
              It has 64838 lines of code, 3352 functions and 548 files.
It has high code complexity, which directly impacts maintainability.

            Top functions reviewed by kandi - BETA

kandi has reviewed gluon-cv and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality gluon-cv implements, and to help you decide if it suits your requirements.
• Train the Gluon network.
• Load a COCO dataset.
• Export the given block to TVM (see the export sketch after this list).
• Compute the direct loss for each class.
• Train image classification.
• Overlay instances.
• Convert a dataset dict to COCO format.
• Autocomplete.
• Fit the model.
• Export a block of data.
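Several of these entry points are exposed under gluoncv.utils. As one illustration, a minimal sketch (the model name and output prefix are placeholders; this is an assumed usage, not kandi-verified code) of exporting a model-zoo network with gluoncv.utils.export_block:

# Minimal sketch: export a pretrained classifier to MXNet symbol/params files.
# 'resnet18_v1' and the output prefix are illustrative placeholders.
from gluoncv import model_zoo
from gluoncv.utils import export_block

net = model_zoo.get_model('resnet18_v1', pretrained=True)

# Writes resnet18_v1-symbol.json and resnet18_v1-0000.params; preprocess=True
# prepends a default normalization block so the exported model accepts raw HWC uint8 images.
export_block('resnet18_v1', net, preprocess=True, layout='HWC')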

            gluon-cv Key Features

            No Key Features are available at this moment for gluon-cv.

            gluon-cv Examples and Code Snippets

            gluon-cv - train mask rcnn coco
Python · Lines of Code: 156 · License: Non-SPDX (Apache License 2.0)
            """2. Train Mask RCNN end-to-end on MS COCO
            ===========================================
            
            This tutorial goes through the steps for training a Mask R-CNN [He17]_ instance segmentation model
            provided by GluonCV.
            
            Mask R-CNN is an extension to the Faster  
            gluon-cv - train faster rcnn voc
Python · Lines of Code: 124 · License: Non-SPDX (Apache License 2.0)
            """06. Train Faster-RCNN end-to-end on PASCAL VOC
            ================================================
            
            This tutorial goes through the basic steps of training a Faster-RCNN [Ren15]_ object detection model
            provided by GluonCV.
            
            Specifically, we show how t  
            gluon-cv - train ssd voc
Python · Lines of Code: 109 · License: Non-SPDX (Apache License 2.0)
            """04. Train SSD on Pascal VOC dataset
            ======================================
            
            This tutorial goes through the basic building blocks of object detection
            provided by GluonCV.
            Specifically, we show how to build a state-of-the-art Single Shot Multibox
            De  
def find_class_idx(label):
    """
    Should return the class index of a particular label.

    :param label: label of class
    :type label: str

    :return: class index
    :rtype: int
    """
    return network.classes.index(label)
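The helper assumes network is a GluonCV model whose classes attribute is the list of class-name strings; a hypothetical usage (the model name is an assumption) would be:

# Hypothetical usage: `network` is a model-zoo detector whose .classes holds the label strings,
# e.g. network = model_zoo.get_model('faster_rcnn_resnet50_v1b_voc', pretrained=True)
person_idx = find_class_idx('person')   # index of 'person' within network.classes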
            
Get Class Label in Faster-RCNN with gluoncv
Python · Lines of Code: 10 · License: Strong Copyleft (CC BY-SA 4.0)
# map class ID to class names
id2string = {i: name for i, name in enumerate(net.classes)}

# filter on score
thresh = 0.8
top_classIDs = [c for c, s in zip(box_ids[0], scores[0]) if s > thresh]

# convert IDs to class names into "label1"
label1 = [id2string[int(c.asscalar())] for c in top_classIDs]
            Python virtualenv setuptools package issue
Python · Lines of Code: 2 · License: Strong Copyleft (CC BY-SA 4.0)
            pip install --upgrade 'setuptools<45.0.0'
            
            GluonCV - Use GPU for inference in object detection
Python · Lines of Code: 11 · License: Strong Copyleft (CC BY-SA 4.0)
rgb_nd = rgb_nd.as_in_context(ctx)
class_IDs, scores, bounding_boxes = net(rgb_nd)

    class_IDs, scores, bounding_boxes = net(rgb_nd)
    if isinstance(class_IDs, mx.ndarray.ndarray.NDArray):
        class_IDs.wait_to_read()
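For context, a fuller sketch of the usual GPU pattern (the model name, image path, and GPU index are assumptions): create the network directly in a GPU context, move the input there too, and wait for the asynchronous computation to finish.

# Minimal sketch: run a GluonCV detector on GPU 0, falling back to CPU if none is available.
# Model name and image path are illustrative placeholders.
import mxnet as mx
from gluoncv import model_zoo, data

ctx = mx.gpu(0) if mx.context.num_gpus() > 0 else mx.cpu()

net = model_zoo.get_model('yolo3_darknet53_coco', pretrained=True, ctx=ctx)

x, img = data.transforms.presets.yolo.load_test('example.jpg', short=512)
x = x.as_in_context(ctx)                       # input must live on the same device as the weights
class_IDs, scores, bounding_boxes = net(x)
class_IDs.wait_to_read()                       # block until the asynchronous GPU work completes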
            Openpose on low resolution images?
Python · Lines of Code: 25 · License: Strong Copyleft (CC BY-SA 4.0)
            from matplotlib import pyplot as plt
            from gluoncv import model_zoo, data, utils
            from gluoncv.data.transforms.pose import detector_to_alpha_pose, heatmap_to_coord_alpha_pose
            
detector = model_zoo.get_model('yolo3_mobilenet1.0_coco', pretrained=True)
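The snippet above is cut off by the site; for context, a minimal sketch of how the GluonCV pose tutorial typically continues from that detector (the AlphaPose model name and image path are assumptions, not the original answer):

# Minimal sketch continuing the detector above (uses model_zoo/data imported earlier).
# 'alpha_pose_resnet101_v1b_coco' and 'example.jpg' are illustrative placeholders.
detector.reset_class(["person"], reuse_weights=["person"])   # keep only person detections
pose_net = model_zoo.get_model('alpha_pose_resnet101_v1b_coco', pretrained=True)

x, img = data.transforms.presets.yolo.load_test('example.jpg', short=512)
class_ids, scores, bboxes = detector(x)

# Crop and normalize each detected person for the pose network, then decode
# the predicted heatmaps back to keypoint coordinates in the original image.
pose_input, upscale_bbox = detector_to_alpha_pose(img, class_ids, scores, bboxes)
predicted_heatmap = pose_net(pose_input)
pred_coords, confidence = heatmap_to_coord_alpha_pose(predicted_heatmap, upscale_bbox)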
            Reduce the size of fasterRCNN array output, using Gluon, python
Python · Lines of Code: 2 · License: Strong Copyleft (CC BY-SA 4.0)
            net.set_nms(nms_thresh=0.5, nms_topk=50)
            
            How can I use the gluon-cv model_zoo and output to an OpenCV window with Python?
Python · Lines of Code: 41 · License: Strong Copyleft (CC BY-SA 4.0)
            #!/usr/bin/python3
            # 2019/01/24 09:05
            # 2019/01/24 10:25
            
            import gluoncv as gcv
            import mxnet as mx
            import cv2
            import numpy as np
            # https://github.com/pjreddie/darknet/blob/master/data/dog.jpg
            
            ## (1) Create network 
            net = gcv.model_zoo.get
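The answer is truncated after the network is created; for context, a minimal sketch of one way to finish the pipeline with plain cv2 drawing (the model name, score threshold, and window name are assumptions, not the original answer):

# Minimal sketch: detect objects in one image and draw the boxes with OpenCV.
# Model name, image path, and threshold are illustrative placeholders.
net = gcv.model_zoo.get_model('yolo3_darknet53_coco', pretrained=True)

frame = cv2.imread('dog.jpg')                                  # BGR image from disk
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
x, img = gcv.data.transforms.presets.yolo.transform_test(mx.nd.array(rgb), short=512)

class_ids, scores, bboxes = net(x)

thresh = 0.5
for cid, score, bbox in zip(class_ids[0], scores[0], bboxes[0]):
    if score.asscalar() < thresh:
        continue                                               # skip low-score and padded entries
    xmin, ymin, xmax, ymax = [int(v) for v in bbox.asnumpy()]
    cv2.rectangle(img, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)
    cv2.putText(img, net.classes[int(cid.asscalar())], (xmin, ymin - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

cv2.imshow('gluoncv', cv2.cvtColor(img, cv2.COLOR_RGB2BGR))
cv2.waitKey(0)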

            Community Discussions

            QUESTION

            GluonCV inference with finetuned model - “Please make sure source and target networks have the same prefix” error
            Asked 2020-Aug-24 at 13:46

            I used GluonCV to finetune an object detection model in order to recognize some custom classes, mostly following the related tutorial.

            I tried using both “ssd_512_resnet50_v1_coco” and “ssd_512_mobilenet1.0_coco” as base models, and the training process ended successfully (the accuracy on the validation dataset is reasonably high).

            The problem is, I tried running inference with the newly trained model, by using for example:

            ...

            ANSWER

            Answered 2020-Aug-24 at 13:46

            Ok, fixed it. Basically, during training I was saving the .params file by using:

            Source https://stackoverflow.com/questions/63468051
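The answer above is truncated; for context, here is a minimal sketch of a save/load pattern that keeps parameter prefixes consistent (the custom model name, transfer source, and class list are assumptions; this illustrates a commonly recommended pattern, not necessarily the poster's exact change):

# Minimal sketch of a prefix-consistent save/load cycle for a finetuned detector.
# Model name, transfer source, and classes are illustrative assumptions.
from gluoncv import model_zoo

classes = ['my_class_a', 'my_class_b']

# Training side: build the custom-class network, finetune, then save only the parameters.
net = model_zoo.get_model('ssd_512_mobilenet1.0_custom', classes=classes,
                          pretrained_base=False, transfer='coco')
# ... finetune ...
net.save_parameters('ssd_512_mobilenet1.0_finetuned.params')

# Inference side: rebuild the exact same architecture, then load the saved parameters.
net = model_zoo.get_model('ssd_512_mobilenet1.0_custom', classes=classes,
                          pretrained_base=False)
net.load_parameters('ssd_512_mobilenet1.0_finetuned.params')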

            QUESTION

            How to make inference on local PC with the model trained on AWS SageMaker by using the built-in algorithm Semantic Segmentation?
            Asked 2020-Mar-02 at 05:15

This is similar to the question "The trained model can be deployed on the other platform without dependency of sagemaker or aws service?".

I have trained a model on AWS SageMaker using the built-in Semantic Segmentation algorithm. The trained model, named model.tar.gz, is stored on S3. I want to download this file from S3 and use it to make inference on my local PC, without using AWS SageMaker at all. Since the built-in Semantic Segmentation algorithm is built with the MXNet Gluon framework and the GluonCV toolkit, I refer to the MXNet and gluon-cv documentation to make inference on my local PC.

Downloading the file from S3 is easy, and unzipping it yields three files:

            1. hyperparams.json: includes the parameters for network architecture, data inputs, and training. Refer to Semantic Segmentation Hyperparameters.
            2. model_algo-1
            3. model_best.params

Both model_algo-1 and model_best.params are the trained models, and I think they are the output of net.save_parameters (refer to "Train the neural network"). I can also load them with the function mxnet.ndarray.load.

Referring to "Predict with a pre-trained model", I found two necessary steps:

            1. Reconstruct the network for making inference.
            2. Load the trained parameters.

As for reconstructing the network for inference: since I used PSPNet for training, I can use the class gluoncv.model_zoo.PSPNet to reconstruct the network. I know how to use some AWS SageMaker services, for example batch transform jobs, to make inference, and I want to reproduce it on my local PC. If I use the class gluoncv.model_zoo.PSPNet to reconstruct the network, I can't be sure whether the parameters of this network are the same as those used on AWS SageMaker during inference, because I can't inspect the image 501404015308.dkr.ecr.ap-northeast-1.amazonaws.com/semantic-segmentation:latest in detail.

As for loading the trained parameters, I can use load_parameters. But I don't know which of model_algo-1 and model_best.params I should use.

            ...

            ANSWER

            Answered 2020-Mar-02 at 05:15

            The following code works well for me.

            Source https://stackoverflow.com/questions/60405600
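The answer's code is not reproduced above; for context, a minimal sketch of the general approach described in the question (the backbone, class count, and weights file name are assumptions taken from the discussion, not the answerer's verified code):

# Minimal sketch: rebuild a PSPNet locally and load the extracted SageMaker weights.
# nclass, backbone, and the .params file name are illustrative assumptions.
import mxnet as mx
from gluoncv.model_zoo import PSPNet

net = PSPNet(nclass=2, backbone='resnet50', pretrained_base=False, ctx=mx.cpu())
net.load_parameters('model_best.params', ctx=mx.cpu(),
                    allow_missing=True, ignore_extra=True)

# x: a preprocessed 1x3xHxW float batch, normalized the same way as during training.
# output = net.predict(x)          # argmax over the channel axis gives the class mask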

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install gluon-cv

GluonCV is built on top of MXNet and PyTorch. Depending on the individual model implementation (check the model zoo for the complete list), you will need to install one of the two deep learning frameworks; you can of course install both for the best coverage. Please also check the installation guide to choose the right installation command for your environment.
GluonCV supports Python 3.6 or later. The easiest way to install is via pip.
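For example, after 'pip install --upgrade mxnet gluoncv' (CPU build assumed; see the installation guide for CUDA-specific mxnet packages and for the PyTorch-based models), a quick sanity check looks like this:

# Quick sanity check after installing gluoncv with the MXNet backend.
import mxnet as mx
import gluoncv

print(gluoncv.__version__)     # e.g. 0.10.0
print(mx.__version__)
print(mx.context.num_gpus())   # 0 on a CPU-only install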

            Support

• Image Classification: recognize an object in an image. 50+ models, including ResNet, MobileNet, DenseNet, VGG, and more.
• Object Detection: detect multiple objects with their bounding boxes in an image. Faster R-CNN, SSD, YOLO-v3.
• Semantic Segmentation: associate each pixel of an image with a categorical label. FCN, PSP, ICNet, DeepLab-v3, DeepLab-v3+, DANet, FastSCNN.
• Instance Segmentation: detect objects and associate each pixel inside the object area with an instance label.
• Pose Estimation: detect human pose from images.
• Video Action Recognition: recognize human actions in a video. MXNet: TSN, C3D, I3D, I3D_slow, P3D, R3D, R2+1D, Non-local, SlowFast. PyTorch: TSN, I3D, I3D_slow, R2+1D, Non-local, CSN, SlowFast, TPN.
• Depth Prediction: predict a depth map from images.
• GAN: generate visually deceptive images.
• Person Re-ID: re-identify pedestrians across scenes.
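As one example beyond detection, a minimal sketch of running one of the semantic segmentation models listed above (the model name and image path are illustrative placeholders):

# Minimal sketch: semantic segmentation with a pretrained FCN from the model zoo.
# Model name and image path are illustrative placeholders.
import mxnet as mx
from gluoncv import model_zoo
from gluoncv.data.transforms.presets.segmentation import test_transform

model = model_zoo.get_model('fcn_resnet101_voc', pretrained=True)

img = mx.image.imread('example.jpg')            # HWC uint8 RGB image
x = test_transform(img, ctx=mx.cpu())           # normalize and add a batch dimension

output = model.predict(x)
mask = mx.nd.squeeze(mx.nd.argmax(output, 1)).asnumpy()   # per-pixel class indices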