yolact | fully convolutional model for real-time instance segmentation | Computer Vision library

 by dbolya | Python | Version: Current | License: MIT

kandi X-RAY | yolact Summary

yolact is a Python library typically used in Artificial Intelligence, Computer Vision, Deep Learning, and PyTorch applications. yolact has no bugs and no vulnerabilities, it has a permissive license, and it has medium support. However, no build file is available. You can download it from GitHub.

A simple, fully convolutional model for real-time instance segmentation. This is the code for the YOLACT papers. YOLACT++'s ResNet-50 model runs at 33.5 fps on a Titan Xp and achieves 34.1 mAP on COCO test-dev (see the journal paper). To use YOLACT++, make sure you compile the DCNv2 code (see Installation).

            Support

              yolact has a medium-active ecosystem.
              It has 4681 stars and 1277 forks. There are 106 watchers for this library.
              It had no major release in the last 6 months.
              There are 377 open issues and 385 closed issues. On average, issues are closed in 60 days. There are 16 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of yolact is current.

            Quality

              yolact has 0 bugs and 0 code smells.

            Security

              yolact has no reported vulnerabilities, and neither do its dependent libraries.
              yolact code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            License

              yolact is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            Reuse

              yolact releases are not available. You will need to build from source code and install.
              yolact has no build file; you will need to create the build yourself to build the component from source.
              Installation instructions, examples and code snippets are available.
              yolact saves you 2727 person hours of effort in developing the same functionality from scratch.
              It has 5908 lines of code, 329 functions and 52 files.
              It has high code complexity, which directly impacts the maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed yolact and discovered the following top functions. This is intended to give you an instant insight into the functionality yolact implements, and to help you decide if it suits your requirements.
            • Train the model
            • Add elem
            • Replaces existing config with new values
            • Parse command line arguments
            • Log data to disk
            • Compute the intersection of two boxes
            • Forward computation
            • Compute the difference between two parameters
            • Encodes the matched data
            • Perform the jaccard search
            • Apply the transform
            • Render the image
            • Compute validation loss
            • Optimizes bounding boxes
            • Prints the stats
            • Computes the intersection of two boxes
            • Logs session header
            • Plot a single entry
            • Encodes a set of matches
            • Convert a batch of input images into targets
            • Get all supported extensions
            • Run the forward computation
            • Parse arguments
            • Draw a bar chart
            • Forward the convolution
            • Add log entries to the database
            • Create a layer from configuration

            yolact Key Features

            No Key Features are available at this moment for yolact.

            yolact Examples and Code Snippets

            Citation
            Python · Lines of Code: 20 · License: Permissive (Apache-2.0)
            @article{gluoncvnlp2019,
              title={GluonCV and GluonNLP: Deep Learning in Computer Vision and Natural Language Processing},
              author={Guo, Jian and He, He and He, Tong and Lausen, Leonard and Li, Mu and Lin, Haibin and Shi, Xingjian and Wang, Chenguan  
            Installation: Compile tensorflow addon for DCNv2 support (YOLACT++)
            Jupyter Notebook · Lines of Code: 18 · License: No License
            # Only CUDA 10.1 Update 1 
            cd addons
            export TF_NEED_CUDA="1"
            
            # Set these if the below defaults are different on your system
            export TF_CUDA_VERSION="10.1"
            export TF_CUDNN_VERSION="7"
            export CUDA_TOOLKIT_PATH="/usr/local/cuda"
            export CUDNN_INSTALL_PAT  
            Train: Create TFRecord for training, (3) Usage
            Jupyter Notebook · Lines of Code: 17 · License: No License
            python train.py -tfrecord_train_dir 'path of TFRecord training files'
                            -tfrecord_val_dir 'path of TFRecord validation files'
                            -pretrained_checkpoints 'path to pre-trained checkpoints (if any)'
                            -label_map   

            Community Discussions

            QUESTION

            Training Yolact on Google Colab+ without timing out
            Asked 2022-Feb-14 at 01:38

            I want to train Yolact on a custom dataset using Google Colab+. Is it possible to train on Colab+, or does it time out too easily? Thank you!

            ...

            ANSWER

            Answered 2022-Feb-14 at 01:38

            Yes, you can train your model on Colab+. The problem is that Colab has a relatively short lifecycle compared with other cloud platforms such as AWS SageMaker or Google Cloud. I run the code below to extend that time a bit.

            Source https://stackoverflow.com/questions/70957016

            QUESTION

            Deploy pytorch .pth model in a python script
            Asked 2021-Nov-24 at 06:22

            After successfully training my yolact model using a custom dataset I'm happy with the inference results outputted by eval.py using this command from anaconda terminal:

            ...

            ANSWER

            Answered 2021-Nov-20 at 08:35

            I will just write the pseudocode here for you.

            Step 1: Try loading the model using the lines starting from here and ending here

            Step 2: Use this function for evaluation. Instead of cv2.imread, you just need to send your frame

            Step 3: Follow this function to get the bounding boxes. Especially this line. Just trackback the 't' variable and you will get your bounding boxes.

            Hope it helps. Let me know if you need more clarification.

            Source https://stackoverflow.com/questions/70038604

            QUESTION

            TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first (Segmentation using yolact edge)
            Asked 2021-Jul-07 at 13:55

            I am running segmentation on yolact edge. I am trying to find the minimum and maximum x and y pixel coordinates of the mask using my own algorithm. I am trying to convert the values of a tuple to numpy. However, I am getting the following error

            ...

            ANSWER

            Answered 2021-Jul-07 at 12:27
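
            A minimal sketch of the usual fix for this error, assuming PyTorch: move the tensor to host memory with .cpu() before calling .numpy(). `mask` below is only a placeholder for the yolact edge output tensor.

```python
import torch

# Placeholder for a (possibly CUDA) mask tensor produced by yolact edge.
mask = torch.ones(5, 5)

# .detach().cpu() is safe for both CPU and CUDA tensors, so this works
# unchanged whether or not the model ran on a GPU.
mask_np = mask.detach().cpu().numpy()

# Minimum and maximum x/y pixel coordinates of the mask.
ys, xs = mask_np.nonzero()
x_min, x_max = xs.min(), xs.max()
y_min, y_max = ys.min(), ys.max()
```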

            QUESTION

            How to build YOLACT++ using Docker?
            Asked 2021-May-20 at 14:04

            I have to build yolact++ in a Docker environment (I'm using a SageMaker notebook), like this

            ...

            ANSWER

            Answered 2021-May-18 at 09:46

            You should try setting the CUDA_HOME variable in your Dockerfile, like this:
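
            A hypothetical fragment of such a Dockerfile; the paths assume the standard layout of the nvidia/cuda base images and may differ on your system.

```dockerfile
# Hypothetical fragment: point the DCNv2 build at the CUDA toolkit.
ENV CUDA_HOME=/usr/local/cuda
ENV PATH=${CUDA_HOME}/bin:${PATH}
ENV LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${LD_LIBRARY_PATH}
```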

            Source https://stackoverflow.com/questions/67570694

            QUESTION

            How to save information about the result of instance segmentation by YOLACT?
            Asked 2020-Nov-11 at 05:18

            Is there any way to save the detected categories, their number, MASK area, etc. to a TXT file or CSV file when performing instance segmentation using YOLACT?

            I'm using YOLACT (https://github.com/dbolya/yolact) to tackle instance segmentation. I was able to use eval.py to run instance segmentation on my own data and save the resulting image or video. However, what I really need are the class names and counts detected and classified by YOLACT, and the area of the MASK. If we can output this information to a txt or csv file, we can put YOLACT to even more advanced use.

            If I can achieve that by adding an option in eval.py or modifying the code, please teach me.

            Thank you.

            ...

            ANSWER

            Answered 2020-Nov-11 at 05:18

            You already have that information from eval.py.

            This line in eval.py gives you the information.
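
            A minimal sketch of one way to write it out, not the repo's actual code: given the per-detection class names and binary masks that eval.py produces (dummy arrays below), write each class and its mask area to a CSV file and count detections per class. All variable names here are placeholders.

```python
import csv
from collections import Counter

import numpy as np

# Dummy stand-ins for the classes and masks extracted in eval.py.
class_names = ['person', 'car', 'person']
masks = [np.ones((4, 4)), np.ones((2, 2)), np.ones((3, 3))]

# One row per detection: class name and mask area in pixels.
with open('detections.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['class', 'mask_area_px'])
    for name, mask in zip(class_names, masks):
        writer.writerow([name, int(mask.sum())])

# Number of detections per class.
counts = Counter(class_names)
```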

            Source https://stackoverflow.com/questions/64729199

            QUESTION

            Unable to Load Images to train model on Custom Datasets
            Asked 2020-Nov-04 at 06:58

            I have just stuck with Image Instance Segmentation for a while. I am trying to train the Yolact model for my custom data. Here is some brief information about what I have done so far

            1. I have annotated the image using labelme annotation tool
            2. I have converted annotation file for each (train & validation data) using labelme2coco -> train.json & test.json
            3. I made changes in the config.py file as needed and expected by yolact
            4. As I was following this repository I encountered an error Argument 'bb' has incorrect type to which I have solved with the approach stated in this closed issue

            After completing the above task I am stuck here with below-stated issue.

            ...

            ANSWER

            Answered 2020-Nov-04 at 06:58

            You misspelled your folder names :) YolaDataset needs to be renamed to YolactDataset

            Source https://stackoverflow.com/questions/64605776

            QUESTION

            When I run train.py with YOLACT, I get the error KeyError: 0
            Asked 2020-Oct-23 at 15:44

            I'm new to machine learning and programming. Now I'm trying to develop a YOLACT AI using my own data. However, when I run train.py, I get the following error and cannot train. What can I do to overcome this error?

            ...

            ANSWER

            Answered 2020-Oct-19 at 09:47

            Your class ids in annotations.json should start from 1, not 0. If they start from 0, add a label map to "my_custom_dataset" in config.py:
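
            A sketch of what that addition could look like; the dataset name and class names are placeholders for your own config entry.

```python
# Hypothetical custom dataset entry for config.py; class_names are placeholders.
# label_map maps 0-based annotation ids to the 1-based ids YOLACT expects.
my_custom_dataset = dict(
    name='my_custom_dataset',
    class_names=('cat', 'dog'),
    label_map={0: 1, 1: 2},
)
```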

            Source https://stackoverflow.com/questions/64420059

            QUESTION

            Get the polygon coordinates of predicted output mask in YOLACT/YOLACT++
            Asked 2020-Oct-20 at 09:04

            I am using Yolact https://github.com/dbolya/yolact ,an instance segmentation algorithm which outputs the test image with a mask on the detected object. As the input images are given with the coordinates of polygons around the input classes in the annotations.json, I want to get an output like this. But I can't figure out how to extract the coordinates of those contours/polygons.

            As far as I understood from this script https://github.com/dbolya/yolact/blob/master/eval.py, the output is a list of tensors for detected objects. It contains classes, scores, boxes and masks for the evaluated image. The eval.py script returns the recognized image with all this information. The recognition is saved in 'preds' in the evalimg function (line 595), and post-processing of the prediction result is in "def prep_display" (line 135)

            Now how do I extract those polygon coordinates and save it in .JSON file or whatever else?

            I also tried to look at these but couldn't figure out sadly! https://github.com/dbolya/yolact/issues/286 and https://github.com/dbolya/yolact/issues/256

            ...

            ANSWER

            Answered 2020-Oct-20 at 09:04

            You need to create a complete post-processing pipeline that is specific to your task. Here's a small pseudocode sketch that could be added to prep_display() in eval.py
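
            As an illustration of the idea (not YOLACT's actual post-processing): boundary pixel coordinates can be pulled out of a binary mask with plain NumPy. cv2.findContours would give properly ordered polygons; this simpler sketch just lists the boundary points.

```python
import numpy as np

def mask_boundary_coords(mask):
    """Return an (N, 2) array of (y, x) boundary pixels of a binary mask.

    A pixel is interior when it and all four neighbours are set; the
    boundary is everything in the mask that is not interior.
    """
    m = np.pad(mask.astype(bool), 1)
    interior = (m[1:-1, 1:-1] & m[:-2, 1:-1] & m[2:, 1:-1]
                & m[1:-1, :-2] & m[1:-1, 2:])
    return np.argwhere(mask.astype(bool) & ~interior)

# A 4x4 square mask has 12 boundary pixels (16 total minus a 2x2 interior).
mask = np.zeros((6, 6), dtype=np.uint8)
mask[1:5, 1:5] = 1
coords = mask_boundary_coords(mask)
```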

            Source https://stackoverflow.com/questions/64440857

            QUESTION

            Predict only one class (person) in YOLACT/YOLACT++
            Asked 2020-Sep-29 at 08:47

            I want to predict only one class i.e. person from all the 84 classes that are being checked for and predicted.

            For YOLACT reference https://github.com/dbolya/yolact

            The results are pretty fine, but I guess I just need to modify one of the code files, in a very small way, but I can't manage to find out where.

            There is one issue related to this in which I did what was mentioned: adding the 4 lines in yolact/layers/output_utils.py and changing nothing else. Those lines are as follows:

            ...

            ANSWER

            Answered 2020-Sep-29 at 04:51

            In order to show a single class (person, id:0) output at the time of inference, you simply need to add
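
            The lines themselves are not shown above, but the general idea can be sketched with placeholder arrays standing in for the tensors returned by YOLACT's post-processing: keep only the detections whose class id is 0 (person).

```python
import numpy as np

# Placeholder detections standing in for YOLACT's postprocess() outputs.
classes = np.array([0, 2, 0, 5])
scores = np.array([0.9, 0.8, 0.7, 0.6])
boxes = np.array([[0, 0, 10, 10], [5, 5, 20, 20], [1, 1, 8, 8], [2, 2, 9, 9]])

# Keep only the "person" class (id 0); boolean indexing filters all
# per-detection arrays in lockstep.
keep = classes == 0
classes, scores, boxes = classes[keep], scores[keep], boxes[keep]
```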

            Source https://stackoverflow.com/questions/64104148

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install yolact

            Clone this repository and enter it:
              git clone https://github.com/dbolya/yolact.git
              cd yolact
            Set up the environment using one of the following methods:
              Using Anaconda: run conda env create -f environment.yml
              Manually with pip: set up a Python 3 environment (e.g., using virtualenv), install PyTorch 1.0.1 (or higher) and TorchVision, then install some other packages:
                # Cython needs to be installed before pycocotools
                pip install cython
                pip install opencv-python pillow pycocotools matplotlib
            If you'd like to train YOLACT, download the COCO dataset and the 2014/2017 annotations. Note that this script will take a while and dump 21 GB of files into ./data/coco:
              sh data/scripts/COCO.sh
            If you'd like to evaluate YOLACT on test-dev, download test-dev with this script:
              sh data/scripts/COCO_test.sh
            If you want to use YOLACT++, compile the deformable convolutional layers (from DCNv2). Make sure you have the latest CUDA toolkit installed from NVIDIA's website:
              cd external/DCNv2
              python setup.py build develop

            Support

            YOLACT now supports multiple GPUs seamlessly during training.

            CLONE
          • HTTPS

            https://github.com/dbolya/yolact.git

          • CLI

            gh repo clone dbolya/yolact

          • sshUrl

            git@github.com:dbolya/yolact.git
