Mask_RCNN | Mask R-CNN for object detection | Computer Vision library

by matterport | Python | Version: v2.1 | License: Non-SPDX

kandi X-RAY | Mask_RCNN Summary

Mask_RCNN is a Python library typically used in Artificial Intelligence, Computer Vision, Deep Learning, TensorFlow, and Keras applications. Mask_RCNN has no bugs, no reported vulnerabilities, a build file available, and medium support. However, Mask_RCNN has a Non-SPDX license. You can install it with 'pip install Mask_RCNN' or download it from GitHub or PyPI.

Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow

kandi Support

              Mask_RCNN has a medium active ecosystem.
It has 23,124 stars and 11,491 forks. There are 592 watchers for this library.
              It had no major release in the last 12 months.
There are 1,832 open issues and 840 closed issues. On average, issues are closed in 274 days. There are 115 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
The latest version of Mask_RCNN is v2.1.

kandi Quality

              Mask_RCNN has 0 bugs and 0 code smells.

kandi Security

              Mask_RCNN has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              Mask_RCNN code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

kandi License

Mask_RCNN has a Non-SPDX license.
Non-SPDX licenses may be open-source licenses that are simply not SPDX-compliant, or they may be non-open-source licenses; review them closely before use.

kandi Reuse

              Mask_RCNN releases are available to install and integrate.
A deployable package is available on PyPI.
A build file is available, so you can build the component from source.
              Installation instructions, examples and code snippets are available.
              Mask_RCNN saves you 1171 person hours of effort in developing the same functionality from scratch.
              It has 2642 lines of code, 139 functions and 8 files.
It has high code complexity, which directly impacts the maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed Mask_RCNN and discovered the below as its top functions. This is intended to give you an instant insight into Mask_RCNN implemented functionality, and help decide if they suit your requirements.
            • Connects the model
            • Compute the shape of the image
            • Generate anchors for given image shape
            • Builds a FPN mask graph
            • Train the model
            • Prints a log message
• Set model layers as trainable
            • Compiles the Keras model
            • Load weights from a file
            • Train a model
            • Runs detection on a subset of images
• Displays the differences between ground truth and predictions
            • Run a keras computation graph
• Compute non-maximum suppression at a given threshold
            • Call detection on input image
            • Display a table of weights
            • Evaluate COCO images
            • Resize an image
            • Draw boxes
            • Draw random ROIs
• Run detection on molded images
• Calls the RPN
            • Call the image pyramid
            • Load COCO
            • Load mask from image
            • Runs the model on the given image or video

            Mask_RCNN Key Features

            No Key Features are available at this moment for Mask_RCNN.

            Mask_RCNN Examples and Code Snippets

            image-segmentation
Python | Lines of Code: 26 | License: Permissive (MIT)
              [Available segmentation models]
              Instance:
                'maskrcnn'
              Semantic:
                'fpn', 'linknet', 'pspnet', 'unet'
              
              [Available backbone architectures]
              MobileNet:
                'mobilenetv2', 'mobilenet' 
              DenseNet:
                'densenet121', 'densenet169', 'densen  
            > download the front-end keras  mask_rcnn model and install it  https://github.com/matterport/Mask_RCNN  
            > download this https://github.com/parai/Mask_RCNN for converting keras model to tensorflow model 
            > 1.modify matterport's Mask_RCNN/sa  
            How to train your own MaskRCNN model on COCO dataset
Python | Lines of Code: 10 | License: Permissive (MIT)
              cd /path/to/image-segmentation/datasets
              ./download_coco.sh
            
              python coco_viewer.py -d=datasets/coco
            
              cd /path/to/image-segmentation
              mkdir -p plans/maskrcnn
              cp examples/configs/maskrcnn/*.cfg plans/maskrcnn
            
              python train.py -s plans/maskr  
            Mask_RCNN - coco
Python | Lines of Code: 297 | License: Non-SPDX
            """
            Mask R-CNN
            Configurations and data loading code for MS COCO.
            
            Copyright (c) 2017 Matterport, Inc.
            Licensed under the MIT License (see LICENSE for details)
            Written by Waleed Abdulla
            
            ------------------------------------------------------------
            
            Us  
            Mask_RCNN - nucleus
Python | Lines of Code: 263 | License: Non-SPDX
            """
            Mask R-CNN
            Train on the nuclei segmentation dataset from the
            Kaggle 2018 Data Science Bowl
            https://www.kaggle.com/c/data-science-bowl-2018/
            
            Licensed under the MIT License (see LICENSE for details)
            Written by Waleed Abdulla
            
            ---------------------  
            Mask_RCNN - balloon
Python | Lines of Code: 181 | License: Non-SPDX
            """
            Mask R-CNN
            Train on the toy Balloon dataset and implement color splash effect.
            
            Copyright (c) 2018 Matterport, Inc.
            Licensed under the MIT License (see LICENSE for details)
            Written by Waleed Abdulla
            
            ----------------------------------------------  

            Community Discussions

            QUESTION

            Mask to bounding box for narrow features (cracks)
            Asked 2022-Feb-24 at 11:18

I am trying to adapt the code detailed here https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html for crack detection using the khanhha crack segmentation dataset (https://github.com/khanhha/crack_segmentation).

            My problem is that when converting masks to bounding boxes, some are empty - and thus invalid.

            ValueError: All bounding boxes should have positive height and width. Found invalid box

Should I add a minimum-area check before appending to boxes, i.e., before boxes.append([xmin, ymin, xmax, ymax])? Or does anyone have another idea for how to make progress?

            ...

            ANSWER

            Answered 2022-Feb-24 at 11:18

I have done exactly the same for a project, and I believe the issue comes from detections that have a width or height of only 1 pixel. In that case, xmin = xmax or ymin = ymax, which creates the issue.

            I just added 1 to my xmax and ymax to ensure boxes are never empty.
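A minimal sketch of that fix (assuming binary masks stacked in a NumPy array named masks; the helper name is made up, not the answerer's code):

    import numpy as np

    def mask_to_box(mask):
        """Convert a binary mask of shape (H, W) to [xmin, ymin, xmax, ymax]."""
        ys, xs = np.where(mask)
        if xs.size == 0:
            # Empty mask: there is no valid box, so the caller should skip it.
            return None
        # Adding 1 to the max coordinates keeps width/height positive even for
        # cracks that are only one pixel wide or tall.
        return [int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1]

    # `masks` is assumed to be an (N, H, W) array of instance masks.
    boxes = [b for b in (mask_to_box(m) for m in masks) if b is not None]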

            Source https://stackoverflow.com/questions/71249434

            QUESTION

            NameError: name 'KE' is not defined
            Asked 2022-Feb-03 at 09:40

I am following this tutorial: https://blog.paperspace.com/mask-r-cnn-in-tensorflow-2-0/ in order to train a custom dataset for object detection. When I run the training code (under the paragraph "Train Mask R-CNN in TensorFlow 1.0"), I get this error on Colab:

            ...

            ANSWER

            Answered 2022-Feb-03 at 09:40

OK, I tried this GitHub repository instead of the original Mask_RCNN: https://github.com/akTwelve/Mask_RCNN, with the latest TensorFlow (2.7.0) + Keras (2.7.0) installed on Colab. It seems to overcome the problem I described above, although I do not know why.

            Source https://stackoverflow.com/questions/70962296

            QUESTION

            fit_generator() returns NoneType instead of History object in Mask R CNN
            Asked 2022-Jan-27 at 01:21

I would like to save the loss data while training my Mask R-CNN, but I seem to be missing something. The training is working, but I'm getting the error:

            AttributeError: 'NoneType' object has no attribute 'history'

            ...

            ANSWER

            Answered 2022-Jan-26 at 23:58

I believe model.fit_generator is deprecated; in TensorFlow 2.2 and higher you can just use model.fit, because it now supports generators.

            https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit_generator
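A minimal sketch of that change (assuming an existing compiled Keras model named model and a data generator named train_gen, which are not shown in the question):

    history = model.fit(train_gen, epochs=10, steps_per_epoch=100)  # returns a History object, not None
    print(history.history["loss"])  # per-epoch training loss, ready to save or plot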

            Source https://stackoverflow.com/questions/70871252

            QUESTION

            a['regions'] KeyError in balloon.py
            Asked 2021-Dec-06 at 15:27

In the balloon.py file from the Mask_RCNN samples, I get a KeyError of 'regions' whenever I run balloon.py on my custom dataset. I figured something was wrong with the JSON file in the train folder, so I first used the latest VIA 3 and then VIA 2.0.0. Both JSONs produce the same KeyError.

I compared the balloon dataset's training VIA JSON to my training VIA JSON, and they have the same structure now, so I'm thinking it isn't a JSON issue anymore. Why would Python not be able to read a string as a key?

            Here's balloon.py: https://github.com/matterport/Mask_RCNN/blob/master/samples/balloon/balloon.py

            ...

            ANSWER

            Answered 2021-Dec-05 at 21:27

You haven't given us your JSON, so it's impossible to tell for sure, but scanning over the file in the link I don't think this is your fault; line 117 of balloon.py is
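For reference, a minimal sketch (not the answerer's excerpt) that checks whether every entry in a VIA export carries the 'regions' key that balloon.py reads; the via_region_data.json file name follows the balloon sample's convention:

    import json

    with open("dataset/train/via_region_data.json") as f:   # adjust to your export path
        annotations = json.load(f)

    # balloon.py iterates over the values of this dict and accesses a['regions'],
    # so any entry exported in a different schema will trigger the KeyError above.
    for key, a in annotations.items():
        if "regions" not in a:
            print("entry without 'regions':", key)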

            Source https://stackoverflow.com/questions/70238415

            QUESTION

What is total_loss, loss_cls, etc.?
            Asked 2021-Dec-02 at 12:49

I want to train on a custom dataset using faster_rcnn or mask_rcnn with PyTorch and Detectron2. Everything works well, but I want to know what the results I get mean.

            ...

            ANSWER

            Answered 2021-Dec-02 at 12:49

            Those are metrics printed out at every iteration of the training loop. The most important ones are the loss values, but below are basic descriptions of them all (eta and iter are self-explanatory I think).

            total_loss: This is a weighted sum of the following individual losses calculated during the iteration. By default, the weights are all one.

            1. loss_cls: Classification loss in the ROI head. Measures the loss for box classification, i.e., how good the model is at labelling a predicted box with the correct class.

            2. loss_box_reg: Localisation loss in the ROI head. Measures the loss for box localisation (predicted location vs true location).

            3. loss_rpn_cls: Classification loss in the Region Proposal Network. Measures the "objectness" loss, i.e., how good the RPN is at labelling the anchor boxes as foreground or background.

            4. loss_rpn_loc: Localisation loss in the Region Proposal Network. Measures the loss for localisation of the predicted regions in the RPN.

            5. loss_mask: Mask loss in the Mask head. Measures how "correct" the predicted binary masks are.

              For more details on the losses (1) and (2), take a look at the Fast R-CNN paper and the code.

              For more details on the losses (3) and (4), take a look at the Faster R-CNN paper and the code.

              For more details on the loss (5), take a look at the Mask R-CNN paper and the code.

            time: Time taken by the iteration.

            data_time: Time taken by the dataloader in that iteration.

            lr: The learning rate in that iteration.

            max_mem: Maximum GPU memory occupied by tensors in bytes.
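To make the weighted sum concrete, here is a tiny illustration with made-up loss values (by default, as noted above, every weight is 1.0):

    losses = {
        "loss_cls": 0.42,      # placeholder values, not real training output
        "loss_box_reg": 0.31,
        "loss_rpn_cls": 0.05,
        "loss_rpn_loc": 0.08,
        "loss_mask": 0.27,
    }
    weights = {name: 1.0 for name in losses}   # default: all ones
    total_loss = sum(weights[name] * value for name, value in losses.items())
    print(total_loss)                          # ≈ 1.13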

            Source https://stackoverflow.com/questions/70169219

            QUESTION

            Data annotation for mask rcnn
            Asked 2021-Nov-19 at 04:45

Is it mandatory to annotate images using polygon shapes for Mask R-CNN? I read https://github.com/matterport/Mask_RCNN and the research paper as well. It seems that matterport's implementation can take bounding boxes as well as polygons as annotations, although I am not certain. So should I use bounding-box annotation for my dataset, or polygon annotation?

            Currently I have annotated some images using bounding box on Intel's CVAT.

            ...

            ANSWER

            Answered 2021-Nov-19 at 04:45

If you have a look at the COCO dataset, you can see it has two types of annotation format: bounding box and mask (polygon). Accordingly, Mask R-CNN predicts three outputs: a label prediction, a bounding-box prediction, and a mask prediction. So if you want instance segmentation, you need polygon annotations for your dataset, but if you want only object detection, bounding-box annotations are enough.
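For illustration, a single COCO-style instance annotation with made-up values, showing both annotation types the answer mentions: 'bbox' alone is enough for detection, while the 'segmentation' polygon is what the mask head needs.

    annotation = {
        "id": 1,
        "image_id": 42,
        "category_id": 1,                       # e.g. "person"
        "bbox": [100.0, 150.0, 80.0, 120.0],    # [x, y, width, height]
        "segmentation": [[100.0, 150.0, 180.0, 150.0,   # polygon: x1, y1, x2, y2, ...
                          180.0, 270.0, 100.0, 270.0]],
        "area": 9600.0,                         # 80 * 120 for this rectangle
        "iscrowd": 0,
    }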

            Source https://stackoverflow.com/questions/70029771

            QUESTION

            Running a mask-rcnn model on Flask server startup
            Asked 2021-Nov-05 at 14:26

This is my current Flask code, which works fine: it receives a POST request with an image from the client, runs it through the model (based on this GH repo: https://github.com/matterport/Mask_RCNN), and sends a masked image back to the client.

However, it loads the model from the configuration file and loads the weights on every request, which takes ages. I want to load the model and the weights on server startup and pass them to the index function. I have tried the solutions from other questions, but with no luck. I wonder if it's because I am loading a model and then weights, rather than just loading a single h5 model file?

How do I load a file on initialization in a flask application
Run code after flask application has started

            Flask app:

            ...

            ANSWER

            Answered 2021-Nov-05 at 14:26

            I solved this using the before_first_request decorator. Below is the general structure:
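The answer's code is not reproduced above, but a minimal sketch of that general structure could look like the following; the InferenceConfig values, the weights path, and the decode_image helper are placeholders rather than part of the original answer:

    from flask import Flask, request, jsonify
    import mrcnn.model as modellib
    from mrcnn.config import Config

    app = Flask(__name__)
    model = None

    class InferenceConfig(Config):
        NAME = "inference"
        NUM_CLASSES = 1 + 80          # background + 80 COCO classes; adjust to your model
        GPU_COUNT = 1
        IMAGES_PER_GPU = 1

    @app.before_first_request
    def load_model():
        # Runs once, before the first request, so the model and weights are not rebuilt per call.
        global model
        model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(), model_dir="logs")
        model.load_weights("mask_rcnn_coco.h5", by_name=True)   # weights path is an assumption

    @app.route("/", methods=["POST"])
    def index():
        image = decode_image(request.files["image"])            # hypothetical helper
        r = model.detect([image], verbose=0)[0]
        return jsonify(scores=r["scores"].tolist())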

            Source https://stackoverflow.com/questions/69807102

            QUESTION

Why is using a Jinja template returning data as None on a webpage using Flask?
            Asked 2021-Sep-03 at 17:16

I am basically trying to print a table on a web page to display data from a function I call in Flask. I looked over Jinja templates and attempted to use them, but to no avail.

My code is attached below. The result from my cv_acp file is what I am trying to display in table form.

            Currently, my TSN_PIC returns result as follows:

            The input video frame is classified to be PlayingCello - 99.33 PlayingGuitar - 0.28 PlayingPiano - 0.16 BoxingSpeedBag - 0.10 StillRings - 0.06

But I want to be able to display this on a web page using Flask in a table format.

            My code is as follows:

            cv_acp

            ...

            ANSWER

            Answered 2021-Sep-03 at 16:51

            If you're working in a terminal or in a Jupyter notebook, plt.show() does what you want. For a web page, not so much.

You have a good start otherwise, based, it seems, on getting an uploaded image to display. So your challenge will be to either save the matplotlib image to disk before you generate the page, or to defer generating the image until it's requested by way of the <img> tag, and then somehow return the image bits from cv_acp_TSN_PIC_display_image instead of a path to the saved file.

            To do the former, plt.savefig('uploads/image.png') might be what you need, with the caveat that a fixed filename will break things badly as soon as you have multiple users hitting the app.

            To do the latter, see this question and its answer.
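A minimal sketch of the first option (save the figure, then let the page reference it); the route name, the static/ output path, and the hard-coded scores are illustrative, and the fixed-filename caveat above still applies:

    import matplotlib
    matplotlib.use("Agg")                       # headless backend for a server
    import matplotlib.pyplot as plt
    from flask import Flask, render_template_string

    app = Flask(__name__)

    @app.route("/result")
    def result():
        labels = ["PlayingCello", "PlayingGuitar", "PlayingPiano"]
        scores = [99.33, 0.28, 0.16]            # scores taken from the question text
        fig, ax = plt.subplots()
        ax.bar(labels, scores)
        fig.savefig("static/result.png")        # assumes a static/ directory exists
        plt.close(fig)
        rows = "".join(f"<tr><td>{l}</td><td>{s}</td></tr>" for l, s in zip(labels, scores))
        return render_template_string(
            "<table>{{ rows|safe }}</table><img src='/static/result.png'>", rows=rows)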

            Source https://stackoverflow.com/questions/69047610

            QUESTION

            Mask RCNN 1 class only
            Asked 2021-Jun-01 at 13:10

I am looking to use only one class, person (along with BG, background), for Mask R-CNN object detection. I am using this link: https://github.com/matterport/Mask_RCNN to run Mask R-CNN. Is there a specific way to accomplish this (editing specific files, creating an extra Python file, or just filtering selections from the class_names array)? Any direction or solution will be highly appreciated. Thank you.

            ...

            ANSWER

            Answered 2021-Jan-20 at 15:36

There is a balloon example made by the author of the GitHub repository you linked. It is very well written and contains only one class (balloons). You should follow this tutorial: https://engineering.matterport.com/splash-of-color-instance-segmentation-with-mask-r-cnn-and-tensorflow-7c761e238b46
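In the spirit of that balloon example, a minimal sketch of a one-class (person) setup, with illustrative values rather than a drop-in configuration:

    from mrcnn.config import Config

    class PersonConfig(Config):
        NAME = "person"
        NUM_CLASSES = 1 + 1        # background + person
        GPU_COUNT = 1
        IMAGES_PER_GPU = 2

    # In your utils.Dataset subclass, register only the single foreground class, e.g.
    #     self.add_class("person", 1, "person")
    # and have load_mask() return masks and class IDs only for person instances.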

            Source https://stackoverflow.com/questions/65810714

            QUESTION

            input_image_meta shape error while using pixellib custom trainig on images
            Asked 2021-May-23 at 15:20

I am using PixelLib for training custom image instance segmentation. I have created a dataset, which can be seen at the link below. Dataset: https://drive.google.com/drive/folders/1MjpDNZtzGRNxEtCDcTmrjUuB1ics_3Jk?usp=sharing. The code I used to make a custom model is

            ...

            ANSWER

            Answered 2021-May-23 at 15:20

Okay, this error is solved. I went to the PixelLib library and, according to it, we need validation data too in order to run the model. So I added validation data (just a few images), and the library is functioning perfectly.

            Sorry for the trouble.

            Source https://stackoverflow.com/questions/67356839

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install Mask_RCNN

demo.ipynb is the easiest way to start. It shows an example of using a model pre-trained on MS COCO to segment objects in your own images. It includes code to run object detection and instance segmentation on arbitrary images.
train_shapes.ipynb shows how to train Mask R-CNN on your own dataset. This notebook introduces a toy dataset (Shapes) to demonstrate training on a new dataset.
(model.py, utils.py, config.py): These files contain the main Mask R-CNN implementation.
inspect_data.ipynb visualizes the different pre-processing steps used to prepare the training data.
inspect_model.ipynb goes in depth into the steps performed to detect and segment objects. It provides visualizations of every step of the pipeline.
inspect_weights.ipynb inspects the weights of a trained model and looks for anomalies and odd patterns.
1. Clone this repository.
2. Install dependencies: pip3 install -r requirements.txt
3. Run setup from the repository root directory: python3 setup.py install
4. Download pre-trained COCO weights (mask_rcnn_coco.h5) from the releases page.
5. (Optional) To train or test on MS COCO, install pycocotools from one of these repos. They are forks of the original pycocotools with fixes for Python 3 and Windows (the official repo doesn't seem to be active anymore). Linux: https://github.com/waleedka/coco. Windows: https://github.com/philferriere/cocoapi (you must have the Visual C++ 2015 build tools on your path; see the repo for additional details). A minimal inference sketch follows.
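After these steps, a minimal inference sketch in the spirit of demo.ipynb might look like this; the image path is a placeholder and the config assumes the pre-trained COCO weights downloaded above:

    import skimage.io
    import mrcnn.model as modellib
    from mrcnn.config import Config

    class CocoInferenceConfig(Config):
        NAME = "coco_inference"
        NUM_CLASSES = 1 + 80       # background + 80 COCO classes
        GPU_COUNT = 1
        IMAGES_PER_GPU = 1

    model = modellib.MaskRCNN(mode="inference", config=CocoInferenceConfig(), model_dir="logs")
    model.load_weights("mask_rcnn_coco.h5", by_name=True)

    image = skimage.io.imread("images/example.jpg")   # placeholder path
    r = model.detect([image], verbose=1)[0]           # boxes in r["rois"], masks in r["masks"]
    print(r["class_ids"], r["scores"])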

            Support

Contributions to this repository are welcome. You can also join our team and help us build even more projects like this one.

CLONE
• HTTPS: https://github.com/matterport/Mask_RCNN.git
• CLI: gh repo clone matterport/Mask_RCNN
• SSH: git@github.com:matterport/Mask_RCNN.git
