fast-rcnn | Fast R-CNN is a fast framework | Machine Learning library

 by rbgirshick | Python | Version: Current | License: Non-SPDX

kandi X-RAY | fast-rcnn Summary

fast-rcnn is a Python library typically used in Artificial Intelligence, Machine Learning, Deep Learning, PyTorch, and TensorFlow applications. fast-rcnn has no bugs and no reported vulnerabilities, and it has medium support. However, no build file is available, and it carries a Non-SPDX license. You can download it from GitHub.

Fast R-CNN is a fast framework for object detection with deep ConvNets. It was initially described in an arXiv tech report and later published at ICCV 2015.

            Support

              fast-rcnn has a medium active ecosystem.
              It has 3199 star(s) with 1547 fork(s). There are 196 watchers for this library.
              It had no major release in the last 6 months.
              There are 105 open issues and 65 have been closed. On average, issues are closed in 119 days. There are 10 open pull requests and no closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of fast-rcnn is current.

            Quality

              fast-rcnn has 0 bugs and 0 code smells.

            Security

              fast-rcnn has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              fast-rcnn code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            License

              fast-rcnn has a Non-SPDX License.
              A Non-SPDX license may be an open-source license that is not SPDX-compliant, or a non-open-source license; review it closely before use.

            Reuse

              fast-rcnn releases are not available; you will need to build and install from source.
              fast-rcnn has no build file, so you will need to create the build yourself to build the component from source.
              Installation instructions, examples and code snippets are available.
              fast-rcnn saves you 748 person hours of effort in developing the same functionality from scratch.
              It has 1725 lines of code, 112 functions and 24 files.
              It has high code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed fast-rcnn and discovered the below as its top functions. This is intended to give you an instant insight into fast-rcnn implemented functionality, and help decide if they suit your requirements.
            • Add bounding boxes to the ROI table
            • Compute the ground-truth targets for the ground-truth image
            • List of roidb
            • Set the roidb handler
            • Load configuration from file
            • Recursively merge config into b
            • Parse command line arguments
            • Prepare roidb data
            • Create a config dictionary from a list
            • Append flipped images
            • Create a roidb using selective search
            • Get an imdb dataset
            • Adds a path to sys path
            • Set competition mode

            fast-rcnn Key Features

            No Key Features are available at this moment for fast-rcnn.

            fast-rcnn Examples and Code Snippets

            Popular issues
            Language: C++ | Lines of Code: 25 | License: Non-SPDX (NOASSERTION)
            "Unknown layer type: Python"
            
            WITH_PYTHON_LAYER := 1
            
            cd $ROOT_DIR/py-faster-rcnn/caffe-fast-rcnn/
            make clean
            make -j8 && make pycaffe
            
            fatal error: caffe/proto/caffe.pb.h: No such file or directory
            #include "caffe/proto/caffe.pb.h"
                       
            Installation (sufficient for the demo)
            Language: Python | Lines of Code: 23 | License: Non-SPDX (NOASSERTION)
            # Make sure to clone with --recursive
            git clone --recursive https://github.com/cguindel/lsi-faster-rcnn.git
            
            git submodule update --init --recursive
            
            cd $FRCN_ROOT/lib
            gedit setup.py
            
            # In setup.py, add the -Wno-unused-function compiler flag:
            extra_compile_args={'gcc': ["-Wno-unused-function"],
                           
            Mask-RCNN Sushi Dish Detection,Steps,5. Run the Training
            Language: Python | Lines of Code: 8 | License: Permissive (MIT)
            export SERVER_NAME=virginia-dl ## or SERVER_NAME=ubuntu@ip-address
            ## copy files to server (dish data, pre-trained h5 file)
            scp -r dish_server/* ${SERVER_NAME}:/home/ubuntu
            ssh ${SERVER_NAME}
            
            (server) > tmux new -s train
            (server - tmux) > sou  

            Community Discussions

            QUESTION

            what is the meaning of 'per-layer learning rate' in Fast R-CNN paper?
            Asked 2021-Oct-01 at 17:46

            I'm reading the Fast R-CNN paper.

            In section 2.3 of the paper, under 'SGD hyper-parameters', it says that "All layers use a per-layer learning rate of 1 for weights and 2 for biases and a global learning rate of 0.001".


            Is 'per-layer learning rate' the same as a 'layer-specific learning rate' that assigns a different learning rate to each layer? If so, I can't understand how the 'per-layer learning rate' and the 'global learning rate' can be applied at the same time.


            I found an example of a 'layer-specific learning rate' in PyTorch.

            ...

            ANSWER

            Answered 2021-Oct-01 at 17:46

            The per-layer terminology in that paper is slightly ambiguous. They aren't referring to the layer-specific learning rates.

            All layers use a per-layer learning rate of 1 for weights and 2 for biases and a global learning rate of 0.001.

            The statement in question is w.r.t. the Caffe framework, in which Fast R-CNN was originally written (github link).

            They meant that they're setting the learning rate multiplier of weights and biases to be 1 and 2 respectively.

            Check any prototxt file in the repo, e.g. CaffeNet/train.prototxt.
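            Concretely, the effective step size for each parameter is the global learning rate multiplied by that parameter's lr_mult. A minimal plain-Python sketch of this update rule (the variable and function names here are illustrative, not Caffe API):

```python
# Caffe-style per-parameter learning-rate multipliers (illustrative sketch).
# Effective step size = global learning rate * per-parameter lr_mult.

global_lr = 0.001   # the global learning rate (base_lr in the solver)
lr_mult_w = 1       # multiplier for weights (lr_mult: 1 in the prototxt)
lr_mult_b = 2       # multiplier for biases  (lr_mult: 2 in the prototxt)

def sgd_step(param, grad, lr_mult):
    """One plain SGD update scaled by a per-parameter multiplier."""
    return param - global_lr * lr_mult * grad

w, b = 0.5, 0.1
grad_w, grad_b = 2.0, 2.0

w = sgd_step(w, grad_w, lr_mult_w)  # step of 0.001 * 1 * 2.0 = 0.002
b = sgd_step(b, grad_b, lr_mult_b)  # step of 0.001 * 2 * 2.0 = 0.004

print(round(w, 6), round(b, 6))  # 0.498 0.096
```

            So both settings apply at once: the global rate sets the overall scale, and each parameter's multiplier scales it further; biases simply move twice as fast as weights.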

            Source https://stackoverflow.com/questions/69406476

            QUESTION

            how to resize ground truth boxes in fast-rcnn
            Asked 2021-May-31 at 12:20

            Fast R-CNN is an algorithm for object detection in images: we feed an image to a neural network, and it outputs a list of objects and their categories within the image, based on a list of bounding boxes called "ground-truth boxes". The algorithm compares the ground-truth boxes with the boxes it generates and only keeps those that sufficiently overlap with the gt boxes. The problem is that we must resize the image before feeding it into the CNN. My question is: should we also resize the ground-truth boxes before the comparison step, and how? Thanks for any reply.

            ...

            ANSWER

            Answered 2021-May-31 at 12:20

            If the bounding boxes are relative, you don't need to change them because 0.2 of the old height is the same as 0.2 of the new height and so on.
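            If the boxes are stored in absolute pixel coordinates, however, they must be scaled by the same factors as the image. A small illustrative helper (the function name and the (x1, y1, x2, y2) box format are assumptions, not part of Fast R-CNN's API):

```python
def resize_boxes(boxes, old_size, new_size):
    """Scale absolute (x1, y1, x2, y2) boxes when an image is resized.

    boxes    -- list of (x1, y1, x2, y2) tuples in pixels
    old_size -- (width, height) of the original image
    new_size -- (width, height) of the resized image
    """
    sx = new_size[0] / old_size[0]  # horizontal scale factor
    sy = new_size[1] / old_size[1]  # vertical scale factor
    return [(x1 * sx, y1 * sy, x2 * sx, y2 * sy) for x1, y1, x2, y2 in boxes]

# An image resized from 1000x800 to 500x400 halves every coordinate:
print(resize_boxes([(100, 80, 300, 240)], (1000, 800), (500, 400)))
# [(50.0, 40.0, 150.0, 120.0)]
```

            Note that the horizontal and vertical factors can differ if the resize changes the aspect ratio, so x- and y-coordinates must be scaled separately.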

            Source https://stackoverflow.com/questions/67772139

            QUESTION

            my own implementation of FastRCNN cannot perform well on balanced data
            Asked 2020-Jun-09 at 06:06

            2020.06.09

            There are 700 images for training, each of which contributes 64 RoIs to a mini-batch. With the batch size set to 2 images, it takes 350 steps to complete training. For R-CNN, by contrast, each target is extracted as a single image resized to 224*224, so there are 64*700 = 44800 images, each of which contains more information and features than a 7*7 pooled feature map, and I guess that's why it seems to under-fit even though R-CNN can be trained well on the same data.

            ==========================================================================

            With fully balanced data, accuracy drops to 0.53 (on the training data)

            ...

            ANSWER

            Answered 2020-Jun-09 at 06:06

            Damn, now I know what problem it is:

            In ROI_Pooling.py:
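            For context, the operation a file like ROI_Pooling.py implements is RoI max pooling: each region of interest is divided into a fixed grid, and each grid cell is max-pooled. A minimal pure-Python sketch (illustrative only, not the poster's actual code; Fast R-CNN uses a 7x7 output grid, a 2x2 grid is shown here for brevity):

```python
def roi_max_pool(feature_map, roi, out_size=2):
    """Max-pool one RoI of a 2-D feature map into an out_size x out_size grid.

    feature_map -- list of lists (H x W)
    roi         -- (x1, y1, x2, y2) in feature-map coordinates, end-exclusive
    """
    x1, y1, x2, y2 = roi
    h, w = y2 - y1, x2 - x1
    pooled = []
    for i in range(out_size):
        row = []
        for j in range(out_size):
            # integer sub-window boundaries for grid cell (i, j)
            ys, ye = y1 + (i * h) // out_size, y1 + ((i + 1) * h) // out_size
            xs, xe = x1 + (j * w) // out_size, x1 + ((j + 1) * w) // out_size
            row.append(max(feature_map[y][x]
                           for y in range(ys, ye) for x in range(xs, xe)))
        pooled.append(row)
    return pooled

fm = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 map with values 0..15
print(roi_max_pool(fm, (0, 0, 4, 4)))  # [[5, 7], [13, 15]]
```

            A common source of bugs in such code is how the sub-window boundaries are rounded: off-by-one errors there silently drop or duplicate feature-map cells.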

            Source https://stackoverflow.com/questions/62239980

            Community Discussions and Code Snippets contain sources from the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install fast-rcnn

            We'll call the directory that you cloned Fast R-CNN into FRCN_ROOT.
            1. Clone the Fast R-CNN repository. Ignore notes 1 and 2 if you cloned with the --recursive flag.
               Note 1: If you didn't clone Fast R-CNN with the --recursive flag, then you'll need to manually clone the caffe-fast-rcnn submodule:
               git submodule update --init --recursive
               Note 2: The caffe-fast-rcnn submodule needs to be on the fast-rcnn branch (or equivalent detached state). This will happen automatically if you follow these instructions.
            2. Build the Cython modules:
               cd $FRCN_ROOT/lib
               make
            3. Build Caffe and pycaffe:
               cd $FRCN_ROOT/caffe-fast-rcnn
               # Now follow the Caffe installation instructions here:
               # http://caffe.berkeleyvision.org/installation.html
               # If you're experienced with Caffe and have all of the requirements installed
               # and your Makefile.config in place, then simply do:
               make -j8 && make pycaffe
            4. Download pre-computed Fast R-CNN detectors:
               cd $FRCN_ROOT
               ./data/scripts/fetch_fast_rcnn_models.sh
               This will populate the $FRCN_ROOT/data folder with fast_rcnn_models. See data/README.md for details.
            5. Pre-computed selective search boxes can also be downloaded for VOC2007 and VOC2012. This will populate the $FRCN_ROOT/data folder with selective_search_data.
            6. Pre-trained ImageNet models can be downloaded for the three networks described in the paper: CaffeNet (model S), VGG_CNN_M_1024 (model M), and VGG16 (model L). These models are all available in the Caffe Model Zoo, but are provided here for your convenience.

            Support

            For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            CLONE
          • HTTPS

            https://github.com/rbgirshick/fast-rcnn.git

          • CLI

            gh repo clone rbgirshick/fast-rcnn

          • sshUrl

            git@github.com:rbgirshick/fast-rcnn.git
