faster-rcnn | some tricks on faster-rcnn to improve performance | Machine Learning library

by chenzx921020 | Python Version: Current | License: No License

kandi X-RAY | faster-rcnn Summary

faster-rcnn is a Python library typically used in Telecommunications, Media, Advertising, Marketing, Artificial Intelligence, Machine Learning, Deep Learning, PyTorch, and TensorFlow applications. faster-rcnn has no reported bugs or vulnerabilities, but it has low support. However, a build file for faster-rcnn is not available. You can download it from GitHub.

There exist good implementations of Faster R-CNN, yet they lack support for recent ConvNet architectures. The aim of reproducing it from scratch is to fully utilize MXNet engines and parallelization for object detection.
[1] On Ubuntu 14.04.5 with a Titan X device, cuDNN enabled. The experiment is VGG-16 end-to-end training.
[2] VGG network. Trained end-to-end on VOC07trainval+12trainval, tested on VOC07 test.
[3] VGG network. Fast R-CNN is the most memory-expensive process.
[4] VGG network (parallelization limited by bandwidth). ResNet-101 speeds up from 2 img/s to 3.5 img/s.
[5] py-faster-rcnn does not support ResNet or a recent Caffe version.

            kandi-support Support

              faster-rcnn has a low active ecosystem.
              It has 6 star(s) with 0 fork(s). There are 2 watchers for this library.
              It had no major release in the last 6 months.
              faster-rcnn has no issues reported. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of faster-rcnn is current.

            kandi-Quality Quality

              faster-rcnn has 0 bugs and 198 code smells.

            kandi-Security Security

              faster-rcnn has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              faster-rcnn code analysis shows 0 unresolved vulnerabilities.
              There are 7 security hotspots that need review.

            kandi-License License

              faster-rcnn does not have a standard license declared.
              Check the repository for any license declaration and review the terms closely.
              Without a license, all rights are reserved, and you cannot use the library in your applications.

            kandi-Reuse Reuse

              faster-rcnn releases are not available. You will need to build from source code and install.
faster-rcnn has no build file. You will need to create the build yourself to build the component from source.
              Installation instructions are available. Examples and code snippets are not available.
              It has 4658 lines of code, 273 functions and 53 files.
              It has high code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed faster-rcnn and discovered the below as its top functions. This is intended to give you an instant insight into faster-rcnn implemented functionality, and help decide if they suit your requirements.
            • Forward computation
            • Wrapper for cpu_nms
            • Clips boxes
            • Wrapper for cpu_nms
            • Get a train layer
            • Get convolutional convolution
            • LSC convolutional convolution
            • Train an rpn model
• Train R-CNN
            • Train an RPN model
            • Train RNN model
            • Filter roid bg roi
• Calculate ground-truth image
            • Locate the CUDA module
            • Download one or more images
            • Parse command line arguments
            • Get the network
            • Perform selective search
• Get R-CNN test
            • Get the RPN layer
• Train the R-CNN model
            • Get test for vgg
            • Get resnet
            • Show a set of anns
            • Create a VGG network
            • Get VGG test
            • Get resnet test
            • Generate test boxes

            faster-rcnn Key Features

            No Key Features are available at this moment for faster-rcnn.

            faster-rcnn Examples and Code Snippets

            No Code Snippets are available at this moment for faster-rcnn.

            Community Discussions

            QUESTION

            Failing During Training MobileNetSSD Object Detection on a Custom Dataset Google Colab
            Asked 2022-Apr-07 at 16:25

            I'm following a Google Colab guide from Roboflow to train the MobileNetSSD Object detection model from Tensorflow on a custom dataset. Here is the link to the colab guide: https://colab.research.google.com/drive/1wTMIrJhYsQdq_u7ROOkf0Lu_fsX5Mu8a

            The data set is the example set from the Roboflow website called "Chess sample" which everyone who registers an account on the website gets in their workspace folder. Here is the link to get that setup: https://blog.roboflow.com/getting-started-with-roboflow/

When following the Colab, all steps run completely fine until the "Train the model" step. The following message is printed:

            ...

            ANSWER

            Answered 2022-Apr-07 at 16:25

            Yes, indeed - downgrading numpy will solve the issue - we saw this same bug in the Roboflow Faster RCNN tutorial. These new installs are now present in the MobileNet SSD Roboflow tutorial notebook.

            Source https://stackoverflow.com/questions/71780212

            QUESTION

            Not able to switch off batch norm layers for faster-rcnn (PyTorch)
            Asked 2022-Mar-18 at 15:15

            I'm trying to switch off batch norm layers in a faster-rcnn model for evaluation mode.

            I'm doing a sanity check atm:

            ...

            ANSWER

            Answered 2022-Mar-18 at 15:15

So, after further investigation and after printing out all the modules of the faster-rcnn model, it turns out that the pretrained model uses FrozenBatchNorm2d instead of BatchNorm2d.

            Furthermore, unlike what's currently stated by the documentation, you must call torchvision.ops.misc.FrozenBatchNorm2d instead of torchvision.ops.FrozenBatchNorm2d.

Additionally, as the layers are already frozen, there is no need to "switch off" these layers; thus model.eval() is probably not required.
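As a rough illustration (not from the original answer), here is a minimal sketch of how one might confirm which normalization layers the pretrained detector actually contains; the specific constructor and the pretrained flag are assumptions about the poster's setup:

import torch
import torchvision
from torchvision.ops.misc import FrozenBatchNorm2d

# Assumed setup: a pretrained torchvision Faster R-CNN (newer torchvision uses weights="DEFAULT")
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

# Count normalization layers by type to see that they are FrozenBatchNorm2d
frozen = sum(isinstance(m, FrozenBatchNorm2d) for m in model.modules())
plain = sum(isinstance(m, torch.nn.BatchNorm2d) for m in model.modules())
print(f"FrozenBatchNorm2d layers: {frozen}, BatchNorm2d layers: {plain}")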

            Source https://stackoverflow.com/questions/71521735

            QUESTION

            Unable to load pre-trained model checkpoint with TensorFlow Object Detection API
            Asked 2021-Apr-17 at 10:33

            Similar to this question:

            Where can I find model.ckpt in faster_rcnn_resnet50_coco model? (this solution doesn't work for me)

            I have downloaded the ssd_resnet152_v1_fpn_1024x1024_coco17_tpu-8 with the intention of using it as a starting point. I am using the sample model configuration associated with that model in the TF model zoo.

            I am only changing the num classes and paths for tuning, training and eval.

            With:

            ...

            ANSWER

            Answered 2021-Apr-17 at 10:33

            Try changing the fine_tune_checkpoint path in the config file to something like path_to_folder/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/checkpoint/ckpt-0

            And in your training command, set the model_dir flag to just point to the model directory, don't include training, kind of like --model_dir=/ssd_resnet152_v1_fpn_1024x1024_coco17_tpu-8

            Source

Just change the backslashes to forward slashes, since you're on Windows.

            Source https://stackoverflow.com/questions/67118189

            QUESTION

            How to train faster-rcnn on dataset including negative data in pytorch
            Asked 2021-Feb-05 at 12:09

I am trying to train the torchvision Faster R-CNN model for object detection on my custom data. I used the code from the torchvision object detection fine-tuning tutorial, but I am getting this error:

            ...

            ANSWER

            Answered 2021-Feb-05 at 12:09

            We need to make two changes to the Dataset Class.

            1- Empty boxes are fed as:
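The original snippet is not included in this excerpt. Purely as an illustration of the usual pattern for negative (background-only) images in a torchvision detection Dataset, and not necessarily the answerer's exact code, the target tensors are typically returned with zero-length shapes:

import torch

def empty_target(image_id):
    # Hypothetical helper: target dict for an image with no annotated objects
    return {
        "boxes": torch.zeros((0, 4), dtype=torch.float32),   # no boxes, shape (0, 4)
        "labels": torch.zeros((0,), dtype=torch.int64),      # no labels
        "image_id": torch.tensor([image_id]),
        "area": torch.zeros((0,), dtype=torch.float32),
        "iscrowd": torch.zeros((0,), dtype=torch.int64),
    }

print(empty_target(0))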

            Source https://stackoverflow.com/questions/66063046

            QUESTION

            What's the function of “keep_aspect_ratio_resizer {” in the config file of Tensorflow Object Detection API?
            Asked 2021-Jan-25 at 09:26

            I use the Tensorflow Object Detection API to create an AI for Faster-RCNN. GitHub:Tensorflow/models

            What kind of resizing function does "keep_aspect_ratio_resizer {" in the config file have?

I prepared images of 1920 x 1080 pixels and set the "min_dimension:" and "max_dimension:" fields that appear immediately after "keep_aspect_ratio_resizer {" in the config file to 768 each.

In this case, the 1920x1080 pixel image would be resized to 768x768 pixels and fed to the CNN. At this point, will the original 16:9 ratio of the image be maintained? Namely, when the image is resized to 768x768 pixels, will the long side of the image be converted to 768 pixels and black bars be added in the margins of the image?

Or, with this setting, does the image ratio change from 16:9 to 1:1, distorting the image?

            If anyone knows about this, please let me know.

            Thank you!

            ...

            ANSWER

            Answered 2021-Jan-25 at 09:26

The definitions of the different fields of the configuration files can be found by following this link: https://github.com/tensorflow/models/tree/master/research/object_detection/protos

The keep_aspect_ratio_resizer field is in image_resizer.proto and states the following:
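The quoted proto comment is not reproduced in this excerpt. As a hedged sketch of how the aspect-ratio-preserving resize is commonly understood to behave (an interpretation, not the proto text): the image is scaled by a single factor so that the short side reaches min_dimension unless that would push the long side past max_dimension, so a 16:9 image stays 16:9 and is only padded to a square if pad_to_max_dimension is enabled.

def keep_aspect_ratio_size(height, width, min_dimension, max_dimension):
    # One scale factor for both sides, so the aspect ratio is preserved
    scale = min(min_dimension / min(height, width),
                max_dimension / max(height, width))
    return round(height * scale), round(width * scale)

# With the asker's settings: a 1920x1080 image and min/max dimension of 768
print(keep_aspect_ratio_size(1080, 1920, 768, 768))  # -> (432, 768), still 16:9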

            Source https://stackoverflow.com/questions/65877638

            QUESTION

            Computing gradients of a multi-output model in Keras giving conversion to Tensorflow DType error
            Asked 2020-Sep-10 at 02:05

I have a multi-output model in Keras (18 outputs to be precise), with a loss function for each output. I am trying to mimic the Region Proposal Network in faster-RCNN. Before training I want to make sure the gradients of my model are in order, where I have a snippet as follows:

            ...

            ANSWER

            Answered 2020-Sep-10 at 02:05

The issue I was having is that the return value described here:

            Return Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

is not a tensor. Similarly, model.predict() will not work, since the result is a numpy array, which breaks the gradient computation. To compute the gradients, the loop works if I instead simply call the model on the test input data and then compute the loss function with respect to the ground-truth values, i.e.
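The original snippet is not shown here. As a rough sketch of that idea under stated assumptions (a tiny hypothetical multi-output model rather than the poster's RPN): call the model directly so the outputs stay tensors, then differentiate the summed loss with tf.GradientTape.

import tensorflow as tf

# Hypothetical stand-in for a multi-output model (the real RPN has 18 outputs)
inputs = tf.keras.Input(shape=(8,))
out_a = tf.keras.layers.Dense(1, name="out_a")(inputs)
out_b = tf.keras.layers.Dense(1, name="out_b")(inputs)
model = tf.keras.Model(inputs, [out_a, out_b])

loss_fns = [tf.keras.losses.MeanSquaredError(), tf.keras.losses.MeanSquaredError()]
x = tf.random.normal((4, 8))
y_true = [tf.random.normal((4, 1)), tf.random.normal((4, 1))]

with tf.GradientTape() as tape:
    y_pred = model(x, training=False)   # calling the model keeps the outputs as tensors
    loss = tf.add_n([fn(t, p) for fn, t, p in zip(loss_fns, y_true, y_pred)])

grads = tape.gradient(loss, model.trainable_variables)
print([g.shape for g in grads])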

            Source https://stackoverflow.com/questions/63749697

            QUESTION

            Weight & Biases Detectron2 Google Colab - wandb: ERROR Unable to log event [Errno 95] Operation not supported
            Asked 2020-Sep-07 at 21:31

I am training a Faster-RCNN model with Detectron2 in Google Colab. I would like to track my experiments with Weights and Biases (WandB).

            My dataset is uploaded to Google Drive and mounted to the session via:

            ...

            ANSWER

            Answered 2020-Sep-07 at 21:31

            After one week the problem disappeared. I assume that someone must have fixed the bug that caused this issue. I can now use:

            Source https://stackoverflow.com/questions/63661337

            QUESTION

Encoded and decoded versions of bounding box regression offsets are different
            Asked 2020-Aug-21 at 08:23

I'm trying to replicate the bounding box regression technique used in faster-rcnn as given here. I've made a decoding function and an encoding function. Ideally, when passing a bounding box to the encoder and then decoding it, I should get the same bounding box.

            Here, are my input bounding boxes:

            ...

            ANSWER

            Answered 2020-Aug-21 at 08:23

            The problem was in my decode function in calculating [x_min, y_min, x_max, y_max]. It should have been like this:
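The corrected snippet is not included in this excerpt. As a sketch of the standard Faster R-CNN decoding (the variable names here are illustrative, not the asker's), the offsets are first turned back into a centre/size box and only then into [x_min, y_min, x_max, y_max]:

import numpy as np

def decode(anchors, offsets):
    # anchors: (N, 4) as [x_min, y_min, x_max, y_max]; offsets: (N, 4) as [tx, ty, tw, th]
    wa = anchors[:, 2] - anchors[:, 0]
    ha = anchors[:, 3] - anchors[:, 1]
    xa = anchors[:, 0] + 0.5 * wa
    ya = anchors[:, 1] + 0.5 * ha

    # offsets -> centre/size of the decoded box
    x = offsets[:, 0] * wa + xa
    y = offsets[:, 1] * ha + ya
    w = np.exp(offsets[:, 2]) * wa
    h = np.exp(offsets[:, 3]) * ha

    # centre/size -> corners; this is the step that is easy to get wrong
    return np.stack([x - 0.5 * w, y - 0.5 * h, x + 0.5 * w, y + 0.5 * h], axis=1)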

            Source https://stackoverflow.com/questions/63434199

            QUESTION

            Pytorch Faster R-CNN size mismatch errors in testing
            Asked 2020-Jun-08 at 03:36

Hi there!

When running test_net.py in pytorch1.0 Faster R-CNN and demo.py on the coco dataset with faster_rcnn_1_10_9771.pth (the pretrained resnet101 model on the coco dataset provided by jwyang), I encounter the same errors below:

            ...

            ANSWER

            Answered 2020-Jun-08 at 03:36

            It says your model doesn't fit the pre-trained parameters you want to load.

            Maybe check the model you're using and the .pth file and find out if they match or what.

            Or post the code of your model and let's see what's going wrong.

            Source https://stackoverflow.com/questions/62247674

            QUESTION

            Index out of range error while training dataset
            Asked 2020-Mar-26 at 03:02

            I am trying to train MaskRCNN to detect and segment apples using the dataset from this paper,

            github link to code being used

I am simply following the instructions as provided in the README file.

            Here is the output on console

            ...

            ANSWER

            Answered 2020-Mar-11 at 18:39

The error is telling you everything: you are trying to access an index of the list self.masks that does not exist. The issue is in this line: mask_path = os.path.join(self.root_dir, "masks", self.masks[idx]). You need to check the value of idx every time it is passed; only then can you figure out the problem.
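A small, hypothetical sketch of the kind of check being suggested (the root_dir and masks attributes follow the fine-tuning tutorial's Dataset class, but this toy class is not the asker's code):

import os

class MaskChecker:
    # Toy stand-in for the tutorial's Dataset, just to show the bounds check
    def __init__(self, root_dir, masks):
        self.root_dir = root_dir
        self.masks = masks

    def mask_path(self, idx):
        print(f"idx={idx}, number of masks={len(self.masks)}")
        if idx >= len(self.masks):
            raise IndexError(f"idx {idx} out of range: only {len(self.masks)} masks")
        return os.path.join(self.root_dir, "masks", self.masks[idx])

checker = MaskChecker("data", ["apple_0.png", "apple_1.png"])
print(checker.mask_path(1))   # fine; an out-of-range idx now fails with a clear message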

            Source https://stackoverflow.com/questions/60627275

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install faster-rcnn

            See if bash script/additional_deps.sh will do the following for you. Command line arguments have the same meaning as in mxnet/example/image-classification.
            Suppose HOME represents where this file is located. All commands, unless stated otherwise, should be started from HOME.
Install the Python packages cython, easydict, matplotlib, and scikit-image.
Install MXNet version v0.9.5 or higher and the MXNet Python interface. Open Python and type import mxnet to confirm.
            Run make in HOME.
            prefix refers to the first part of a saved model file name and epoch refers to a number in this file name. In model/vgg-0000.params, prefix is "model/vgg" and epoch is 0.
            begin_epoch means the start of your training process, which will apply to all saved checkpoints.
Remember to turn off cuDNN auto-tune: export MXNET_CUDNN_AUTOTUNE_DEFAULT=0 (a combined check is sketched below).
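As a small sketch combining the confirmation step and the auto-tune setting above (setting the variable from Python before the first import is an alternative to the shell export):

import os

# Disable cuDNN auto-tune before MXNet is imported, mirroring the export above
os.environ["MXNET_CUDNN_AUTOTUNE_DEFAULT"] = "0"

import mxnet as mx

# Confirm the MXNet Python interface is installed and check its version
print(mx.__version__)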

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask questions on the Stack Overflow community page.
            CLONE
          • HTTPS

            https://github.com/chenzx921020/faster-rcnn.git

          • CLI

            gh repo clone chenzx921020/faster-rcnn

          • sshUrl

            git@github.com:chenzx921020/faster-rcnn.git

