maskrcnn-benchmark | modular reference implementation of Instance Segmentation | Computer Vision library
kandi X-RAY | maskrcnn-benchmark Summary
Fast, modular reference implementation of Instance Segmentation and Object Detection algorithms in PyTorch.
Top functions reviewed by kandi - BETA
- Process a single image
- Decode the image
- The area of the bounding box
- Convert a tensor
- Convert a Cityscapes instance
- Convert xyxy box coordinates to xywh
- Convert a polygon to box coordinates
- Make data loader
- Build a dataset from a list of datasets
- Train the detection model
- Evaluate a model on a dataset
- Returns a list of C++ extensions
- Compute predictions for the given boxes
- Compute the classification
- Selects all boxes in the given boxes
- Transpose the image
- Resizes the bounding box
- Add the last layer
- Creates a 3x3 convolutional module
- Subsample the given proposals
- Forward convolution
- Forward feature extraction
- Given a list of image files and a list of instance ids return a dictionary of instances
- Compute features for a single feature map
- Match the predictions with the given predictions
- Forward convolution function
- Runs an inference on the given dataset
maskrcnn-benchmark Key Features
maskrcnn-benchmark Examples and Code Snippets
import torch
from torch.autograd import Function
from torch.autograd.function import once_differentiable
from torch.onnx.symbolic_opset9 import unsqueeze
from torch.onnx.symbolic_helper import parse_args
class NonMaxSuppression(Function):
    @staticmethod
import os
import torch
import tensorrt as trt
from PIL import Image
import numpy as np
import common
from tools.convert_model import conver_engine
import time
import cv2
import glob
TRT_LOGGER = trt.Logger(trt.Logger.ERROR)
if __name__ == "__main__":
float* a = (float*)malloc(20 * 4 * sizeof(float));
cudaMemcpy(a, locData, 20 * 4 * sizeof(float), cudaMemcpyDeviceToHost);
for (int i = 0; i < 20; i++) {
    for (int j = 0; j < 4; j++) {
        std::cout << a[i * 4 + j] << " ";
    }
    std::cout << std::endl;
}
Community Discussions
Trending Discussions on maskrcnn-benchmark
QUESTION
I am using the Faster R-CNN model available from https://github.com/facebookresearch/maskrcnn-benchmark. I am trying to evaluate a trained model on the KITTI dataset after converting it to COCO format (2D object detection).
The results are 0 or -1, and sometimes the COCO API toolkit throws an error at g["area"]:
pycocotools: if g['ignore'] or (g['area']<aRng[0] or g['area']>aRng[1]): raises "KeyError: 'area'"
From what I found while researching the problem, "area" is used for segmentation, and I do not have that kind of annotation in my dataset.
A small example of how my converted annotation file looks:
...ANSWER
Answered 2020-Mar-22 at 04:35
According to section 1. Detection Evaluation of the official COCO documentation, AP is also evaluated by area. Therefore, if there is no area field in your own custom dataset, an error will occur in the corresponding part of the code in site-packages/pycocotools/cocoeval.py.
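A minimal sketch of the fix the answer implies: when converting KITTI 2D boxes to COCO format, populate the "area" field that pycocotools expects, using box width times height. The field names follow the COCO annotation spec; the function name and the `kitti_boxes` input (a list of [x1, y1, x2, y2] boxes) are illustrative assumptions, not part of either toolkit.

```python
# Hypothetical converter sketch: add the "area" field pycocotools needs.
def to_coco_annotations(kitti_boxes, image_id, category_id=1):
    annotations = []
    for ann_id, (x1, y1, x2, y2) in enumerate(kitti_boxes):
        w, h = x2 - x1, y2 - y1
        annotations.append({
            "id": ann_id,
            "image_id": image_id,
            "category_id": category_id,
            "bbox": [x1, y1, w, h],  # COCO uses [x, y, width, height]
            "area": w * h,           # required by cocoeval.py area-range filtering
            "iscrowd": 0,
        })
    return annotations
```

With "area" present on every annotation, the area-range check in cocoeval.py no longer raises KeyError.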
QUESTION
I am working on a repo that makes use of the maskrcnn_benchmark repo. I have explored the benchmarking repo extensively to find the cause of its slower performance on a CPU relative to the reference results (the original link is missing from this question).
To benchmark the individual forward passes, I put a time counter around each part, which gives me the time required to compute each component. I have had a tough time pinpointing the slowest component of the entire architecture. I believe it to be the BottleneckWithFixedBatchNorm class in the maskrcnn_benchmark/modeling/backbone/resnet.py file.
I would really appreciate any help in locating the biggest bottleneck in this architecture.
...ANSWER
Answered 2019-Nov-24 at 19:27
I have faced the same problem. The best solution is to look inside the main code, go through the forward pass of each module, and set up a timer to log the time spent in each module's computations. Our approach was to add a time logger to each class, so that every instance logs its own execution time. After thorough comparison, at least in our case, we found that the delay was caused by the depth of the ResNet module (which, given ResNet's computational cost, is not surprising). The only remedy is more parallelization, so either use a bigger GPU or reduce the depth of the ResNet network.
I should note that maskrcnn_benchmark has been deprecated and an updated version is available as detectron2. Consider migrating your code for significant speed improvements.
BottleneckWithFixedBatchNorm is not the most expensive operation in the architecture and, despite its name, it is not the bottleneck. The class is not especially computationally expensive and is computed in parallel even on a lower-end CPU machine (at least at inference time).
An example of tracking the performance of each module can be built from the code at maskrcnn_benchmark/modeling/backbone/resnet.py
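The per-module timing the answer describes can be sketched as a small wrapper that records wall-clock time for every call. This is an illustrative pattern, not maskrcnn_benchmark API; the `timed` and `timings` names are assumptions.

```python
import time
from collections import defaultdict

# Accumulates call durations per module name, e.g. timings["backbone"].
timings = defaultdict(list)

def timed(name, fn):
    """Wrap any callable (such as a module's forward) to log its wall time."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        out = fn(*args, **kwargs)
        timings[name].append(time.perf_counter() - start)
        return out
    return wrapper
```

In practice one might wrap each stage, e.g. `model.backbone.forward = timed("backbone", model.backbone.forward)`, run inference, then sum `timings` per key to find the slowest stage.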
QUESTION
I have a Python project where I am using the maskrcnn_benchmark project from Facebook Research.
In my continuous integration script, I create a virtual environment where I install this project with the following steps:
...ANSWER
Answered 2019-Mar-13 at 16:46
You can use dependency_links in setup.py, i.e.:
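A sketch of the dependency_links approach in a setup.py, assuming an illustrative project name; note that dependency_links is deprecated in recent pip versions, where PEP 508 direct URL requirements are the modern route.

```python
# setup.py sketch (config fragment, not a definitive recipe):
from setuptools import setup

setup(
    name="my-project",        # illustrative name, an assumption
    version="0.1",
    install_requires=["maskrcnn-benchmark"],
    dependency_links=[
        "git+https://github.com/facebookresearch/maskrcnn-benchmark.git"
        "#egg=maskrcnn-benchmark",
    ],
)
```

With modern pip, the equivalent is a direct URL in install_requires, e.g. `"maskrcnn-benchmark @ git+https://github.com/facebookresearch/maskrcnn-benchmark.git"`.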
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install maskrcnn-benchmark