ssd_keras | A Keras port of Single Shot MultiBox Detector | Machine Learning library
kandi X-RAY | ssd_keras Summary
This is a Keras port of the SSD model architecture introduced by Wei Liu et al. in the paper SSD: Single Shot MultiBox Detector. Ports of the trained weights of all the original models are provided below. This implementation is accurate, meaning that both the ported weights and models trained from scratch produce the same mAP values as the respective models of the original Caffe implementation (see performance section below). The main goal of this project is to create an SSD implementation that is well documented for those who are interested in a low-level understanding of the model. The provided tutorials, documentation and detailed comments hopefully make it a bit easier to dig into the code and adapt or build upon the model than with most other implementations out there (Keras or otherwise) that provide little to no documentation and comments.
Top functions reviewed by kandi - BETA
- Predict to json
- Decode the predictions using the decoder
- Compute the intersection of two boxes
- Convert coordinates from a tensor
- Compute the intersection area between two boxes
- Apply inverse transforms
- Greedy threshold for predictions
- Returns the size of the dataset
- Generate images
- Compute the loss
- Smooth L1 loss
- Compute the log loss
- Decode detections
- Greedy NMS2
- Debugging function for decoder detection
- Implementation of greedy nms_debug
- Generate anchor boxes for a given layer
- Apply the aspect ratio of x
- Generate a sequence of nms that can be used to minimize the prediction
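Several of the listed functions center on box overlap and the SSD localization loss. Minimal, dependency-free sketches of two of them (intersection-over-union between two boxes, and the smooth L1 loss used for box regression) might look like the following; these are illustrative implementations, not the repository's actual code:

```python
def iou(box1, box2):
    """Intersection-over-union of two boxes given as (xmin, ymin, xmax, ymax)."""
    # Width/height of the intersection rectangle; non-overlapping boxes clamp to zero.
    inter_w = max(0.0, min(box1[2], box2[2]) - max(box1[0], box2[0]))
    inter_h = max(0.0, min(box1[3], box2[3]) - max(box1[1], box2[1]))
    inter = inter_w * inter_h
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)

def smooth_l1(x):
    """Smooth L1 (Huber) loss on a scalar residual: 0.5*x^2 if |x| < 1, else |x| - 0.5."""
    ax = abs(x)
    return 0.5 * x * x if ax < 1.0 else ax - 0.5
```

In the actual model these operate on whole tensors of boxes at once, but the per-box arithmetic is the same.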
ssd_keras Key Features
ssd_keras Examples and Code Snippets
Community Discussions
Trending Discussions on ssd_keras
QUESTION
I can't understand SSD's default box implementation. The original paper's formula is below:

w_k = s_k · √(a_k), h_k = s_k / √(a_k)

But many SSD implementations seem to differ from the formula above. For example, ssd.pytorch;
ANSWER
Answered 2020-May-03 at 07:06

I found the answer in a GitHub issue.

UPDATE: min_sizes/img_size and max_sizes/img_size mean s_k and s_{k+1} respectively. Also, conv4_3 applies s_k = 0.1 instead of equation (4). Therefore, equation (4) cannot be applied to all of the feature maps, so I think all of the scales are defined beforehand as min_sizes and max_sizes.
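The mapping described in the answer can be sketched in Python. The min_sizes/max_sizes values below follow the widely used SSD300 configuration from the Caffe and ssd.pytorch implementations; treat them as illustrative, not necessarily this repository's exact defaults:

```python
import math

# Illustrative SSD300 settings (common Caffe/ssd.pytorch configuration).
img_size = 300
min_sizes = [30, 60, 111, 162, 213, 264]
max_sizes = [60, 111, 162, 213, 264, 315]

def default_boxes_for_layer(k, aspect_ratios=(1.0, 2.0, 0.5)):
    """(width, height) of the default boxes for feature map k, in relative units."""
    s_k = min_sizes[k] / img_size   # s_k in the paper's notation
    s_k1 = max_sizes[k] / img_size  # s_{k+1}
    boxes = []
    for a in aspect_ratios:
        # The paper's formula: w_k = s_k * sqrt(a), h_k = s_k / sqrt(a)
        boxes.append((s_k * math.sqrt(a), s_k / math.sqrt(a)))
    # Extra square box for aspect ratio 1 with scale s'_k = sqrt(s_k * s_{k+1})
    s_prime = math.sqrt(s_k * s_k1)
    boxes.append((s_prime, s_prime))
    return boxes
```

For k = 0 this gives s_k = 30/300 = 0.1, matching the answer's point that conv4_3 uses a fixed s_k = 0.1 rather than equation (4).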
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install ssd_keras
Here are the ported weights for all the original trained models. The filenames correspond to their respective .caffemodel counterparts. The asterisks and footnotes refer to those in the README of the original Caffe implementation.
PASCAL VOC models:
- 07+12: SSD300*, SSD512*
- 07++12: SSD300*, SSD512*
- COCO[1]: SSD300*, SSD512*
- 07+12+COCO: SSD300*, SSD512*
- 07++12+COCO: SSD300*, SSD512*

COCO models:
- trainval35k: SSD300*, SSD512*

ILSVRC models:
- trainval1: SSD300*, SSD500