mean_average_precision | simple python/numpy utility | Machine Learning library
kandi X-RAY | mean_average_precision Summary
Small and simple python/numpy utility to compute mean average precision (mAP) on detection task.
Top functions reviewed by kandi - BETA
- Plot a confusion matrix.
- Evaluate the model.
- Plot the mean average precision.
- Compute the intersection of two boxes.
- Intersect two boxes.
- Return the precision of the feature.
- String representation.
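The box-intersection helpers above imply the usual intersection-over-union (IoU) computation used when matching detections to ground truth. A minimal numpy sketch of that idea follows; the function name and the [x1, y1, x2, y2] box format are illustrative, not the library's actual API.

import numpy as np

def iou(box_a, box_b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes (illustrative only).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou(np.array([0, 0, 10, 10]), np.array([5, 5, 15, 15])))  # ~0.143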
mean_average_precision Key Features
mean_average_precision Examples and Code Snippets
def _streaming_sparse_average_precision_at_top_k(labels,
                                                 predictions_idx,
                                                 weights=None,
                                                 metrics_collections=None,
                                                 updates_collections=None,
                                                 name=None):

def average_precision_at_k(labels,
                           predictions,
                           k,
                           weights=None,
                           metrics_collections=None,
                           updates_collections=None,
                           name=None):
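These signatures appear to come from TensorFlow's tf.metrics module. Below is a minimal usage sketch of the public average_precision_at_k under the TF1 compat API; the labels, scores, and value of k are illustrative.

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

labels = tf.constant([[2]], dtype=tf.int64)        # ids of the relevant classes per example
predictions = tf.constant([[0.1, 0.2, 0.7]])       # one score per class
metric, update_op = tf.metrics.average_precision_at_k(labels, predictions, k=2)

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())     # streaming metrics use local variables
    sess.run(update_op)                            # accumulate this batch
    print(sess.run(metric))                        # AP@2 for the single example above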
Community Discussions
Trending Discussions on mean_average_precision
QUESTION
I'm using the 'Matchzoo' text retrieval library, which is based on Keras. I want to use the trained result, but it's a dictionary and I can't get the values with the shown keys.
After training the model,
...ANSWER
Answered 2020-Dec-09 at 11:49 As you said, it could be because they are not strings. If you are sure that's the order they are coming in, try indexing through the dictionary using the keys as shown below.
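The answer's original code snippet is not included in this excerpt; the following is a rough sketch of the suggested approach, where the result dictionary and its metric-object key are hypothetical stand-ins for MatchZoo's actual objects.

class MeanAveragePrecision:                 # dummy stand-in for a metric-object key
    def __repr__(self):
        return 'mean_average_precision(0.0)'

result = {MeanAveragePrecision(): 0.65}     # hypothetical evaluation result dictionary

keys = list(result.keys())
print([type(k) for k in keys])              # confirm the keys are objects, not strings

value = result[keys[0]]                     # index by position if the order is known

# or match a key by its string representation
for key, val in result.items():
    if 'average_precision' in repr(key).lower():
        value = val
print(value)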
QUESTION
Mean average precision computed at k (for the top-k elements in the answer), according to the wiki, the ml_metrics package on Kaggle, and this answer (Confusion about (Mean) Average Precision), should be computed as the mean of the average precisions at k, where average precision at k is:

AP@k = (1 / min(k, number of relevant documents)) * sum_{i=1..k} P(i) * rel(i)

where P(i) is the precision at cut-off i in the list, and rel(i) is an indicator function equal to 1 if the item at rank i is a relevant document and zero otherwise. The divisor min(k, number of relevant documents) is the maximum possible number of relevant entries in the top-k answer.
Is this understanding correct?
Is MAP@k always less than MAP computed for the full ranked list?
My concern is that this is not how MAP@k is computed in many works. Typically, the divisor is not min(k, number of relevant documents) but the number of relevant documents found in the top-k. This approach gives a higher value of MAP@k.
"HashNet: Deep Learning to Hash by Continuation" (ICCV 2017)
Code: https://github.com/thuml/HashNet/blob/master/pytorch/src/test.py#L42-L51
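To make the concern concrete, here is a small numpy sketch contrasting the two divisors; this is illustrative code, not taken from the repository linked above.

import numpy as np

def ap_at_k(rel, k, num_relevant_total, divisor='min'):
    # rel: binary relevance of the ranked list (1 = relevant), truncated to the top-k.
    rel = np.asarray(rel[:k], dtype=float)
    precisions = np.cumsum(rel) / (np.arange(len(rel)) + 1)   # P(i) at each cut-off
    hits = precisions * rel                                    # keep P(i) only where rel(i) = 1
    if divisor == 'min':
        denom = min(k, num_relevant_total)     # wiki / ml_metrics definition
    else:
        denom = rel.sum()                      # relevant docs found in the top-k (the variant above)
    return hits.sum() / denom if denom > 0 else 0.0

ranked = [1, 0, 1, 0, 0]                       # assume 6 relevant documents exist overall
print(ap_at_k(ranked, 5, 6, divisor='min'))    # 0.333... (divides by min(5, 6) = 5)
print(ap_at_k(ranked, 5, 6, divisor='topk'))   # 0.833... (divides by the 2 hits in the top-5)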
...ANSWER
Answered 2019-Mar-03 at 12:08 You are completely right, and well done for finding this. Given the similarity of the code, my guess is that there is one source bug, and then paper after paper copied the bad implementation without examining it closely.
The "akturtle" issue raiser is completely right too; I was going to give the same example. I'm not sure whether "kunhe" understood the argument; of course recall matters when computing average precision.
Yes, the bug should inflate the numbers. I just hope that the ranking lists are long enough and that the methods are reasonable enough such that they achieve 100% recall in the ranked list, in which case the bug would not affect the results.
Unfortunately it's hard for reviewers to catch this, as typically one doesn't review the code of papers. It's worth contacting the authors to get them to update the code, update their papers with the correct numbers, or at least not repeat the mistake in their future work. If you are planning to write a paper comparing different methods, you could point out the problem and report the correct numbers (as well as, potentially, the ones with the bug, just to keep the comparison apples-to-apples).
To answer your side-question:
Is MAP@k always less than MAP computed for the full ranked list?
Not necessarily; MAP@k essentially computes the MAP while normalizing for the case where you can't do any better given only k retrievals. E.g., consider a returned ranked list with relevances: 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1, and assume there are 6 relevant documents in total. MAP should be somewhat higher than 50% here, while MAP@3 = 100%, because you cannot do any better than retrieving 1 1 1. But this is unrelated to the bug you discovered: with their bug, the MAP@k is guaranteed to be at least as large as the true MAP@k.
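A quick numeric check of that example, as a self-contained numpy snippet (the relevance list is the one given above):

import numpy as np

# Relevance list from the example; 6 relevant documents exist in total.
rel = np.array([1, 1, 1] + [0] * 14 + [1, 1, 1], dtype=float)
precisions = np.cumsum(rel) / (np.arange(len(rel)) + 1)

ap_full = (precisions * rel).sum() / 6                    # full-list AP, divide by all 6 relevant
ap_at_3 = (precisions[:3] * rel[:3]).sum() / min(3, 6)    # AP@3, divide by min(k, 6)

print(round(ap_full, 3))   # ~0.631 -> above 50%, as stated
print(ap_at_3)             # 1.0    -> cannot beat 1 1 1 in the top-3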
QUESTION
I am trying to write code to compute the Mean Average Precision (MAP) for multi-label data. To give a more intuitive understanding, please see below.
I have written the code for the MAP computation in MATLAB, but it is quite slow, essentially because of the computation of the variable Lrx for each value of r.
I want to make my code much faster.
...ANSWER
Answered 2018-Jan-04 at 17:14 With the delta function in APx(i) = sum(Px.*deltax)/Lx, you are throwing away some proportion of your r = 1:R iterations. Since deltax can be defined before the loop, why not iterate only over the r where deltax(r) == 1?
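The asker's MATLAB code is not reproduced in this excerpt; below is a rough numpy sketch of the same idea, where the names Px, deltax, and Lx mirror the answer's variables and everything else is illustrative.

import numpy as np

# deltax: binary relevance of each ranked position (1 = relevant).
deltax = np.array([1, 0, 1, 0, 0, 1], dtype=float)
Px = np.cumsum(deltax) / (np.arange(len(deltax)) + 1)   # precision at each position
Lx = deltax.sum()                                        # number of relevant items

# Instead of looping over every r in 1:R (positions with deltax == 0 contribute nothing),
# only the positions where deltax == 1 are used; here the sum is fully vectorized.
APx = Px[deltax == 1].sum() / Lx
print(APx)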
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install mean_average_precision
You can use mean_average_precision like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip, it is generally recommended to install packages in a virtual environment to avoid making changes to the system.