mean_average_precision | simple python/numpy utility | Machine Learning library

 by MathGaron | Python | Version: Current | License: MIT

kandi X-RAY | mean_average_precision Summary

mean_average_precision is a Python library typically used in Artificial Intelligence, Machine Learning, and NumPy applications. It has no reported bugs or vulnerabilities, includes a build file, carries a permissive license, and has low support. You can download it from GitHub.

A small and simple Python/NumPy utility to compute mean average precision (mAP) on detection tasks.
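To illustrate what such a utility computes, here is a minimal, self-contained NumPy sketch of average precision for a single class from scored detections. This is illustrative only; the function name and signature are my own and not necessarily this library's actual API:

```python
import numpy as np

def average_precision(scores, is_true_positive, n_ground_truth):
    """Compute average precision for one detection class.

    scores           : confidence score of each detection
    is_true_positive : 1 if the detection matched a ground-truth box, else 0
    n_ground_truth   : total number of ground-truth boxes for this class
    """
    order = np.argsort(scores)[::-1]                  # rank detections by confidence
    tp = np.asarray(is_true_positive, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    precision = cum_tp / (np.arange(tp.size) + 1)     # precision at each rank
    recall = cum_tp / n_ground_truth                  # recall at each rank
    # integrate precision over recall (all-points integration)
    ap = 0.0
    prev_recall = 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap
```

mAP is then simply the mean of these per-class AP values.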

            kandi-support Support

              mean_average_precision has a low active ecosystem.
              It has 104 stars and 44 forks. There are 4 watchers for this library.
              It has had no major release in the last 6 months.
              There are 4 open issues, and 13 issues have been closed. On average, issues are closed in 10 days. There are 3 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of mean_average_precision is current.

            kandi-Quality Quality

              mean_average_precision has 0 bugs and 0 code smells.

            kandi-Security Security

              mean_average_precision has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              mean_average_precision code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              mean_average_precision is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              mean_average_precision releases are not available. You will need to build from source code and install.
              Build file is available. You can build the component from source.
              Installation instructions are not available. Examples and code snippets are available.
              mean_average_precision saves you 132 person hours of effort in developing the same functionality from scratch.
              It has 331 lines of code, 27 functions and 9 files.
              It has high code complexity, which directly impacts the maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed mean_average_precision and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality mean_average_precision implements, and to help you decide if it suits your requirements.
            • Plot a confusion matrix.
            • Evaluate the model.
            • Plot the mean average precision.
            • Compute the intersection of two boxes.
            • Intersect two boxes.
            • Return the precision of the feature.
            • String representation.

            mean_average_precision Key Features

            No Key Features are available at this moment for mean_average_precision.

            mean_average_precision Examples and Code Snippets

            Calculate the average precision at the top k .
            Python · Lines of Code: 91 · License: Non-SPDX (Apache License 2.0)
            def _streaming_sparse_average_precision_at_top_k(labels,
                                                             predictions_idx,
                                                             weights=None,
                                                             metrics_collections=None,
                                                             updates_collections=None,
                                                             name=None):
                ...
            Calculate the average precision .
            Python · Lines of Code: 82 · License: Non-SPDX (Apache License 2.0)
            def average_precision_at_k(labels,
                                       predictions,
                                       k,
                                       weights=None,
                                       metrics_collections=None,
                                       updates_collections=None,
                                       name=None):
                ...

            Community Discussions

            QUESTION

            I can't get a value with key in python dictionary
            Asked 2020-Dec-09 at 11:49

            I'm using the 'MatchZoo' text retrieval library, which is based on Keras. I want to use the trained result, but it's a dictionary and I can't get the values using the keys that are shown.

            After training the model,

            ...

            ANSWER

            Answered 2020-Dec-09 at 11:49

            As you said, it could be because the keys are not strings. If you are sure that's the order they come in, try indexing through the dictionary using its key objects directly.
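For instance (a generic sketch with made-up names, not the actual MatchZoo result object), when a dict's keys are objects whose printed form merely looks like a string, `d["acc"]` fails, but you can recover the key objects themselves and index with those:

```python
# Hypothetical example: keys are objects, so results["acc"] raises KeyError
# even though the printed keys look like plain strings.
class Metric:
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return self.name

results = {Metric("acc"): 0.91, Metric("loss"): 0.23}

# Index through the key objects instead of string literals:
keys = list(results.keys())
value = results[keys[0]]     # first key in insertion order -> 0.91

# Or build a lookup keyed by each key's printed name:
by_name = {str(k): v for k, v in results.items()}
print(by_name["acc"])        # now string lookup works
```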

            Source https://stackoverflow.com/questions/65214416

            QUESTION

            MAP@k computation
            Asked 2019-Mar-03 at 12:08

            Mean average precision computed at k (for the top-k elements in the answer), according to wiki, ml metrics at kaggle, and this answer: Confusion about (Mean) Average Precision, should be computed as the mean of the average precisions at k, where average precision at k is computed as:

            AP@k = (1 / min(k, R)) * sum over i = 1..k of P(i) * rel(i)

            where P(i) is the precision at cut-off i in the list, and rel(i) is an indicator function equal to 1 if the item at rank i is a relevant document, and zero otherwise.

            The divisor min(k, R), with R the total number of relevant documents, is the maximum possible number of relevant entries in the answer.

            Is this understanding correct?

            Is MAP@k always less than MAP computed for all ranked list?

            My concern is that this is not how MAP@k is computed in many works.

            Typically, the divisor is not min(k, number of relevant documents) but the number of relevant documents found in the top-k. This approach gives a higher value of MAP@k.

            "HashNet: Deep Learning to Hash by Continuation" (ICCV 2017)

            Code: https://github.com/thuml/HashNet/blob/master/pytorch/src/test.py#L42-L51
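The difference between the two conventions can be sketched as follows (my own minimal NumPy example, not the HashNet code): both variants sum precision at the relevant ranks within the top-k, but divide by different quantities:

```python
import numpy as np

def ap_at_k(relevance, k, n_relevant_total):
    """AP@k with the standard divisor min(k, total number of relevant documents)."""
    rel = np.asarray(relevance[:k], dtype=float)
    precision = np.cumsum(rel) / (np.arange(rel.size) + 1)
    return float(np.sum(precision * rel) / min(k, n_relevant_total))

def ap_at_k_buggy(relevance, k):
    """Common buggy variant: divides by the number of relevant docs found in the top-k."""
    rel = np.asarray(relevance[:k], dtype=float)
    hits = rel.sum()
    if hits == 0:
        return 0.0
    precision = np.cumsum(rel) / (np.arange(rel.size) + 1)
    return float(np.sum(precision * rel) / hits)
```

For example, with a ranked list of relevances [1, 0, 0, 0, 0] and 3 relevant documents in total, AP@5 is (1/3)·1 ≈ 0.33 under the standard definition but 1.0 under the buggy one, since the buggy divisor only counts the single relevant document that was actually retrieved.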

            ...

            ANSWER

            Answered 2019-Mar-03 at 12:08

            You are completely right, and well done for finding this. Given the similarity of the code, my guess is there is one source bug, and then paper after paper copied the bad implementation without examining it closely.

            The "akturtle" issue raiser is completely right too; I was going to give the same example. I'm not sure whether "kunhe" understood the argument: of course recall matters when computing average precision.

            Yes, the bug should inflate the numbers. I just hope that the ranking lists are long enough and the methods reasonable enough that they achieve 100% recall in the ranked list, in which case the bug would not affect the results.

            Unfortunately, it is hard for reviewers to catch this, as one typically does not review the code accompanying papers. It is worth contacting the authors so they update the code, update their papers with correct numbers, or at least avoid repeating the mistake in future work. If you are planning to write a paper comparing different methods, you could point out the problem and report the correct numbers (as well as, potentially, the numbers with the bug, purely for apples-to-apples comparisons).

            To answer your side-question:

            Is MAP@k always less than MAP computed for all ranked list?

            Not necessarily: MAP@k essentially computes MAP while normalizing for the case where you cannot do any better given only k retrievals. E.g., consider a returned ranked list with relevances 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1, and assume there are 6 relevant documents in total. Full-list MAP comes out somewhat above 50% here, while MAP@3 = 100%, because you cannot do any better than retrieving 1 1 1. But this is unrelated to the bug you discovered, since with their bug MAP@k is guaranteed to be at least as large as the true MAP@k.
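Plugging the example above into a quick check (a sketch using the standard AP@k definition with divisor min(k, R)):

```python
import numpy as np

def ap_at_k(relevance, k, n_relevant_total):
    """AP@k with the standard divisor min(k, total number of relevant documents)."""
    rel = np.asarray(relevance[:k], dtype=float)
    precision = np.cumsum(rel) / (np.arange(rel.size) + 1)
    return float(np.sum(precision * rel) / min(k, n_relevant_total))

# The ranked list from the answer: 3 hits, 14 misses, 3 hits; 6 relevant in total.
relevance = [1, 1, 1] + [0] * 14 + [1, 1, 1]

full_map = ap_at_k(relevance, len(relevance), 6)  # AP over the whole list
map_at_3 = ap_at_k(relevance, 3, 6)               # AP@3
```

Here the full-list AP is (1 + 1 + 1 + 4/18 + 5/19 + 6/20) / 6 ≈ 0.63, while AP@3 = 1.0, confirming that AP@k can exceed the AP of the full ranked list.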

            Source https://stackoverflow.com/questions/54966320

            QUESTION

            Mean Average Precision for Multi-Label Multi-Class Data
            Asked 2018-Jan-04 at 17:14

            I am trying to write code for computing the Mean Average Precision (MAP) for multi-label data. To give a more intuitive understanding, please look below.

            I have written the code for the MAP computation in MATLAB but it is quite slow. Essentially it is slow due to the computation of the variable Lrx for each value of r.

            I wanted to make my code much faster.

            ...

            ANSWER

            Answered 2018-Jan-04 at 17:14

            With the delta function APx(i) = sum(Px.*deltax)/Lx, you are throwing away some proportion of your r = 1:R iterations. Since the delta can be defined before the loop, why not iterate only over the r where deltax(r) == 1?
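The same idea can be sketched in NumPy (my own analogue, not the asker's MATLAB code): precompute the relevance indicator once, and let a vectorized sum pick out only the relevant ranks, so no explicit loop over r is needed at all:

```python
import numpy as np

def average_precision(delta):
    """Vectorized AP: delta[r] = 1 if the item at rank r is relevant, else 0."""
    delta = np.asarray(delta, dtype=float)
    n_relevant = delta.sum()
    if n_relevant == 0:
        return 0.0
    # precision at every rank, computed in one shot with a cumulative sum
    precision = np.cumsum(delta) / (np.arange(delta.size) + 1)
    # multiplying by delta zeroes out the irrelevant ranks, so only the
    # ranks where delta == 1 contribute -- the vectorized equivalent of
    # iterating solely over the relevant positions
    return float(np.sum(precision * delta) / n_relevant)
```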

            Source https://stackoverflow.com/questions/48003041

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install mean_average_precision

            You can download it from GitHub.
            You can use mean_average_precision like any standard Python library. You will need a development environment consisting of a Python distribution (including header files), a compiler, pip, and git. Make sure that pip, setuptools, and wheel are up to date. When using pip, it is generally recommended to install packages in a virtual environment to avoid changes to the system.

            Support

            And of course, any bug fixes or contributions are always welcome!
            CLONE
          • HTTPS

            https://github.com/MathGaron/mean_average_precision.git

          • CLI

            gh repo clone MathGaron/mean_average_precision

          • sshUrl

            git@github.com:MathGaron/mean_average_precision.git
