Caffe-DeepBinaryCode | Supervised Semantics-preserving Deep Hashing | Machine Learning library

 by   kevinlin311tw C++ Version: Current License: Non-SPDX

kandi X-RAY | Caffe-DeepBinaryCode Summary

Caffe-DeepBinaryCode is a C++ library typically used in Artificial Intelligence, Machine Learning, Deep Learning, and Pytorch applications. Caffe-DeepBinaryCode has no bugs and no reported vulnerabilities, but it has low support and a Non-SPDX license. You can download it from GitHub.

This paper presents a simple yet effective supervised deep hash approach that constructs binary hash codes from labeled data for large-scale image search. We assume that the semantic labels are governed by several latent attributes, with each attribute on or off, and that classification relies on these attributes. Based on this assumption, our approach, dubbed supervised semantics-preserving deep hashing (SSDH), constructs hash functions as a latent layer in a deep network, and the binary codes are learned by minimizing an objective function defined over classification error and other desirable hash code properties. With this design, SSDH has the nice property that classification and retrieval are unified in a single learning model. Moreover, SSDH performs joint learning of image representations, hash codes, and classification in a point-wise manner, and is thus scalable to large-scale datasets. SSDH is simple and can be realized by a slight enhancement of an existing deep architecture for classification; yet it is effective and outperforms other hashing approaches on several benchmarks and large datasets. Compared with state-of-the-art approaches, SSDH achieves higher retrieval accuracy, while the classification performance is not sacrificed.
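As a rough sketch of this idea (illustrative plain Python, not the repository's Caffe implementation; all names and values below are hypothetical), the latent hash layer is a sigmoid-activated layer whose outputs are thresholded at 0.5 to produce the binary code:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hash_codes(features, weights, bias):
    """Map a feature vector to binary codes through a latent sigmoid layer.

    features: list of floats (deep features for one image).
    weights:  one row of floats per hash bit.
    bias:     one float per hash bit.
    """
    activations = [
        sigmoid(sum(w * f for w, f in zip(row, features)) + b)
        for row, b in zip(weights, bias)
    ]
    # Binarize: a latent unit that is "on" (>= 0.5) contributes bit 1.
    return [1 if a >= 0.5 else 0 for a in activations]

# Toy 2-bit code from a 3-dimensional feature vector:
print(hash_codes([0.5, -1.0, 2.0],
                 [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
                 [0.0, 0.0]))  # -> [1, 0]
```

In the paper's design the same latent layer also feeds the classification layer, so minimizing classification error shapes the codes; the thresholding step above is only how codes are extracted for retrieval.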

            kandi-support Support

              Caffe-DeepBinaryCode has a low active ecosystem.
              It has 206 star(s) with 86 fork(s). There are 24 watchers for this library.
              It had no major release in the last 6 months.
              There are 12 open issues and 22 have been closed. On average issues are closed in 80 days. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of Caffe-DeepBinaryCode is current.

            kandi-Quality Quality

              Caffe-DeepBinaryCode has 0 bugs and 0 code smells.

            kandi-Security Security

              Caffe-DeepBinaryCode has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              Caffe-DeepBinaryCode code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              Caffe-DeepBinaryCode has a Non-SPDX License.
A Non-SPDX license can be an open-source license that is not SPDX-compliant, or a non-open-source license; you need to review it closely before use.

            kandi-Reuse Reuse

              Caffe-DeepBinaryCode releases are not available. You will need to build from source code and install.
              Installation instructions, examples and code snippets are available.
              It has 4936 lines of code, 260 functions and 33 files.
It has high code complexity, which directly impacts the maintainability of the code.


            Caffe-DeepBinaryCode Key Features

            No Key Features are available at this moment for Caffe-DeepBinaryCode.

            Caffe-DeepBinaryCode Examples and Code Snippets

            No Code Snippets are available at this moment for Caffe-DeepBinaryCode.

            Community Discussions

            Trending Discussions on Caffe-DeepBinaryCode

            QUESTION

            MAP@k computation
            Asked 2019-Mar-03 at 12:08

Mean average precision computed at k (for top-k elements in the answer), according to wiki, ml metrics at kaggle, and this answer: Confusion about (Mean) Average Precision, should be computed as the mean of average precisions at k, where average precision at k is computed as:

AP@k = (1 / min(k, R)) * sum_{i=1..k} P(i) * rel(i)

where P(i) is the precision at cut-off i in the list, rel(i) is an indicator function equal to 1 if the item at rank i is a relevant document and 0 otherwise, and R is the total number of relevant documents.

The divider min(k, R) is the maximum possible number of relevant entries in the answer.
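Under this definition, AP@k can be sketched in a few lines (plain Python; the function name is illustrative):

```python
def average_precision_at_k(relevances, k, total_relevant):
    """AP@k with the divider min(k, total_relevant), per the definition above.

    relevances:     list of 0/1 flags for the ranked list (1 = relevant).
    total_relevant: number of relevant documents in the whole collection.
    """
    hits = 0
    precision_sum = 0.0
    for i, rel in enumerate(relevances[:k], start=1):
        if rel:
            hits += 1
            precision_sum += hits / i  # P(i) * rel(i)
    return precision_sum / min(k, total_relevant)

# A perfect top-3 out of 6 relevant documents scores 1.0:
print(average_precision_at_k([1, 1, 1, 0, 0], 3, 6))  # -> 1.0
```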

            Is this understanding correct?

            Is MAP@k always less than MAP computed for all ranked list?

            My concern is that, this is not how MAP@k is computed in many works.

It is typical that the divider is not min(k, number of relevant documents), but the number of relevant documents in the top-k. This approach gives a higher value of MAP@k.
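The effect of the divider is easy to see numerically. The sketch below (plain Python, illustrative names) contrasts the min(k, R) divider with the variant that divides by the number of relevant documents found in the top-k:

```python
def ap_at_k(relevances, k, total_relevant, buggy=False):
    """AP@k; with buggy=True, divide by the relevant hits in the top-k."""
    hits, precision_sum = 0, 0.0
    for i, rel in enumerate(relevances[:k], start=1):
        if rel:
            hits += 1
            precision_sum += hits / i
    if buggy:
        # Divider = relevant documents actually retrieved in the top-k.
        return precision_sum / hits if hits else 0.0
    return precision_sum / min(k, total_relevant)

ranked = [1, 0, 1, 0, 0]  # only 2 of 6 relevant docs appear in the top-5
print(ap_at_k(ranked, 5, 6))              # correct:  (1 + 2/3) / 5 = 0.333...
print(ap_at_k(ranked, 5, 6, buggy=True))  # inflated: (1 + 2/3) / 2 = 0.833...
```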

One example is "HashNet: Deep Learning to Hash by Continuation" (ICCV 2017):

            Code: https://github.com/thuml/HashNet/blob/master/pytorch/src/test.py#L42-L51

            ...

            ANSWER

            Answered 2019-Mar-03 at 12:08

You are completely right, and well done for finding this. Given the similarity of the code, my guess is there is one source bug, and then paper after paper copied the bad implementation without examining it closely.

            The "akturtle" issue raiser is completely right too, I was going to give the same example. I'm not sure if "kunhe" understood the argument, of course recall matters when computing average precision.

            Yes, the bug should inflate the numbers. I just hope that the ranking lists are long enough and that the methods are reasonable enough such that they achieve 100% recall in the ranked list, in which case the bug would not affect the results.

Unfortunately it's hard for reviewers to catch this, as one typically doesn't review the code of papers. It's worth contacting the authors to get them to update the code, update their papers with correct numbers, or at least not repeat the mistake in their future work. If you are planning to write a paper comparing different methods, you could point out the problem and report the correct numbers (as well as, potentially, the buggy ones, just to make apples-to-apples comparisons).

            To answer your side-question:

            Is MAP@k always less than MAP computed for all ranked list?

Not necessarily: MAP@k is essentially computing the MAP while normalizing for the case where you can't do any better given just k retrievals. E.g., consider a returned ranked list with relevances 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1, and assume there are 6 relevant documents in total. MAP should be slightly higher than 50% here, while MAP@3 = 100%, because you can't do any better than retrieving 1 1 1. But this is unrelated to the bug you discovered, since with their bug the computed MAP@k is guaranteed to be at least as large as the true MAP@k.
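The example above can be checked numerically (plain Python; the helper follows the min(k, R) definition from the question):

```python
def average_precision_at_k(relevances, k, total_relevant):
    """AP@k with the min(k, total_relevant) divider."""
    hits, precision_sum = 0, 0.0
    for i, rel in enumerate(relevances[:k], start=1):
        if rel:
            hits += 1
            precision_sum += hits / i
    return precision_sum / min(k, total_relevant)

ranked = [1, 1, 1] + [0] * 14 + [1, 1, 1]  # the ranked list from the answer
R = 6                                      # total relevant documents

print(average_precision_at_k(ranked, len(ranked), R))  # full-list AP ~ 0.63
print(average_precision_at_k(ranked, 3, R))            # AP@3 = 1.0 (top-3 perfect)
```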

            Source https://stackoverflow.com/questions/54966320

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install Caffe-DeepBinaryCode

Adjust Makefile.config and build with make. For a faster build, compile in parallel with make all -j8, where 8 is the number of parallel threads for compilation (a good choice for the number of threads is the number of cores on your machine).

            Support

            Please feel free to leave suggestions or comments to Kevin Lin (kevinlin311.tw@iis.sinica.edu.tw), Huei-Fang Yang (hfyang@citi.sinica.edu.tw) or Chu-Song Chen (song@iis.sinica.edu.tw).
            CLONE
          • HTTPS

            https://github.com/kevinlin311tw/Caffe-DeepBinaryCode.git

          • CLI

            gh repo clone kevinlin311tw/Caffe-DeepBinaryCode

          • sshUrl

            git@github.com:kevinlin311tw/Caffe-DeepBinaryCode.git
