Contrastive-Loss | contrastive loss for face recognition | Machine Learning library

by wujiyang | C++ | Version: Current | License: MIT

kandi X-RAY | Contrastive-Loss Summary

Contrastive-Loss is a C++ library typically used in Artificial Intelligence, Machine Learning, and Deep Learning applications. Contrastive-Loss has no bugs, no vulnerabilities, a Permissive License, and low support. You can download it from GitHub.

Modified from wjgaas/DeepID2, with the source code updated to fit the latest version of BVLC/caffe.
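For reference, the contrastive loss implemented by this kind of layer (following Hadsell et al., as used for DeepID2's verification signal) pulls same-identity pairs together and pushes different-identity pairs at least a margin apart. A minimal PyTorch sketch of the formula (the library itself is a C++ Caffe layer; names and the default margin here are illustrative):

```python
import torch
import torch.nn.functional as F

def contrastive_loss(x1, x2, y, margin=1.0):
    """x1, x2: (N, D) embedding batches; y: (N,) labels, 1 = same identity.
    Same pairs are pulled together; different pairs are pushed past `margin`."""
    d = F.pairwise_distance(x1, x2)                        # Euclidean distance per pair
    pos = y * d.pow(2)                                     # same identity: shrink distance
    neg = (1 - y) * torch.clamp(margin - d, min=0).pow(2)  # different: hinge on margin
    return 0.5 * (pos + neg).mean()
```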

Support

Contrastive-Loss has a low active ecosystem.
It has 12 star(s) with 5 fork(s). There is 1 watcher for this library.
It had no major release in the last 6 months.
There is 1 open issue and 0 have been closed. There are no pull requests.
It has a neutral sentiment in the developer community.
The latest version of Contrastive-Loss is current.

Quality

              Contrastive-Loss has no bugs reported.

Security

              Contrastive-Loss has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

License

              Contrastive-Loss is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

              Contrastive-Loss releases are not available. You will need to build from source code and install.
              Installation instructions are not available. Examples and code snippets are available.


            Contrastive-Loss Key Features

            No Key Features are available at this moment for Contrastive-Loss.

            Contrastive-Loss Examples and Code Snippets

            No Code Snippets are available at this moment for Contrastive-Loss.

            Community Discussions

            QUESTION

Online triplet generation - am I doing it right?
            Asked 2018-Mar-21 at 11:41

            I'm trying to train a convolutional neural network with triplet loss (more about triplet loss here) in order to generate face embeddings (128 values that accurately describe a face).

            In order to select only semi-hard triplets (distance(anchor, positive) < distance(anchor, negative)), I first feed all values in a mini-batch and calculate the distances:

            ...
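The asker's snippet is elided above; purely as an illustration (none of this is the asker's code), batch-wise distance computation with the semi-hard criterion stated in the question might look like:

```python
import torch

def mine_semi_hard(embeddings, labels):
    """Illustrative online mining: embed the whole mini-batch, compute all
    pairwise distances, keep (a, p, n) index triplets where the anchor-positive
    distance is smaller than the anchor-negative distance."""
    dist = torch.cdist(embeddings, embeddings)  # (N, N) pairwise Euclidean distances
    triplets = []
    n = embeddings.size(0)
    for a in range(n):
        for p in range(n):
            if p == a or labels[p] != labels[a]:
                continue  # positive must be a different sample of the same identity
            for neg in range(n):
                if labels[neg] != labels[a] and dist[a, p] < dist[a, neg]:
                    triplets.append((a, p, neg))
    return triplets
```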

            ANSWER

            Answered 2017-Dec-04 at 03:19

            There are a lot of reasons why your network is performing poorly. From what I understand, your triplet generation method is fine. Here are some tips that may help improve your performance.

            The model

In deep metric learning, people usually use models pre-trained on the ImageNet classification task, as these models are quite expressive and can generate good representations for images. You can fine-tune your model on the basis of these pre-trained models, e.g., VGG16, GoogLeNet, ResNet.

How to fine-tune

Even if you have a good pre-trained model, it is often difficult to directly optimize the triplet loss using these models on your own dataset. Since these pre-trained models were trained on ImageNet, if your dataset is vastly different from ImageNet, you can first fine-tune the model on a classification task on your dataset. Once your model performs reasonably well on the classification task on your custom dataset, you can use the classification model as the base network (maybe with a little tweaking) for the triplet network. This will often lead to much better performance.
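A hypothetical sketch of this two-stage recipe in PyTorch (the backbone choice, class count, and embedding size are placeholders, not from the original answer):

```python
import torch.nn as nn
from torchvision import models

num_classes = 500  # placeholder: number of identities in your dataset

# Stage 1: fine-tune a pre-trained backbone as a plain classifier first.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, num_classes)
# ... train with cross-entropy until classification accuracy is reasonable ...

# Stage 2: reuse the tuned backbone as the shared base of the triplet network
# by swapping the classifier head for an embedding layer.
model.fc = nn.Linear(model.fc.in_features, 128)  # 128-D face embedding
# ... continue training with the triplet loss ...
```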

Hyper-parameters

Hyper-parameters such as learning rate, momentum, and weight decay are also extremely important for good performance (learning rate is maybe the most important factor). Since you are fine-tuning and not training the network from scratch, you should use a small learning rate, for example, lr=0.001 or lr=0.0001. For momentum, 0.9 is a good choice. For weight decay, people usually use 0.0005 or 0.00005.

If you add some fully connected layers, then for these layers the learning rate may be higher than for the other layers (0.01, for example).
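In PyTorch, for instance, this per-layer treatment maps onto optimizer parameter groups; the tiny model below is a toy stand-in for "pre-trained backbone plus new FC head":

```python
import torch
import torch.nn as nn

# Toy model standing in for a pre-trained backbone plus a new FC head.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))

optimizer = torch.optim.SGD(
    [
        {"params": model[0].parameters(), "lr": 0.001},  # small lr for pre-trained weights
        {"params": model[2].parameters(), "lr": 0.01},   # ~10x higher for the new layers
    ],
    momentum=0.9,
    weight_decay=0.0005,
)
```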

Which layers to fine-tune

As your network has several layers, you need to decide which layers to fine-tune. Researchers have found that the lower layers of a network produce generic features such as lines or edges. Typically, people freeze the lower layers and only update the weights of the upper layers, which tend to produce task-oriented features. You should try optimizing starting from different lower layers and see which setting performs best.
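A sketch of that freezing scheme in PyTorch (which layers count as "lower" is exactly the experiment the answer suggests running):

```python
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze everything, then unfreeze only the upper stage and the head,
# which tend to carry task-oriented rather than generic features.
for p in model.parameters():
    p.requires_grad = False
for p in model.layer4.parameters():  # last residual stage
    p.requires_grad = True
for p in model.fc.parameters():      # classifier / embedding head
    p.requires_grad = True
```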

Reference
1. Fast R-CNN (Section 4.5, which layers to fine-tune)
2. Deep image retrieval (Section 5.2, influence of fine-tuning the representation)

            Source https://stackoverflow.com/questions/45839488

            QUESTION

Loss decreases when using semi-hard triplets
            Asked 2018-Mar-16 at 06:25

Here is a short review of triplet learning. I'm using three convolutional neural networks with shared weights in order to generate face embeddings (anchor, positive, negative), with the loss described here.

            Triplet loss:

            ...
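The loss code is elided above; for reference, a standard FaceNet-style triplet loss over squared L2 distances (a generic sketch, not the asker's exact code) looks like this, and the answer below turns on whether those squared distances get replaced by their square roots:

```python
import torch

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss pushing d(a, n) at least `margin` beyond d(a, p)."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)  # squared distance anchor-positive
    d_neg = (anchor - negative).pow(2).sum(dim=1)  # squared distance anchor-negative
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()
```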

            ANSWER

            Answered 2018-Mar-16 at 05:38

What do you get after taking tf.sqrt(d_pos) and tf.sqrt(d_neg)?

            Source https://stackoverflow.com/questions/46089871

            QUESTION

            How to determine accuracy with triplet loss in a convolutional neural network
            Asked 2017-Dec-18 at 12:52

A Triplet network (inspired by the "Siamese network") is comprised of 3 instances of the same feed-forward network (with shared parameters). When fed 3 samples, the network outputs 2 intermediate values: the L2 (Euclidean) distances of the embedded representations of two of its inputs from the representation of the third.

I'm feeding the network triplets of images (x = anchor image, a standard image; x+ = positive image, an image containing the same object as x - actually, x+ is the same class as x; and x- = negative image, an image of a different class than x).

            I'm using the triplet loss cost function described here.

            How do I determine the network's accuracy?

            ...

            ANSWER

            Answered 2017-Dec-04 at 02:33

I am assuming that you are working on image retrieval or a similar task.

You should first generate some triplets, either randomly or using some hard (semi-hard) negative mining method. Then split your triplets into training and validation sets.

If you do it this way, then you can define your validation accuracy as the proportion of validation triplets in which the feature distance between anchor and positive is less than that between anchor and negative. You can see an example here, which is written in PyTorch.
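As a sketch of that accuracy computation in PyTorch (tensor names are illustrative):

```python
import torch

def triplet_accuracy(anchor, positive, negative):
    """Fraction of validation triplets where the anchor is closer to the
    positive than to the negative."""
    d_pos = (anchor - positive).norm(dim=1)  # Euclidean distance anchor-positive
    d_neg = (anchor - negative).norm(dim=1)  # Euclidean distance anchor-negative
    return (d_pos < d_neg).float().mean().item()
```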

Alternatively, you can measure directly in terms of your final testing metric. For example, for image retrieval we typically measure the performance of a model on the test set using mean average precision. If you use this metric, you should first define some queries on your validation set and their corresponding ground-truth images.

Either of the above two metrics is fine. Choose whichever you think fits your case.

            Source https://stackoverflow.com/questions/45255030

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install Contrastive-Loss

            You can download it from GitHub.

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            CLONE
          • HTTPS

            https://github.com/wujiyang/Contrastive-Loss.git

          • CLI

            gh repo clone wujiyang/Contrastive-Loss

• SSH

            git@github.com:wujiyang/Contrastive-Loss.git
