vlfeat | An open library of computer vision algorithms | Computer Vision library

 by vlfeat | C | Version: v0.9.21 | License: BSD-2-Clause

kandi X-RAY | vlfeat Summary

vlfeat is a C library typically used in Artificial Intelligence and Computer Vision applications. It has no reported bugs or vulnerabilities, a permissive license, and medium support. You can download it from GitHub.

The VLFeat open source library implements popular computer vision algorithms specialising in image understanding and local feature extraction and matching. Algorithms include Fisher Vector, VLAD, SIFT, MSER, k-means, hierarchical k-means, agglomerative information bottleneck, SLIC superpixels, quick shift superpixels, large scale SVM training, and many others. It is written in C for efficiency and compatibility, with interfaces in MATLAB for ease of use, and detailed documentation throughout. It supports Windows, Mac OS X, and Linux. VLFeat is distributed under the BSD license (see the COPYING file).
Support
    Quality
      Security
        License
          Reuse

            kandi-support Support

              vlfeat has a medium active ecosystem.
               It has 1503 star(s) with 620 fork(s). There are 156 watchers for this library.
               It had no major release in the last 6 months.
               There are 115 open issues and 53 have been closed. On average, issues are closed in 223 days. There are 29 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
               The latest version of vlfeat is v0.9.21.

            kandi-Quality Quality

              vlfeat has no bugs reported.

            kandi-Security Security

              vlfeat has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

            kandi-License License

              vlfeat is licensed under the BSD-2-Clause License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              vlfeat releases are not available. You will need to build from source code and install.
              Installation instructions, examples and code snippets are available.

            vlfeat Key Features

            No Key Features are available at this moment for vlfeat.

            vlfeat Examples and Code Snippets

            No Code Snippets are available at this moment for vlfeat.

            Community Discussions

            QUESTION

            Cyvlfeat.vlad - 'module' object is not callable
            Asked 2020-Mar-10 at 15:23

            I have installed cyvlfeat using conda install cyvlfeat ( from https://gsoc2016.wordpress.com/2016/08/19/cythonpython-wrapper-for-vlfeat-library-cyvlfeat-project-status/ ). The problem is that when I run the following code:

            ...

            ANSWER

            Answered 2020-Mar-10 at 15:23

            The code from the blog post you're trying to use was never merged into the main project: https://github.com/menpo/cyvlfeat/pull/25

             As you can see, currently cyvlfeat.vlad is an empty module:

            https://github.com/menpo/cyvlfeat/tree/master/cyvlfeat/vlad

            Source https://stackoverflow.com/questions/60610400

            QUESTION

            Extracting VLAD from SIFT Descriptors in VLFeat with Matlab
            Asked 2020-Mar-08 at 19:20

            I have a folder of images. I want to compute VLAD features from each image.

            I loop over each image, load it, and obtain the SIFT descriptors as follows:

            ...

            ANSWER

            Answered 2017-May-10 at 07:13

             First, you need to obtain a dictionary of visual words, or to be more specific: cluster the SIFT features of all images using k-means clustering. In [1], a coarse clustering using e.g. 64 or 256 clusters is recommended.

            For that, we have to concatenate all descriptors into one matrix, which we can then pass to the vl_kmeans function. Further, we convert the descriptors from uint8 to single, as the vl_kmeans function requires the input to be either single or double.
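             In Python terms, those two steps might look like the following sketch. Plain-numpy Lloyd iterations stand in for vl_kmeans here, and the descriptor matrices are randomly generated placeholders, not real SIFT output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for SIFT descriptors from several images: each image yields
# an (n_i x 128) uint8 matrix, as vl_sift / cyvlfeat would return.
descs_per_image = [rng.integers(0, 256, size=(50, 128), dtype=np.uint8)
                   for _ in range(4)]

# 1. Concatenate all descriptors into one matrix.
all_descs = np.concatenate(descs_per_image, axis=0)

# 2. Convert uint8 -> float, since vl_kmeans requires single or double.
all_descs = all_descs.astype(np.float32)

# 3. Coarse k-means clustering (e.g. 64 centers); a few Lloyd
#    iterations in plain numpy stand in for vl_kmeans.
k = 64
centers = all_descs[rng.choice(len(all_descs), size=k, replace=False)]
for _ in range(10):
    # Assign each descriptor to its nearest center.
    d2 = ((all_descs[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    # Update each center as the mean of its assigned descriptors.
    for j in range(k):
        pts = all_descs[labels == j]
        if len(pts):
            centers[j] = pts.mean(axis=0)

print(centers.shape)  # (64, 128): the dictionary of visual words
```

             The resulting centers matrix plays the role of the visual-word dictionary that VLAD encoding is then computed against.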

            Source https://stackoverflow.com/questions/43883232

            QUESTION

            Improve Makefile
            Asked 2020-Mar-01 at 17:48

            I have the following Makefile:

            ...

            ANSWER

            Answered 2020-Feb-28 at 00:31

            IMO autotools is way overkill for this.

             You can do this quite easily. However, there are some odd things about your makefile.

             Most importantly, all your files are suffixed with a .c extension, which means they are C files, but you are compiling them with g++, which is a C++ compiler. This does not make sense. Either you should name your source files as C++ files (.cc, .cxx, .cpp, or similar), or you should use a C compiler such as gcc rather than g++.

            I will assume your code is C code based on the file extensions.

            Please use standard variables (CC, CPPFLAGS, CFLAGS, LDLIBS):

            Source https://stackoverflow.com/questions/60441416
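             A minimal Makefile along the lines the answer suggests might look like this (illustrative only; the program name and source list are invented, not taken from the question):

```make
# Conventional variables; make's built-in .c -> .o rule already uses
# $(CC) $(CPPFLAGS) $(CFLAGS), so no explicit compile rule is needed.
CC       = gcc
CPPFLAGS = -Iinclude
CFLAGS   = -O2 -Wall
LDLIBS   = -lm

SRCS = main.c util.c
OBJS = $(SRCS:.c=.o)

myprog: $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $(OBJS) $(LDLIBS)

clean:
	rm -f myprog $(OBJS)
```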

            QUESTION

            How to Use Pre-Trained CNN models in Python
            Asked 2019-Nov-16 at 12:32

            So basically I am trying to use a pre-trained VGG CNN model. I have downloaded model from the following website:

            http://www.vlfeat.org/matconvnet/pretrained/

            which has given me a image-vgg-m-2048.mat file. But the guide gives me how to use it in Matlab using MatconvNet Library. I want to implement the same thing in python. For which I am using Keras.

            I have written the following code:

            ...

            ANSWER

            Answered 2019-Nov-16 at 12:32

             Basically, if you specify any weights value that is not imagenet, Keras will just use model.load_weights to load it, and I guess image-vgg-m-2048.mat is not a valid file that Keras can load directly here.

            https://github.com/keras-team/keras-applications/blob/master/keras_applications/vgg16.py#L197
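             The dispatch the answer describes can be mocked in a few lines of plain Python. This is a simplified illustration of the keras-applications logic, not the real Keras API; MockModel and build_vgg are invented names:

```python
# Simplified mock of the weight-loading dispatch in
# keras_applications/vgg16.py: anything other than 'imagenet' or None
# is treated as a file path handed straight to model.load_weights().
class MockModel:
    def __init__(self):
        self.loaded = None

    def load_weights(self, path):
        # Real Keras expects an HDF5 file here, which is why a
        # MatConvNet .mat file such as image-vgg-m-2048.mat fails.
        if not str(path).endswith('.h5'):
            raise ValueError(f'cannot read weights file: {path}')
        self.loaded = path


def build_vgg(weights='imagenet'):
    model = MockModel()
    if weights == 'imagenet':
        model.loaded = 'downloaded imagenet weights'
    elif weights is not None:
        model.load_weights(weights)  # user-supplied file path
    return model


build_vgg('imagenet')                  # fine: uses the bundled weights
try:
    build_vgg('image-vgg-m-2048.mat')  # fails: not a Keras HDF5 file
except ValueError as e:
    print(e)
```

             Converting the MatConvNet .mat weights into arrays Keras can accept (e.g. via scipy.io and set_weights) is possible, but load_weights alone will not do it.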

            Source https://stackoverflow.com/questions/58890437

            QUESTION

            ImportError: cannot import name '_obtain_input_shape' from keras
            Asked 2019-Feb-12 at 08:41

            In Keras,

            I'm trying to import _obtain_input_shape as follows:

            ...

            ANSWER

            Answered 2018-Aug-01 at 08:56

             This issue occurred because of the version of Keras.

             In my case, I downgraded Keras from 2.2.2 to 2.2.0, and the problem was solved.

            Source https://stackoverflow.com/questions/49113140

            QUESTION

            tensorflow 2.5x slower than pytorch on vgg16 architecture
            Asked 2018-Sep-07 at 14:53

            So I'm trying to get into tensorflow and liking it so far.

            Today I upgraded to cuda 8, cudnn 5.1 and tensorflow 0.12.1. Using a Maxwell Titan X GPU.

            Using the following short code of loading the pretrained vgg16:

            ...

            ANSWER

            Answered 2018-Sep-07 at 14:53

            Tested recently on cuda 9.0, tensorflow 1.9 and pytorch 0.4.1, the differences are now negligible for the same operations.

            See the proper timing here.

            Source https://stackoverflow.com/questions/41832779

            QUESTION

            How to visualize fhog (not HOG)
            Asked 2018-Jun-25 at 02:26

            My MATLAB code uses fhog (instead of Hog) to extract features. However, I want to visualize the HOG features used on the image patch. I know extractHOGFeatures or VLFeat is used if we use HOG available in MATLAB. But how do I visualize fhog?

            Since Piotr's Image & Video Toolbox (which has fhog) is widely used in MATLAB now and I frequently need it, it would be great if someone can tell me how to visualize fhog extracted features.

            The code of fhog can be found at here:

            The code snippet is as follows:

            ...

            ANSWER

            Answered 2018-Jun-25 at 02:26

            I was able to make this work. It was a stupid thing I was ignoring.

            Source https://stackoverflow.com/questions/50731897

            QUESTION

            How to find the matched SIFT features that are spatially consistent?
            Asked 2018-Mar-26 at 22:05

             I have extracted DenseSIFT from the query and database images and quantized it by k-means using VLFeat. The challenge is to find those SIFT features that are quantized to the same visual words and are spatially consistent (have a similar position relative to the object centers). I have tried a few techniques:

             1. Using FLANN() on the SIFT (normal SIFT) coordinates of both the query and database image to find the nearest neighbor and then comparing the visual words (NOTE: this gave only a few points and did not work).
             2. Using Coherent Point Drift (CPD) on the SIFT coordinates to find the matched points (I am not sure whether this is a right solution or not).

             I have been struggling with this for many days, and I hope experts can guide me. What are the possible solutions or algorithms I could use to solve this?

            ...

            ANSWER

            Answered 2018-Mar-26 at 22:05

             Neither of those two methods you mentioned achieves what you want to do. The answer depends on the object in your pictures. If it has mostly flat faces, then you can rely on estimating the homography, see this tutorial.

             If that's not the case, then you can use the epipolar constraint to remove outliers / get geometrically consistent matches, see this tutorial. There are some other ways to achieve this if speed is of importance in your application.
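             As a rough illustration of the homography route, here is a minimal numpy sketch: a Direct Linear Transform fit followed by a reprojection-error test. In a real pipeline you would estimate H robustly with RANSAC rather than from four hand-picked matches; the point coordinates below are toy data:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: estimate a 3x3 H with dst ~ H @ src."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (last right singular vector).
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def consistent_matches(src, dst, H, tol=3.0):
    """Keep matches whose reprojection error under H is below tol pixels."""
    src_h = np.column_stack([src, np.ones(len(src))])
    proj = src_h @ H.T
    proj = proj[:, :2] / proj[:, 2:3]
    err = np.linalg.norm(proj - dst, axis=1)
    return err < tol

# Toy example: points related by a pure translation (+10, +5),
# plus one deliberately bad match.
src = np.array([[0., 0.], [100., 0.], [0., 100.], [100., 100.], [50., 50.]])
dst = src + [10., 5.]
dst[-1] = [300., 300.]  # outlier

H = fit_homography(src[:4], dst[:4])  # fit on four good matches
mask = consistent_matches(src, dst, H)
print(mask)  # the last match is flagged inconsistent
```

             The same filtering idea carries over to the epipolar-constraint version: fit the model on a minimal sample, then keep only matches with a small residual.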

            Source https://stackoverflow.com/questions/49476357

            QUESTION

            Initialize TensorFlow CNN model with Numpy weight matrices
            Asked 2018-Feb-20 at 03:10

            I am working on manually converting a pretrained matconvnet model to a tensorflow model. I have pulled the weights/biases from the matconvnet model mat file using scipy.io and obtained numpy matrices for the weights and biases.

            Code snippets where data is a dictionary returned from scipy.io:

            ...

            ANSWER

            Answered 2018-Feb-20 at 03:10

            I got over this hurdle with tf.reshape(...) (instead of calling weights['wc1'].reshape(...) ). I am still not certain about the performance yet, or if this is a horribly naive endeavor.

             UPDATE: Further testing suggests this approach is at least functionally possible (as in, I have created a TensorFlow CNN model that runs and produces predictions that appear consistent with the MatConvNet model; I make no claims about the relative accuracies of the two).

             I am sharing my code. In my case, it was a very small network, and if you are attempting to use this code for your own matconvnet-to-tensorflow project, you will likely need many more modifications: https://github.com/melissadale/MatConv2TensorFlow
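             The kind of massaging involved can be sketched in plain numpy. The names ('wc1', 'bc1') mirror the question, but the shapes and the dictionary itself are made-up stand-ins for what scipy.io.loadmat returns:

```python
import numpy as np

# Stand-in for the dictionary scipy.io.loadmat returns: conv filters in
# MatConvNet's H x W x C_in x C_out layout, biases as 2-D (n, 1) arrays.
data = {
    'wc1': np.zeros((3, 3, 3, 64), dtype=np.float32),
    'bc1': np.zeros((64, 1), dtype=np.float32),
}

# MatConvNet's H x W x C_in x C_out filter layout matches the layout
# tf.nn.conv2d expects, so the filters can often be used as-is...
w = data['wc1']

# ...but the biases must be flattened from (64, 1) to (64,) before they
# can initialize a 1-D bias variable (tf.reshape or np.reshape both work).
b = np.reshape(data['bc1'], (-1,))

print(w.shape, b.shape)  # (3, 3, 3, 64) (64,)
```

             Doing the reshape on the raw numpy arrays before handing them to TensorFlow avoids the pitfall of calling .reshape on whatever wrapped object the .mat loader returned.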

            Source https://stackoverflow.com/questions/48757194

            QUESTION

            Feature matching difficulty
            Asked 2017-Jun-17 at 15:19

             I'm working on a 3D reconstruction project where I have trouble matching the features in order to proceed with the reconstruction. To be more specific, when I'm matching features of MATLAB's example images I get a high correct-to-wrong-matches ratio, but when I'm matching features of my own photos taken with a phone camera I get almost only wrong matches. I've tried tuning the threshold but the problem remains. Any ideas/suggestions of what is going wrong?

             The descriptor I'm using is the SIFT descriptor from the VLFeat toolbox.

            edit: here is a dropbox link with the original images, the detected salient/corner points and the matches.

            ...

            ANSWER

            Answered 2017-Jun-17 at 15:19

            I think your main problems here are significant difference in lighting between the images, and specular reflections off the plastic casing. You are also looking at the inside of the USB drive through the transparent plastic, which doesn't help.

            What feature detectors/descriptors have you tried? I would start with SURF, and then I would try MSER. It is also possible to use multiple detectors and descriptors, but you should be careful to keep them separate. Of course, there are also lots of parameters for you to tune.

            Another thing that may be helpful is to take higher-resolution images.

            If you are trying to do 3D reconstruction, can you assume that the camera does not move much between the images? In that case, try using vision.PointTracker to track points from one frame into the other instead of matching them.

            Source https://stackoverflow.com/questions/42569967

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install vlfeat

             To start using VLFeat as a MATLAB toolbox, download the latest VLFeat binary package. Note that the pre-compiled binaries require MATLAB 2009b or later. Unpack it, for example by using WinZIP (Windows), by double-clicking on the archive (Mac), or by using the command line (Linux and Mac):

            Support

             The toolbox should be largely compatible with GNU Octave, an open source MATLAB equivalent. However, the binary distribution does not ship with pre-built GNU Octave MEX files. To compile them use.
            Find more information at:

            CLONE
          • HTTPS

            https://github.com/vlfeat/vlfeat.git

          • CLI

            gh repo clone vlfeat/vlfeat

          • sshUrl

            git@github.com:vlfeat/vlfeat.git
