sift - a file and folder sifter
kandi X-RAY | sift Summary
sift - a file sifter. sift is a utility that allows you to automatically sift source folders for items (files or folders) and selectively move or hard link these items to subfolders in a destination folder.
sift Key Features
sift Examples and Code Snippets
def sift_down(self, idx, array):
    while True:
        l = self.get_left_child_idx(idx)  # noqa: E741
        r = self.get_right_child_idx(idx)
        smallest = idx
        if l < len(array) and array[l] < array[idx]:
            smallest = l
        if r < len(array) and array[r] < array[smallest]:
            smallest = r
        if smallest == idx:
            return
        array[idx], array[smallest] = array[smallest], array[idx]
        idx = smallest
def sift_up(self, idx):
    p = self.get_parent_idx(idx)
    while p >= 0 and self.heap[p] > self.heap[idx]:
        self.heap[p], self.heap[idx] = self.heap[idx], self.heap[p]
        self.idx_of_element[self.heap[p]], self.idx_of_element[self.heap[idx]] = (
            self.idx_of_element[self.heap[idx]],
            self.idx_of_element[self.heap[p]],
        )
        idx = p
        p = self.get_parent_idx(idx)
Community Discussions
Trending Discussions on sift
QUESTION
I have an np.array(<>,dtype = object) of floats that are visually grouped by Nones between them. For example:
...ANSWER
Answered 2021-Jun-10 at 20:04 Thanks to @hilberts_drinking_problem's comment, we can see a built-in solution using itertools as follows:
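A sketch of that itertools approach (the array values below are assumed for illustration, not taken from the question):

```python
from itertools import groupby

import numpy as np

# Floats visually grouped by None separators (values assumed).
arr = np.array([1.0, 2.0, None, 3.5, 4.5, 5.5, None, 6.0], dtype=object)

# groupby splits the array into runs of consecutive non-None values.
groups = [list(g) for is_num, g in groupby(arr, key=lambda x: x is not None) if is_num]
print(groups)  # [[1.0, 2.0], [3.5, 4.5, 5.5], [6.0]]
```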
QUESTION
I am trying to extract features of an image using SIFT in opencv 4.5.1, but when I try to check the result by using drawKeypoints() I keep getting this cryptic error:
...ANSWER
Answered 2021-May-31 at 20:34 You are getting an exception because the output argument of drawKeypoints must be a 3-channel colored image, and you are initializing output to a 1-channel (grayscale) image.
When using Mat output(source.rows, source.cols); or Mat output;, the drawKeypoints function creates a new colored matrix automatically.
When using the derived template matrix class Mat_, the function drawKeypoints raises an exception!
You may replace Mat_ output(source.rows, source.cols);
with:
QUESTION
I am using a SIFT keypoint extractor/descriptor to extract and plot keypoints on an image as shown in the code below:
...ANSWER
Answered 2021-May-30 at 18:43 A simple solution may be iterating all keypoints and drawing a "+" sign using cv2.drawMarker.
Here is a code sample:
QUESTION
I am sifting through a data set and hope to assign string values to integers. When this is done I get some values equalling 0 in my vector and I am not sure where they are coming from. I am taking out the observations with blank values for this one variable and hoping to plot the rest. Any thoughts?
...ANSWER
Answered 2021-May-21 at 22:00 The error comes from MLS: some values end with a question mark, "?".
QUESTION
I am using Canny for image comparison. I am getting correct results for matching and non-matching objects after comparing with Canny edge images. At times it does not give the right result, and for that I need to keep changing MIN_MATCH_COUNT. Is there a solution that keeps MIN_MATCH_COUNT fixed while still comparing each and every edge of the image?
...ANSWER
Answered 2021-May-20 at 07:43 You could use a relative criterion: instead of using an absolute value for MIN_MATCH_COUNT, you can use the percentage of matching keypoints over the total number of keypoints of your model. This way you can set a threshold based on your specific test, let's say 30% (just an example). That's what I do for a similar issue. Something like:
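A sketch of that relative criterion (the function name and the 30% threshold are illustrative assumptions):

```python
MATCH_RATIO = 0.30  # fraction of model keypoints that must match (assumed)

def is_match(good_matches, model_keypoints, ratio=MATCH_RATIO):
    """Relative criterion: match if enough of the model's keypoints matched."""
    if not model_keypoints:
        return False
    return len(good_matches) / len(model_keypoints) >= ratio

# 40 good matches against a model with 100 keypoints -> 40% >= 30%.
print(is_match(list(range(40)), list(range(100))))  # True
```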
QUESTION
I'm currently working on OpenCV version 4.5.1 and I want to use SIFT and SURF, but I run into the well-known problem that they're patented. I already know that under 4.5.1 there is the possibility to use the flags -DOPENCV_ENABLE_NONFREE=ON and -DBUILD_opencv_xfeatures2d=ON. But when I use the following command for cmake
ANSWER
Answered 2021-May-14 at 19:15 Build OpenCV with the following command:
QUESTION
I'm using the SIFT feature detector in OpenCV 4.5.2. By tuning the nOctaveLayers parameter in cv::SIFT::create(), I get these results from detectAndCompute():
To my understanding, there should be less computation with fewer octave layers, so why does SIFT cost significantly more time with only 1 octave layer?
I also tested detect() and compute() separately, and they both cost more time when nOctaveLayers is 1, which confuses me a lot.
The test image is here (from the TUM open dataset). Thanks ahead for any help.
[Edit for @Micka] My test code:
...ANSWER
Answered 2021-May-13 at 12:13 After hours of profiling, I finally found out the reason: GaussianBlur.
The pipeline of the SIFT algorithm is:
1. Create the initial image: convert the source image to float, double the resolution, and apply GaussianBlur (sigma = 1.56)
2. Build the Gaussian pyramid
3. Find key points: build the DoG pyramid and find scale-space extrema
4. Calculate descriptors
The number of octaves is calculated from the image resolution (see here). And nOctaveLayers controls the number of layers (nOctaveLayers + 3 for the Gaussian pyramid) in each octave.
Indeed, when nOctaveLayers increases, the numbers of layers and keypoints both increase. As a result, the time cost of steps 3 and 4 increases. However, with parallel computation, this time increment is not very remarkable (several milliseconds).
In contrast, step 2 costs more than half of the total time: 25.27 ms (of 43.49 ms) when nOctaveLayers is 3, and 51.16 ms (of 63.10 ms) when nOctaveLayers is 1. So why does this happen?
Because the sigma for GaussianBlur() increases faster when there are fewer layers, and sigma is critical for the time consumed by GaussianBlur(). See the test below:
QUESTION
ANSWER
Answered 2021-Apr-28 at 00:52 Feature detectors in OpenCV should be created using their static create methods. The API reference isn't explicit about not using the constructors, but you can see in the docs for the SIFT class that the create() methods are the only ones shown (and they show that the corresponding Python call is SIFT_create()).
The reason this is necessary is that the OpenCV Algorithm class (of which Feature2D is a subclass) is implemented using the PIMPL idiom, so SIFT is actually a base/virtual class not intended to be instantiated directly; instead, SIFT::create returns a SIFT_Impl instance, as you can see in the code. The Python bindings can probably be "fixed" to mitigate this issue at some point in the future, but until then, you'll want to use SIFT_create(), BRISK_create(), and so on for all the feature detectors.
Note also that SIFT.create() and BRISK.create() and so on are valid generated names that are equivalent, but you'll find the underscore versions more common in tutorials and in people's projects, and they are currently the ones explicitly documented.
QUESTION
I previously posted a related Stack Overflow question about scraping a table on the leaderboard page of the PGA's website on this page. To summarize that post, the leaderboard table is difficult to scrape, apparently because of the way this page uses javascript to render the page and table.
When I inspect the page, I see in the tag that there is an object global.leaderboardConfig with useful info:
Is it possible to get this object as a list in R? I am able to grab all 76 script elements on the page using xml2::read_html('https://www.pgatour.com/leaderboard.html') %>% html_nodes('script'), however I'm not sure how to identify the specific script tag needed, nor how to get an object out of it.
Edit: In the networks tab of devtools, there is also this request, which provides the link for an API call that gets the data. Rather than fetching the object from the script tag, perhaps it is easier to grab all network requests and sift through those instead?
...ANSWER
Answered 2021-Apr-20 at 13:09 This site generates the hmac and expire url parameter values from a JS function that uses a specific algorithm. The arguments of this algorithm depend on the epoch time, which is passed as a url parameter to the JS file hosting that function here. This way, the hmac value is different each time, because it's processed from this file, whose url is constantly changing.
The algorithm consists of bitwise and & xor operations like this (pseudocode):
QUESTION
Suppose I found keypoints and their descriptors in 2 images.
...ANSWER
Answered 2021-Apr-14 at 08:54 I already did this in C++, but it is easily transferable to Python.
I am using files (matches.txt) which hold the feature point coordinates (x1, y1, x2, y2) such that trainIdx and queryIdx are aligned:
x1 y1 x2 y2 (row 1 - trainIdx 0, queryIdx 0)
x1 y1 x2 y2 (row 2 - trainIdx 1, queryIdx 1)
...
In your example you would have to assign the correct trainIdx and queryIdx to the std::vector.
Then I read the file and the images, and fill the overloaded std::vector with queryIdx, trainIdx, and distance (which is irrelevant for drawing, so I set it to 1).
Here is the relevant piece of code in C++:
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported