PerspectiveTransform | Calculate CATransform3D between two Perspectives | Math library
kandi X-RAY | PerspectiveTransform Summary
Calculate CATransform3D between two Perspectives
Community Discussions
Trending Discussions on PerspectiveTransform
QUESTION
When I write this code (my entire code, a school project on Augmented Reality), everything worked perfectly until I tried to run the video.
...ANSWER
Answered 2022-Feb-16 at 18:33
Normally we ask for the full error message, with traceback. That makes it easier to identify where the error occurs. In this case though, `set` is only used a couple of times.
QUESTION
I have found a perspective matrix using:
...ANSWER
Answered 2022-Feb-04 at 10:44
Answering my own question based on the helpful comments received from @Dan Mašek.
According to the docs, src should be:
input two-channel or three-channel floating-point array.
In order for OpenCV to map the numpy array to cv::Mat (the C++ class that OpenCV uses), the channels (usually RGB components, but in this case coordinates) need to be the 3rd dimension/axis (first dimension are rows, second columns).
Either create a properly shaped array by adding another level of nesting to the initial list: `np.single([[[0, 0]]])`, or reshape the existing array: `np.single([[0, 0]]).reshape(-1, 1, 2)`. This results in as many rows as necessary (one in this case), one column per row, and two channels per column.
TLDR:
I just required an extra layer of nesting, for example:
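A minimal sketch of that fix (the 3x3 matrix `M` below is a hypothetical stand-in for the perspective matrix from the question):

```python
import cv2
import numpy as np

# Hypothetical stand-in for the perspective matrix from the question
M = np.eye(3, dtype=np.single)

# Shape (1, 1, 2): one row, one column, two channels, as cv2.perspectiveTransform expects
pts = np.single([[[0, 0]]])
out = cv2.perspectiveTransform(pts, M)

# Equivalently, reshape an existing (N, 2) array into (N, 1, 2)
pts2 = np.single([[0, 0], [10, 5]]).reshape(-1, 1, 2)
out2 = cv2.perspectiveTransform(pts2, M)
```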
QUESTION
I have two shapes or coordinate systems, and I want to be able to transform points from one system onto the other.
I have found that if the shapes are quadrilateral and I have 4 pairs of corresponding points, then I can calculate a transformation matrix and use it to map any point in Shape B onto its corresponding coordinates in Shape A.
Here is the working python code to make this calculation:
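The asker's snippet is not reproduced here, but a minimal sketch of such a calculation with OpenCV (all corner coordinates below are hypothetical) could look like this:

```python
import cv2
import numpy as np

# Four pairs of corresponding corner points (hypothetical values)
pts_b = np.float32([[0, 0], [100, 0], [100, 100], [0, 100]])   # Shape B
pts_a = np.float32([[10, 5], [120, 8], [115, 110], [5, 105]])  # Shape A

# 3x3 matrix mapping Shape B coordinates onto Shape A
M = cv2.getPerspectiveTransform(pts_b, pts_a)

# Map an arbitrary point from B into A (note the (N, 1, 2) shape)
mapped = cv2.perspectiveTransform(np.float32([[[50, 50]]]), M)
```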
...ANSWER
Answered 2022-Jan-21 at 13:04
If the two shapes are related by a perspective transformation, then any four points will lead to the same transformation, at least as long as no three of them are collinear. In theory you might pick any four such points and the rest should just work.
In practice, numeric considerations might come into play. If you pick four points very close to one another, then small errors in their positions would lead to much larger errors farther away from those points. You could probably do some sophisticated analysis involving error intervals, but as a rule of thumb I'd aim for large distances between any two points, both on the input and on the output side of the transformation.
An answer of mine on Math Stack Exchange explains a bit of the computation that goes into defining a perspective transformation from four pairs of points. It might be useful for understanding where that number 4 comes from.
If you have more than 4 pairs of points, and defining the transformation using any four of them does not correctly translate the rest, then you are likely in one of two other use cases.
Either you are indeed looking for a perspective transformation but have poor input data. You might have positions from feature detection, and these might be imprecise. Some features might even be matched up incorrectly. So in this case you would be looking for the best transformation to describe your data with small errors. Your question doesn't sound like this is your use case, so I'll not go into detail.
Or you have a transformation that is not a perspective transformation. In particular, anything that turns a straight line into a bent curve or vice versa is not a perspective transformation any more. You might be looking for some other class of transformation, or for something like a piecewise projective transformation. Without knowing more about your use case, it's very hard to suggest a good class of transformations for this.
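For the poor-input-data case, a hedged sketch of fitting the best perspective transformation over more than four correspondences with OpenCV (the point values are made up):

```python
import cv2
import numpy as np

# Six noisy correspondences (hypothetical values)
src = np.float32([[0, 0], [100, 0], [100, 100],
                  [0, 100], [50, 50], [25, 75]]).reshape(-1, 1, 2)
dst = np.float32([[3, 2], [98, 5], [102, 97],
                  [1, 103], [52, 49], [24, 77]]).reshape(-1, 1, 2)

# cv2.RANSAC rejects gross outliers (reprojection threshold in pixels);
# method=0 would instead fit all points in a least-squares sense
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
```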
QUESTION
I am working on a project in which my code recognizes an image of a Sudoku puzzle and then solves it. I am on the image recognition part right now. It was working fine until I realized that the whole program had been flipped on the y-axis. So I had replaced
...ANSWER
Answered 2021-Nov-15 at 02:51
It seems that the input corners are wrongly calculated. Within your `perspectiveTransform` function, you have the following snippet that apparently calculates the four corners of the Sudoku puzzle:
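The asker's snippet is not reproduced here. As a hedged illustration of the idea only, one common way to compute and order a quadrilateral's four corners from an unordered point set is:

```python
import numpy as np

def order_corners(pts):
    """Order four unordered (x, y) points as TL, TR, BR, BL."""
    pts = np.asarray(pts, dtype=np.float32).reshape(4, 2)
    s = pts.sum(axis=1)              # x + y: min at top-left, max at bottom-right
    d = np.diff(pts, axis=1)[:, 0]   # y - x: min at top-right, max at bottom-left
    return np.float32([pts[np.argmin(s)], pts[np.argmin(d)],
                       pts[np.argmax(s)], pts[np.argmax(d)]])
```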
QUESTION
I have two satellite images (two bands), and I want to align them using a Delaunay-based transformation; the aim is to obtain a high-quality RGB image.
Note: this code succeeds with the cv2.perspectiveTransform function, but I want another transformation for more accuracy.
...ANSWER
Answered 2021-Nov-02 at 08:32
I solved this error by modifying the reshape:
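The corrected reshape itself is not shown above; as in the earlier answers, cv2.perspectiveTransform expects points shaped (N, 1, 2). For the Delaunay-based alignment the question asks about, one possible sketch uses scikit-image's PiecewiseAffineTransform, which triangulates the control points (Delaunay) and fits one affine map per triangle; the file name and point values here are hypothetical:

```python
import numpy as np
from skimage import io
from skimage.transform import PiecewiseAffineTransform, warp

image = io.imread("band1.tif")  # hypothetical file name

# Matched control points (hypothetical values); warp() treats the
# transform as a mapping from output coordinates to input coordinates
src = np.float32([[0, 0], [500, 0], [500, 500], [0, 500], [250, 250]])
dst = np.float32([[12, 7], [498, 3], [505, 512], [4, 495], [255, 248]])

tform = PiecewiseAffineTransform()
tform.estimate(src, dst)

# Each Delaunay triangle gets its own affine map
aligned = warp(image, tform)
```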
QUESTION
I'm trying to write relatively simple code in which I extract the contours of some areas in the image and draw one or more rectangles on them (normally with an object-detection model), which works fine. However, I then need to transform the coordinates of the rectangles drawn on the cropped areas back to the original image (and draw them over it to make sure the conversion went well), which is not currently working.
The problem is probably in the way I calculate the transformation matrix for the final `cv2.getPerspectiveTransform`, but I can't find the right way to do it yet. I have tried with the coordinates of the original system (as in the example below) and with the coordinates of the boxes that were drawn, but neither gives the expected result.
The example presented is a simplified case of drawing boxes since, normally, their coordinates will be given by the AI model. Also, one cannot simply reuse `cv2.warpPerspective` on the drawn images, since the main interest is in the final coordinates of the drawn boxes.
Starting image:
Result for the first extracted rectangle (good):
Result for the second extracted rectangle (good):
Result for the starting image with the rectangle drawn (wrong result):
...ANSWER
Answered 2021-Oct-11 at 23:08
As suggested in the comments to the question, the solution was to just draw a polygon with 4 points instead of continuing to try to draw rectangles with 2 points.
I'm sharing the code for the final solution (along with some code related to the tests I did), in case someone else runs into a similar issue.
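The final code itself is not reproduced here; a minimal sketch of the idea (the matrix and box coordinates below are hypothetical stand-ins) is to push all four box corners through the inverse perspective matrix and draw the resulting quad as a polygon:

```python
import cv2
import numpy as np

image = cv2.imread("original.png")  # hypothetical file name

# Hypothetical matrix mapping the original image to the cropped/warped area
M = cv2.getPerspectiveTransform(
    np.float32([[50, 40], [400, 60], [390, 300], [60, 280]]),
    np.float32([[0, 0], [350, 0], [350, 250], [0, 250]]))

# Box detected in the warped crop, given as two corners (x1, y1) and (x2, y2)
x1, y1, x2, y2 = 20, 30, 120, 90
box = np.float32([[x1, y1], [x2, y1], [x2, y2], [x1, y2]]).reshape(-1, 1, 2)

# Map all four corners back with the inverse matrix; they are generally
# no longer axis-aligned, so draw a 4-point polygon, not a rectangle
orig_pts = cv2.perspectiveTransform(box, np.linalg.inv(M))
cv2.polylines(image, [np.int32(orig_pts)], True, (0, 255, 0), 2)
```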
QUESTION
I have an array of coordinates that mark an area on the floor.
I want to generate a new array in which all the coordinates are transformed, so that I get a warped array. The points should look like the following image. Please note that I want to generate the graphic using the new array; it does not exist yet and gets generated after I have the new array.
I know the distance between all coordinates, if that helps. The coordinates JSON looks like this, where `distance_to_next` contains the distance to the next point in cm:
ANSWER
Answered 2021-Sep-13 at 16:46
The points in your coordinates JSON do not align with the white polygon. If I use them I get the green polygon as shown below:
QUESTION
I need to make a panoramic view from a series of three pictures. I created an ORB detector, detected and computed keypoints and descriptors for the three pictures, and matched the most likely similar keypoints between:
image 1 and image 2
image 2 and image 3
I know how to compute the panoramic view between only 2 images, and I do it between img1 and img2 and between img2 and img3. But for the last step, I want to find the panoramic view of these 3 pictures using an affine transformation with OpenCV's RANSAC algorithm, and I don't know how to do that for 3 pictures. Of course, I have to choose image 2 as the center of the panorama.
I haven't found a good enough explanation of how to compute the panoramic view of these 3 pictures. Can someone please help me implement what I need?
Here is my code, where I print the panorama between img1 and img2 and between img2 and img3:
...ANSWER
Answered 2021-Aug-18 at 21:18
Instead of creating panoramas from images 1 & 2 and images 2 & 3 and then combining them, try doing it sequentially, like this:
- Take images 1 and 2 and compute their matching features.
- Compute a homography relating image 2 to image 1.
- Warp image 2 using this homography to stitch it with image 1. This gives you intermediate result 1.
- Take intermediate result 1 as the first image and repeat steps 1-3 for the next image in the sequence.
A good blog post about the same to get started is here: https://kushalvyas.github.io/stitching.html
To compare your algorithm's performance, you can check the result from OpenCV's built-in Stitcher class:
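A rough sketch of that sequential loop, with the built-in Stitcher comparison at the end (ORB features and RANSAC homographies; the file names are hypothetical and the simple paste blending is only illustrative):

```python
import cv2
import numpy as np

def stitch_pair(base, new):
    # Match ORB features between the current panorama and the next image
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(base, None)
    k2, d2 = orb.detectAndCompute(new, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d2, d1)
    matches = sorted(matches, key=lambda m: m.distance)[:100]

    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Homography relating the new image to the base frame, with RANSAC
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the new image into the base frame, then paste the base on top
    h, w = base.shape[:2]
    pano = cv2.warpPerspective(new, H, (w + new.shape[1], h))
    pano[0:h, 0:w] = base
    return pano

img1, img2, img3 = (cv2.imread(f) for f in ("1.jpg", "2.jpg", "3.jpg"))
pano = stitch_pair(stitch_pair(img1, img2), img3)

# For comparison, OpenCV's built-in stitcher
status, reference = cv2.Stitcher_create().stitch([img1, img2, img3])
```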
QUESTION
I have already looked at many answers on here about the System.AccessViolationException: 'Attempted to read or write protected memory.' error, and they all seem to be about something that has nothing to do with images.
I am learning image processing, but I am still learning how to debug software. I am trying to search for an image inside another image using feature-based detection with BRISK and the brute-force matcher. However, for some reason, every time I run the program and click button2 I get the above error, and I have no idea how to debug it. The exception is thrown on the line matcher.KnnMatch(sceneDescriptor, matches, k); the value of matches is null when I hover over it.
I have used NuGet in Visual Studio 2019 to install Emgu.CV 4.5.1, Emgu.CV.Bitmap, and Emgu.CV.runtime.windows 4.5.1. I have even tried changing my compile mode from x86 to x64. I have no idea what I am doing wrong.
File - myImgprocessing.cs:
...ANSWER
Answered 2021-Jun-19 at 10:11
I did not test my proposed solution, but I am pretty confident that you need to initialize matches before passing it to the KnnMatch method:
QUESTION
I am trying to check if this image:
is contained inside images like this one:
I am using feature detection (SURF) and homography because template matching is not scale-invariant. Sadly, all the keypoints except a few are in the wrong positions. Should I maybe try template matching, scaling the image multiple times? If so, what would be the best approach to scaling the image?
Code:
...ANSWER
Answered 2021-Jun-05 at 13:41
If looking for specific colors is an option, you can rely on segmentation to find candidates quickly, regardless of size. But you'll have to add some post-filtering.
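A minimal sketch of that color-segmentation idea (the HSV range and file name are hypothetical and would need tuning for the actual template):

```python
import cv2
import numpy as np

img = cv2.imread("scene.png")  # hypothetical file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Hypothetical HSV range for the target's dominant color
mask = cv2.inRange(hsv, np.array([100, 80, 80]), np.array([130, 255, 255]))

# Candidate regions at any scale; post-filter by area and aspect ratio
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]
```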
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported