homography | Homography Matrix Estimation using SVD | Math library
kandi X-RAY | homography Summary
This library demonstrates how to implement homography matrix estimation given a set of source and destination points. It uses the SVD method to solve the resulting set of linear equations. Functions are implemented in homography.py and a test script is provided as test_homography.py.
Top functions reviewed by kandi - BETA
- Compute the homography between source and destination vectors.
- Apply the homography matrix to Cartesian coordinates.
homography Key Features
homography Examples and Code Snippets
Community Discussions
Trending Discussions on homography
QUESTION
I have Intrinsic (K) and Extrinsic ([R|t]) matrices from camera calibration. How do I compute homography matrix (H)?
I have tried H = K [R|t]
with the z-column of the R matrix set to 0, and updated H so that the destination image points lie completely within the frame, but it didn't give the desired H.

I am actually trying to stitch multiple images using a homography, given the intrinsic and extrinsic matrices. When I do feature matching and then homography computation, the result is completely fine, but I need to calculate the matrix H from the K and [R|t] matrices.
ANSWER
Answered 2022-Mar-30 at 17:27

There seems to be some confusion. If you are using a homography to map images onto each other, then you are assuming the camera motion between them is a pure rotation.
If this rotation is given, e.g. as a rotation matrix R, then the homography is simply H = K * R * inv(K). If it isn't, you must estimate it from the images. The simplest case is probably pan-tilt motion (think camera on a tripod). For this case you only need one point match between each image pair.
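As a minimal sketch of H = K * R * inv(K) (the intrinsics and the 10-degree pan angle below are made up for illustration):

```python
import numpy as np

# Homography induced by a pure camera rotation: H = K * R * inv(K).
def homography_from_rotation(K, R):
    return K @ R @ np.linalg.inv(K)

# Made-up intrinsics and a 10-degree pan (rotation about the y axis).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
theta = np.deg2rad(10.0)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [           0.0, 1.0,           0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])

H = homography_from_rotation(K, R)

# Map the principal point from one view into the other (homogeneous coords).
p = np.array([320.0, 240.0, 1.0])
q = H @ p
q /= q[2]   # for a pure pan, y stays at 240 and x shifts by f*tan(theta)
```

Note that [R|t] from calibration cannot be turned into an image-to-image homography this way unless the translation between the views is negligible (or the scene is planar).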
EDIT: Refinement of initial Homography Solution.
You should also look into bundle adjustment - e.g. using the excellent Ceres solver. A good (if a bit dated) introduction to B.A. is https://lear.inrialpes.fr/pubs/2000/TMHF00/Triggs-va99.pdf .
For image stitching, the basic idea is to introduce for each matched image point pair (you may need tens/hundreds, well spread across the image area) an auxiliary 3D point lying on a plane, i.e. with one coordinate equal to zero. You then jointly optimize the camera parameters (intrinsic and extrinsic) and 3D point locations to minimize the reprojection error of the 3D points into the image points you have matched. Once you have a solution, you have choices to make:
- If the scene is far away from the camera or the camera translation between images can be ignored, you can "convert" the camera rotations between images into homographies, and use them to stitch.
- If the camera translations cannot be ignored, things quickly become a lot more complicated. If there are significant visible occlusions (regions of the scene visible in one image but not in others), then no pure-stitching method can resolve them in general without the use of a 3D model of the scene. You can sometimes use approximations, for example subdividing the scene into approximately planar patches.
QUESTION
I need help. I am trying to align two ID cards using OpenCV. The alignment works perfectly if both cards come from the same person, as in the picture below:

But if the two ID cards come from two different people, the result is messy. I need help on how to do the alignment in this case.
...ANSWER
Answered 2022-Mar-29 at 04:28

Two ID cards from different people may not work well because, in such a case, the two images will be similar but not exactly the same (for example, the name and the photo will differ), hence the keypoints and descriptors will be different for the two images and your output is affected.
You can detect the outer edge of the ID card using edge detection, select the largest contour, and then use a perspective transform to get a top-down view, if that's what you are aiming for.
QUESTION
I am trying to create a structured-light 3D scanner.

Camera calibration

Camera calibration is a copy of the OpenCV official tutorial. As a result I have the camera intrinsic parameters (camera matrix).

Projector calibration

The projector calibration may not be correct, but the process was: the projector shows a chessboard pattern and the camera takes photos from different angles. The images are undistorted with the camera parameters (cv.undistort), and the resulting images are used for calibration, again following the OpenCV official tutorial. As a result I have the projector intrinsic parameters (projector matrix).

From cv.calibrate I get rotation and translation vectors, but there are as many vectors as images, and I think these are not the correct ones, because I moved the camera and projector during calibration.

My new idea is to project the chessboard onto the scanning background and perform calibration there; that way I would have a single rotation vector and translation vector. I don't know whether that is the correct approach.
The process of scanning is: generate patterns -> undistort the patterns with the projector matrix -> project each pattern and take photos with the camera -> undistort the taken photos with the camera matrix.

I use the Gray code pattern, and with cv.graycode.getProjPixel I have the pixel mapping between camera and projector. My projector is not very high resolution and the last patterns are not very readable, so I will create a custom function that generates the mapping without the last patterns.
I don't know how to get the depth map (Z) from all this information. My confusion comes from there being three coordinate systems: camera, projector, and world. How do I find Z in code? Can I just get Z from the pixel mapping between image and pattern?
Information that I have:

- p(x,y,1) = R*q(x,y,z) + T, where p is the image point, q is the real-world point (maybe), and R and T are the rotation vector and translation vector. How do I find R and T?
- Z = B*f/(x-x'), where Z is the depth coordinate, B is the baseline (the distance between camera and projector; I can measure it by hand, but maybe that is not the right way), and (x-x') is the distance between the camera pixel and the projector pixel. I don't know how to get the baseline. Maybe it is the translation vector?
- I tried to get 4 matching points, use them in cv.getPerspectiveTransform, and use the result in cv.reprojectImageTo3D. But cv.getPerspectiveTransform returns a 3x3 matrix, and cv.reprojectImageTo3D uses Q, a 4x4 perspective transformation matrix that can be obtained with stereoRectify.
Similar questions:

- How is point cloud data acquired from the structured light 3D scanning? The answer is "you need to define a vector that goes from the camera perspective center to the pixel in the image and then rotate this vector by the camera rotation". But I don't know how to define/find this vector, and the rotation vector is needed.
- How to compute the rotation and translation between 2 cameras? The question is about R and T between two cameras, but almost everywhere it is written that a projector is an inverse camera. One good answer is "The only thing you have to do is to make sure that the calibration chessboard is seen by both of the cameras." But I think that if I project a chessboard pattern, it will be additionally distorted by the wall (a projective transformation).

There are many other resources and I will update the list in a comment. I am missing something and can't figure out how to implement it.
...ANSWER
Answered 2022-Feb-24 at 12:17

Let's assume p(x,y) is the image point and the disparity is (x-x'). You can obtain the depth as Z = B*f/(x-x').
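A minimal numpy sketch of that relation (B in whatever unit the depth should come out in, f in pixels; zero-disparity pixels carry no depth and are flagged invalid here by mapping them to 0):

```python
import numpy as np

def depth_from_disparity(disparity, baseline, focal_px):
    """Z = B * f / (x - x'): baseline B, focal length f in pixels,
    disparity (x - x') in pixels."""
    d = np.asarray(disparity, dtype=np.float64)
    with np.errstate(divide="ignore"):
        Z = baseline * focal_px / d
    Z[~np.isfinite(Z)] = 0.0   # zero-disparity pixels carry no depth
    return Z
```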
QUESTION
I am looking to speed up some bundle adjustment code by specifying which images are overlapping. I created a function that places a grid of points in each image and then tests the pairwise overlap. Basically, it approximates the intersection area of the photos from the initial homography. Here is an example with these 4 overlapping photos:

I've been working off of this example: https://github.com/amdecker/ir/blob/master/Stitcher.py
...ANSWER
Answered 2022-Jan-28 at 22:52

Implementation: https://github.com/opencv/opencv/blob/676a724491de423c5d964d83594f7e99aa0d31c7/modules/stitching/src/matchers.cpp#L338

It looks like mask needs to be an NxN uint8 (boolean) matrix, N being the number of images, i.e. len(features).

If the docs are unclear/vague/lacking, feel free to open an issue on GitHub about it. This function seems a good candidate for improvement.
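For illustration, such a mask for a chain of four images where only consecutive photos overlap might be built like this (the helper is mine, not part of OpenCV):

```python
import numpy as np

def overlap_mask(overlaps, n_images):
    """Build the NxN uint8 mask from known overlapping image-index pairs;
    mask[i, j] != 0 tells the matcher to try matching images i and j."""
    mask = np.zeros((n_images, n_images), dtype=np.uint8)
    for i, j in overlaps:
        mask[i, j] = mask[j, i] = 1   # matching is symmetric
    return mask

# Four photos where only consecutive ones overlap:
mask = overlap_mask([(0, 1), (1, 2), (2, 3)], n_images=4)
```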
QUESTION
I am reading some projective geometry image warping code from Google
...ANSWER
Answered 2021-Dec-08 at 17:01

Some people consider a pixel to be a point sample in a grid; some people consider them to be a 1x1 square.
In that latter category, some people consider the 1x1 square to be centered on integer coordinates, such that one square ranges from 0.5 to 1.5, for example. Other people consider the square to range from 0.0 to 1.0, for example, and thus the pixel is centered on "half integer".
In short, it is just a choice of coordinate system. It does not matter what coordinate system you use, as long as you use it consistently.
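As a small illustration, a homography expressed in integer-centered pixel coordinates can be re-expressed in the half-integer convention by conjugating with the constant half-pixel shift (the helper below is a sketch of the coordinate-system change, not code from the question):

```python
import numpy as np

# Shift from integer-centered pixel coordinates to half-integer-centered ones.
S = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5],
              [0.0, 0.0, 1.0]])

def shift_convention(H):
    """Re-express a homography given in integer-centered coordinates
    in half-integer-centered coordinates: H' = S @ H @ inv(S)."""
    return S @ np.asarray(H, dtype=np.float64) @ np.linalg.inv(S)

# Pure translations are unchanged by the shift; scalings are not, which is
# why mixing the two conventions silently introduces half-pixel errors.
```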
QUESTION
I wrote this code in Python with OpenCV. I have 2 images (the first, 36.jpg, is an image from a football match):

and (the second, pitch.png, is an image of the lines of a football field (red color) in PNG format, i.e. without a white background):

With this code, I selected 4 coordinate points in both images (the 4 corners of the right penalty area), and then with cv2.warpPerspective I can show the first image from a top view, as below:

My question is: what changes are needed in my code so that the red lines of the second image are shown on the first image, like the images below (which I drew in a paint app)?

Thanks in advance for your help. This is my code:
...ANSWER
Answered 2021-Oct-09 at 12:03

Swap your source and destination images and points. Then, warp the source image:
QUESTION
I use Python OpenCV to register images, and once I've found the homography matrix H, I use cv2.warpPerspective to compute the final transformation.

However, it seems that cv2.warpPerspective is limited to short (16-bit) coordinate encoding for performance reasons, see here. I did some tests, and indeed the limit on image dimensions is 32767 pixels, i.e. 2^15, which is consistent with the explanation given in the other discussion.

Is there an alternative to cv2.warpPerspective? I already have the homography matrix, I just need to apply the transformation.
ANSWER
Answered 2021-Sep-30 at 17:27

After looking at alternative libraries, there is a solution using skimage.

If H is the homography matrix, then this OpenCV code:
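One possible skimage equivalent (a sketch: skimage.transform.warp expects the inverse mapping, from output to input coordinates, so the OpenCV-style H is wrapped in a ProjectiveTransform and inverted):

```python
import numpy as np
from skimage.transform import ProjectiveTransform, warp

def warp_perspective_skimage(image, H, output_shape):
    """Rough equivalent of cv2.warpPerspective(image, H, dsize), without the
    16-bit coordinate encoding limit; output_shape is (rows, cols)."""
    tform = ProjectiveTransform(matrix=np.asarray(H, dtype=np.float64))
    # warp() samples the output from the input, hence the inverse map.
    return warp(image, tform.inverse, output_shape=output_shape,
                preserve_range=True)
```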
QUESTION
My aim is to stitch 1-2 thousand images together. I find the key points in all the images, then I find the matches between them. Next, I find the homography between the two images. I also take into account the current homography and all the previous homographies. Finally, I warp the images based on combined homography. (My code is written in python 2.7)
The issue I am facing is that when I overlay the warped images, they become extremely bright. The reason is that most of the area between two consecutive images is common/overlapping. So when I overlay them, the intensities of the common areas increase by a factor of 2, and as more and more images are overlaid, the brighter the values become, until eventually I get a matrix where all the pixels have the value 255.
Can I do something to adjust the brightness after every image I overlay?
I am combining/overlaying the images via the OpenCV function cv.addWeighted():
dst = cv.addWeighted( src1, alpha, src2, beta, gamma)
here, I am taking alpha and beta = 1
dst = cv.addWeighted( image1, 1, image2, 1, 0)
I also tried decreasing the values of alpha and beta, but then another problem arises: when around 100 images have been overlaid, the first ones start to vanish, probably because their intensities go to zero after being multiplied by 0.5 at every iteration. The call looked as follows; here, I also set gamma to 5:
dst = cv.addWeighted( image1, 0.5, image2, 0.5, 5)
Can someone please explain how I can solve the problem of images getting extremely bright (when alpha = beta = 1) or images vanishing after a certain point (when alpha and beta are both around 0.5)?
This is the code where I am overlaying the images:
...ANSWER
Answered 2021-Aug-10 at 21:46

When you stitch two images, the pixel values of the overlapping part should not simply be added up. Ideally, two matching pixels should have the same value (a spot in the first image should have the same value in the second image), so you simply keep one value.

In reality, two matching pixels may have slightly different values, so you may simply average them. Better still, adjust their exposure levels to match each other before stitching.
For many images to be stitched together, you will need to adjust all of their exposure level to match. To equalize their exposure level is a rather big topic, please read about "histogram equalization" if you are not familiar with it yet.
Also, it is very possible that there is high contrast across that many images, so you may need to make your stitched image an HDR (high dynamic range) image, to prevent pixel value overflow/underflow.
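One way to avoid both failure modes from the question (brightness build-up with alpha = beta = 1, and early images fading with repeated 0.5 weights) is to accumulate a running pixel sum and a per-pixel weight, then divide once at the end. A minimal numpy sketch (helper names are mine):

```python
import numpy as np

def accumulate(pano_sum, weight_sum, warped, mask):
    """Add one warped image into the running sums. mask is 1 where the
    warped image has valid pixels and 0 elsewhere."""
    pano_sum += warped.astype(np.float64) * mask[..., None]
    weight_sum += mask.astype(np.float64)
    return pano_sum, weight_sum

def finish(pano_sum, weight_sum):
    """Divide once at the end: overlaps are averaged, not summed, so nothing
    saturates at 255 and early images never fade."""
    w = np.maximum(weight_sum, 1e-9)[..., None]
    return np.clip(pano_sum / w, 0, 255).astype(np.uint8)
```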
QUESTION
I need to make a panoramic view from a series of pictures (3 pictures). I created an ORB detector, detected and computed keypoints and descriptors for the three pictures, and matched the most likely similar keypoints between:

image 1 and image 2

image 2 and image 3

I know how to compute and find the panoramic view between only 2 images, and I do it between img1 and img2, and between img2 and img3. But for the last step, I want to find the panoramic view of these 3 pictures using an affine transformation with the RANSAC algorithm from OpenCV, and I don't know how to do that (a panoramic view for 3 pictures). I should, of course, choose image 2 as the center of the panorama.

I didn't find a good enough explanation of how to compute and find the panoramic view of these 3 pictures. Can someone please help me implement what I need?

Here is my code, where I print the panorama between img1 and img2, and between img2 and img3:
...ANSWER
Answered 2021-Aug-18 at 21:18

Instead of creating panoramas from images 1 & 2 and images 2 & 3 and then combining them, try doing it sequentially, kind of like this:

- Take images 1 and 2 and compute their matching features.
- Compute a Homography relating image 2 to image 1.
- Warp image 2 using this Homography to stitch it with image 1. This gives you an intermediate result 1.
- Take this intermediate result 1 as first image and repeat step 1-3 for next image in the sequence.
A good blog post about the same to get started is here: https://kushalvyas.github.io/stitching.html
To compare your algorithm's performance, you can check the result from the OpenCV Stitcher class:
QUESTION
I have to do the following work:
...ANSWER
Answered 2021-Aug-11 at 13:09

The assignment is a bit unclear. This is what I understand you need to do:

- All frames below should be taken from a steady camera viewing a planar surface at some angle.
- Take one frame with the chessboard on a planar surface.
- Take one frame of the planar surface without chessboard (this is the background frame).
- Take a set of frames of a moving object on top of the plane (no chessboard).
- The result should be an XY plot of the position of the object relative to the given plane. The chessboard actually defines the (0,0) origin and the scale of the plane.
You will also need a template image of the chessboard. I suggest a chessboard with pattern size (9, 6) from here.

First, find the homography H from the camera to the template (assuming gray-level images):
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install homography
You can use homography like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.