homography | Homography Matrix Estimation using SVD | Math library

by towardsautonomy | Python Version: Current | License: No License

kandi X-RAY | homography Summary

homography is a Python library typically used in Utilities and Math applications. It has no reported bugs or vulnerabilities, and it has low support. However, no build file is available. You can download it from GitHub.

This library demonstrates how to estimate a homography matrix given a set of source and destination points. It uses the SVD method to solve the resulting set of linear equations. The functions are implemented in homography.py, and a test script is provided as test_homography.py.
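The approach can be sketched as follows. This is a minimal numpy illustration of the standard DLT/SVD estimation, not the repository's exact homography.py code (the function names here are my own):

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst via SVD (DLT).

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of the linear system A h = 0
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A)
    # The solution is the right singular vector with the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def apply_homography(H, pts):
    """Apply H to (N, 2) cartesian points, returning (N, 2) cartesian points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # back to cartesian
```

For example, estimating H from four corner correspondences and applying it back to the source points should reproduce the destination points up to numerical precision.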
Support
    Quality
      Security
        License
          Reuse

            kandi-support Support

              homography has a low active ecosystem.
              It has 3 star(s) with 0 fork(s). There are 2 watchers for this library.
              It had no major release in the last 6 months.
              homography has no issues reported. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of homography is current.

            kandi-Quality Quality

              homography has 0 bugs and 0 code smells.

            kandi-Security Security

              homography has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              homography code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              homography does not have a standard license declared.
              Check the repository for any license declaration and review the terms closely.
              Without a license, all rights are reserved, and you cannot use the library in your applications.

            kandi-Reuse Reuse

              homography releases are not available. You will need to build from source code and install.
homography has no build file. You will need to create the build yourself to build the component from source.
              It has 55 lines of code, 2 functions and 2 files.
              It has low code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

kandi has reviewed homography and discovered the below as its top functions. This is intended to give you an instant insight into the functionality homography implements, and to help you decide if it suits your requirements.
• Compute the homography between source and destination vectors.
• Apply the homography matrix to cartesian coordinates.

            homography Key Features

            No Key Features are available at this moment for homography.

            homography Examples and Code Snippets

            No Code Snippets are available at this moment for homography.

            Community Discussions

            QUESTION

            Compute Homography Matrix from Intrinsic and Extrinsic Matrices
            Asked 2022-Mar-30 at 17:27

I have intrinsic (K) and extrinsic ([R|t]) matrices from camera calibration. How do I compute the homography matrix (H)?

I have tried H = K [R|t] with the z-component of the R matrix set to 0, and updated H so that the destination image points lie completely within the frame, but it didn't give the desired H. I am actually trying to stitch multiple images using homography, given the intrinsic and extrinsic matrices. When I use feature matching followed by homography computation, the result is completely fine, but I need to calculate the matrix H from the K and [R|t] matrices.

            ...

            ANSWER

            Answered 2022-Mar-30 at 17:27

            There seems to be some confusion. If you are using a homography to map images onto each other, then you are assuming the camera motion between them is a pure rotation.

            If this rotation is given, e.g. as a rotation matrix R, then the homography is simply H = K * R * inv(K). If it isn't, you must estimate it from the images. The simplest case is probably pan-tilt motion (think camera on a tripod). For this case you only need one point match between each image pair.
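As a quick sketch of that formula (the K and R values below are made up for illustration):

```python
import numpy as np

# Illustrative intrinsics; fx, fy, cx, cy are placeholder values
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def rotation_homography(K, R):
    """Homography induced by a pure camera rotation R: H = K * R * inv(K)."""
    return K @ R @ np.linalg.inv(K)

# Example: a 5-degree pan (rotation about the camera's y axis)
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
H = rotation_homography(K, R)
```

The resulting H can then be passed to cv2.warpPerspective to map one view onto the other.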

            EDIT: Refinement of initial Homography Solution.

            You should also look into bundle adjustment - e.g. using the excellent Ceres solver. A good (if a bit dated) introduction to B.A. is https://lear.inrialpes.fr/pubs/2000/TMHF00/Triggs-va99.pdf .

            For image stitching, the basic idea is to introduce for each matched image point pair (you may need tens/hundreds, well spread across the image area) an auxiliary 3D point lying on a plane, i.e. with one coordinate equal to zero. You then jointly optimize the camera parameters (intrinsic and extrinsic) and 3D point locations to minimize the reprojection error of the 3D points into the image points you have matched. Once you have a solution, you have choices to make:

            1. If the scene is far away from the camera or the camera translation between images can be ignored, you can "convert" the camera rotations between images into homographies, and use them to stitch.
2. If the camera translations cannot be ignored, things quickly become a lot more complicated. If there are significant visible occlusions (regions of the scene visible in one image but not in others), then no pure-stitching method can resolve them in general without a 3D model of the scene. You can sometimes use approximations, for example subdividing the scene into approximately planar patches.

            Source https://stackoverflow.com/questions/71613318

            QUESTION

How to align a photo using OpenCV Python
            Asked 2022-Mar-29 at 04:28

I need help: I am trying to align two ID cards using OpenCV. If I do it for two ID cards from the same person, then the result works, like the picture below.

            Before Alignment :

The alignment works perfectly if I try it with the same person's ID, as in the picture below:

But if I do it for two ID cards that come from two different people, then the result is messy. I need help on how to do the alignment in this case.

            ...

            ANSWER

            Answered 2022-Mar-29 at 04:28

Two ID cards from different people may not work well because, in such a case, the two images will be similar but not exactly the same; for example, the name and photo will be different. Hence the key points and descriptors will differ between the two images, and your output is affected.

You can detect the outer edge of the ID card by using edge detection and selecting the largest contour, and then use a perspective transform to get a top-down view, if that's what you are aiming for.

            Source https://stackoverflow.com/questions/71649479

            QUESTION

            Structured-light 3D scanner - depth map from pixel correspondence
            Asked 2022-Feb-24 at 12:17

I am trying to create a structured-light 3D scanner.

            Camera calibration

Camera calibration is a copy of the official OpenCV tutorial. As a result, I have the camera's intrinsic parameters (camera matrix).

            Projector calibration

The projector calibration may not be correct, but the process was: the projector shows a chessboard pattern and the camera takes some photos from different angles. The images are undistorted with cv.undistort using the camera parameters, and the resulting images are then used for calibration following the official OpenCV tutorial. As a result, I have the projector's intrinsic parameters (projector matrix).

Rotation and Translation

From cv.calibrate I get rotation and translation vectors as results, but the vector count is equal to the image count, and I think these are not the correct ones, because I moved the camera and projector during calibration. My new idea is to project the chessboard onto the scanning background and perform calibration; in this way I will have a rotation vector and a translation vector. I don't know if that is the correct way.

            Scanning

            Process of scanning is:

Generate patterns -> undistort patterns with the projector matrix -> project the patterns and take photos with the camera -> undistort the taken photos with the camera matrix

            Camera-projector pixels map

I use a Gray-code pattern, and with cv.graycode.getProjPixel I have the pixel mapping between camera and projector. My projector is not very high resolution, and the last patterns are not very readable. I will create a custom function that generates the mapping without the last patterns.

            Problem

I don't know how to get the depth map (Z) from all this information. My confusion is because there are 3 coordinate systems: camera, projector, and world.

How do I find Z in code? Can I just get Z from the pixel mapping between image and pattern?

Information that I have:

• p(x,y,1) = R*q(x,y,z) + T - where p is the image point, q is the real-world point (maybe), and R and T are the rotation vector and translation vector. How do I find R and T?
• Z = B.f/(x-x') - where Z is the coordinate (depth) and B is the baseline (distance between camera and projector). I can measure the baseline by hand, but maybe this is not the way; (x-x') is the distance between the camera pixel and the projector pixel. I don't know how to get the baseline. Maybe this is the translation vector?
• I tried to get 4 corresponding points, use them in cv.getPerspectiveTransform, and use the result in cv.reprojectImageTo3D. But cv.getPerspectiveTransform returns a 3x3 matrix, while cv.reprojectImageTo3D expects Q, a 4×4 perspective transformation matrix that can be obtained with stereoRectify.

            Similar Questions:

There are many other resources, and I will update the list in a comment. I am missing something and can't figure out how to implement it.

            ...

            ANSWER

            Answered 2022-Feb-24 at 12:17

Let's assume p(x,y) is the image point and the disparity is (x-x'). You can obtain the depth as:
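The snippet that followed was not captured here. Presumably it applies the stereo relation already quoted in the question, Z = B·f/(x-x'); a hedged numpy sketch of that relation (the function name is mine):

```python
import numpy as np

def depth_from_disparity(disparity, baseline, focal_px):
    """Z = B * f / (x - x'), elementwise over a disparity map in pixels.

    baseline: camera-projector distance (e.g. in meters)
    focal_px: focal length expressed in pixels
    Zero disparities are returned as inf rather than dividing by zero.
    """
    disparity = np.asarray(disparity, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(disparity != 0,
                        baseline * focal_px / disparity,
                        np.inf)
```

For example, with a 0.2 m baseline and an 800 px focal length, a 10 px disparity gives Z = 0.2 * 800 / 10 = 16 m.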

            Source https://stackoverflow.com/questions/71203311

            QUESTION

            How do I format the mask parameter for cv2.detail.BestOf2NearestMatcher apply2 function
            Asked 2022-Jan-28 at 22:52

I am looking to speed up some bundle adjustment code by specifying which images overlap. I created a function that places a grid of points in each image and then tests the pairwise overlap. Basically, it approximates the intersection area of the photos from the initial homography. Here is an example with these 4 overlapping photos:

I've been working off of this example: https://github.com/amdecker/ir/blob/master/Stitcher.py

            ...

            ANSWER

            Answered 2022-Jan-28 at 22:52

            It looks like mask needs to be a NxN uint8 (boolean) matrix, N being the number of images, i.e. len(features).

If the docs are unclear/vague/lacking, feel free to open an issue on the GitHub repo about it. This function seems a good candidate for improvement.
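A hedged sketch of building such a mask; `make_match_mask` is a hypothetical helper, and the adjacency pairs below are toy values:

```python
import numpy as np

def make_match_mask(n_images, overlapping_pairs):
    """Boolean NxN uint8 mask: mask[i, j] = 1 means 'try to match
    image i against image j'; everything else is skipped."""
    mask = np.zeros((n_images, n_images), dtype=np.uint8)
    for i, j in overlapping_pairs:
        mask[i, j] = 1
        mask[j, i] = 1  # matching is symmetric
    return mask

# e.g. 4 photos where only consecutive ones overlap
mask = make_match_mask(4, [(0, 1), (1, 2), (2, 3)])
```

The resulting array would then be passed alongside the features list, e.g. `matcher.apply2(features, mask)`.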

            Source https://stackoverflow.com/questions/70897908

            QUESTION

            The reason for the half integer coordinates
            Asked 2021-Dec-08 at 17:01

            I am reading some projective geometry image warping code from Google

            ...

            ANSWER

            Answered 2021-Dec-08 at 17:01

            Some people consider a pixel to be a point sample in a grid, some people consider them to be a 1x1 square.

            In that latter category, some people consider the 1x1 square to be centered on integer coordinates, such that one square ranges from 0.5 to 1.5, for example. Other people consider the square to range from 0.0 to 1.0, for example, and thus the pixel is centered on "half integer".

            In short, it is just a choice of coordinate system. It does not matter what coordinate system you use, as long as you use it consistently.
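A trivial illustration of the two conventions for the center of pixel index i (the function names are mine):

```python
def center_integer_convention(i):
    # pixel i occupies [i - 0.5, i + 0.5), centered on the integer i
    return float(i)

def center_half_integer_convention(i):
    # pixel i occupies [i, i + 1), centered on the half integer i + 0.5
    return i + 0.5
```

The half-integer offsets in warping code typically come from converting between an integer pixel index and the center of that pixel under the second convention.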

            Source https://stackoverflow.com/questions/70278780

            QUESTION

            question about python and opencv for merge images
            Asked 2021-Oct-09 at 12:03

I wrote this code with Python and OpenCV. I have 2 images (the first is an image from a football match, 36.jpg):

            36.jpg

and the second is pitch.png, an image of the lines of a football field (in red) in PNG format, i.e. without a white background:

            pitch.png

With this code, I selected 4 coordinate points in both images (the 4 corners of the right penalty area), and then with cv2.warpPerspective I can show the first image from a top view, as below:

            top view

My question is this: what changes are needed in my code so that the red lines of the second image show on the first image, like the images below (which I drew in a paint app)?

            desired

Thanks in advance for your help.

This is my code:

            ...

            ANSWER

            Answered 2021-Oct-09 at 12:03

            Swap your source and destination images and points. Then, warp the source image:

            Source https://stackoverflow.com/questions/69504812

            QUESTION

            Replace cv2.warpPerspective for big images
            Asked 2021-Sep-30 at 17:27

I use Python OpenCV to register images, and once I've found the homography matrix H, I use cv2.warpPerspective to compute the final transformation.

However, it seems that cv2.warpPerspective is limited to short encoding for performance purposes, see here. I did some tests, and indeed the limit on image dimension is 32767 pixels, i.e. 2^15, which makes sense with the explanation given in the other discussion.

            Is there an alternative to cv2.warpPerspective? I already have the homography matrix, I just need to do the transformation.

            ...

            ANSWER

            Answered 2021-Sep-30 at 17:27

            After looking at alternative libraries, there is a solution using skimage.

If H is the homography matrix, then this OpenCV code:
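The snippet itself was not captured here. To my understanding, the skimage counterpart of cv2.warpPerspective is skimage.transform.warp with a ProjectiveTransform; note that warp() expects the inverse mapping (output coordinates back to input coordinates), hence the .inverse below. A hedged sketch (the wrapper name is mine):

```python
import numpy as np
from skimage import transform

def warp_with_skimage(image, H, output_shape=None):
    """skimage equivalent of cv2.warpPerspective(image, H, dsize).

    skimage's warp() wants the inverse map (output -> input coordinates),
    so we hand it the inverse of the forward transform H.
    """
    tform = transform.ProjectiveTransform(matrix=H)
    return transform.warp(image, tform.inverse,
                          output_shape=output_shape,
                          preserve_range=True)
```

skimage does its index arithmetic in floating point, so it is not subject to the 16-bit coordinate limit, at the cost of being slower than OpenCV.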

            Source https://stackoverflow.com/questions/69367152

            QUESTION

            Can I balance an extremely bright picture in python? This picture is a result of thousands of pictures stitched together to form a panorama
            Asked 2021-Aug-19 at 20:02

My aim is to stitch 1-2 thousand images together. I find the key points in all the images, then I find the matches between them. Next, I find the homography between the two images. I also take into account the current homography and all the previous homographies. Finally, I warp the images based on the combined homography. (My code is written in Python 2.7.)

The issue I am facing is that when I overlay the warped images, they become extremely bright. The reason is that most of the area between two consecutive images is common/overlapping. So, when I overlay them, the intensities of the common areas increase by a factor of 2, and as more and more images are overlaid, the brighter the values become, until eventually I get a matrix where all the pixels have the value 255.

            Can I do something to adjust the brightness after every image I overlay?

I am combining/overlaying the images via the OpenCV function cv.addWeighted():
            dst = cv.addWeighted( src1, alpha, src2, beta, gamma)

Here, I am taking alpha and beta = 1:

            dst = cv.addWeighted( image1, 1, image2, 1, 0)

I also tried decreasing the values of alpha and beta, but then another problem arises: when around 100 images have been overlaid, the first ones start to vanish, probably because their intensity approaches zero after being multiplied by 0.5 at every iteration. The function looked as follows (here, I also set gamma to 5):
dst = cv.addWeighted( image1, 0.5, image2, 0.5, 5)

Can someone please help with how I can solve the problem of images getting extremely bright (when alpha = beta = 1) or images vanishing after a certain point (when alpha and beta are both around 0.5)?

            This is the code where I am overlaying the images:

            ...

            ANSWER

            Answered 2021-Aug-10 at 21:46

When you stitch two images, the pixel values of the overlapping part should not just add up. Ideally, two matching pixels should have the same value (a spot in the first image should also have the same value in the second image), so you simply keep one value.

In reality, two matching pixels may have slightly different pixel values, so you may simply average them out. Better still, adjust their exposure levels to match each other before stitching.

For many images to be stitched together, you will need to adjust all of their exposure levels to match. Equalizing exposure levels is a rather big topic; please read about "histogram equalization" if you are not familiar with it yet.

            Also, it is very possible that there is high contrast across that many images, so you may need to make your stitched image an HDR (high dynamic range) image, to prevent pixel value overflow/underflow.
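One hedged sketch of the averaging idea from the answer: accumulate in float and divide by the per-pixel coverage count, instead of using fixed addWeighted weights. This avoids both the brightness blow-up (alpha = beta = 1) and the vanishing-image problem (alpha = beta = 0.5); `average_composite` is my own helper name:

```python
import numpy as np

def average_composite(warped_images):
    """Average overlapping warped images instead of summing them.

    Each image is assumed to already be warped onto a common canvas,
    with black (0) outside its footprint.
    """
    acc = np.zeros(warped_images[0].shape, np.float64)
    count = np.zeros(warped_images[0].shape[:2], np.float64)
    for img in warped_images:
        mask = img.max(axis=2) > 0           # pixels this image actually covers
        acc[mask] += img[mask]
        count[mask] += 1
    count = np.maximum(count, 1)[..., None]  # avoid division by zero
    return (acc / count).astype(np.uint8)
```

Because every pixel is divided by exactly the number of images that cover it, adding more images never changes the overall brightness level.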

            Source https://stackoverflow.com/questions/68730405

            QUESTION

How to make a panoramic view of a series of pictures using Python, OpenCV, ORB descriptors and RANSAC from OpenCV
            Asked 2021-Aug-18 at 21:18

I need to make a panoramic view from a series of pictures (3 pictures). I created an ORB detector, detected and computed keypoints and descriptors for the three pictures, and matched the most likely similar keypoints between:

            1. image 1 and image 2

            2. image 2 and image 3

I know how to compute and find the panoramic view between only 2 images, and I do it between img1 and img2, and between img2 and img3. But then, for the last step, I want to find the panoramic view of these 3 pictures using an affine transformation with the RANSAC algorithm from OpenCV, and I don't know how to do that (a panoramic view for 3 pictures). Of course, I have to choose image 2 as the center of the panorama.

I didn't find a good enough explanation for me to compute and find the panoramic view of these 3 pictures. Can someone please help me implement what I need?

Here is my code, where I print the panorama between img1 and img2, and between img2 and img3:

            ...

            ANSWER

            Answered 2021-Aug-18 at 21:18

Instead of creating panoramas from images 1 & 2 and images 2 & 3 and then combining them, try doing it sequentially. So, kind of like this:

1. Take images 1 and 2 and compute their matching features.
            2. Compute a Homography relating image 2 to image 1.
            3. Warp image 2 using this Homography to stitch it with image 1. This gives you an intermediate result 1.
            4. Take this intermediate result 1 as first image and repeat step 1-3 for next image in the sequence.

A good blog post about this to get started is here: https://kushalvyas.github.io/stitching.html

To compare your algorithm's performance, you can look at the result from the OpenCV Stitcher class:

            Source https://stackoverflow.com/questions/68836179

            QUESTION

            How to use homography to calculate the object's position on the plane
            Asked 2021-Aug-11 at 13:09

            I have to do the following work:

            ...

            ANSWER

            Answered 2021-Aug-11 at 13:09

The assignment is a bit unclear. This is what I understand you need to do:

1. All frames below should be taken from a steady camera viewing a planar surface at some angle.
            2. Take one frame with the chessboard on a planar surface.
            3. Take one frame of the planar surface without chessboard (this is the background frame).
            4. Take a set of frames of a moving object on top of the plane (no chessboard).
5. The result should be an XY plot of the position of the object relative to the given plane. The chessboard defines the (0,0) and the scale of the plane.

You will also need a template image of the chessboard. I suggest a chessboard of pattern size (9,6) from here.

            First, find the homography H from the camera to the template (assuming gray-level image):

            Source https://stackoverflow.com/questions/68657280

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install homography

            You can download it from GitHub.
            You can use homography like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            Find more information at:

            CLONE
          • HTTPS

            https://github.com/towardsautonomy/homography.git

          • CLI

            gh repo clone towardsautonomy/homography

          • sshUrl

            git@github.com:towardsautonomy/homography.git


            Explore Related Topics

            Consider Popular Math Libraries

            KaTeX

            by KaTeX

            mathjs

            by josdejong

            synapse

            by matrix-org

            gonum

            by gonum

            bignumber.js

            by MikeMcl

            Try Top Libraries by towardsautonomy

            VR3Dense

by towardsautonomy | Jupyter Notebook

            DETR

by towardsautonomy | Python

            unity_mlagents_banana_navigator

by towardsautonomy | Python