homography | python library for 2d homographies

by satellogic · Python · Version: 0.1.7 · License: GPL-3.0

kandi X-RAY | homography Summary

homography is a Python library. It has no reported bugs or vulnerabilities, ships with a build file, carries a Strong Copyleft license (GPL-3.0), and has high support. You can install it with 'pip install homography' or download it from GitHub or PyPI.

python library for 2d homographies

kandi-support: Support

homography has a highly active ecosystem.
It has 13 star(s) with 4 fork(s). There are 9 watchers for this library.
It had no major release in the last 12 months.
There are 2 open issues and 1 has been closed. On average, issues are closed in 12 days. There are no pull requests.
It has a positive sentiment in the developer community.
The latest version of homography is 0.1.7.

kandi-Quality: Quality

              homography has no bugs reported.

kandi-Security: Security

              homography has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

kandi-License: License

              homography is licensed under the GPL-3.0 License. This license is Strong Copyleft.
              Strong Copyleft licenses enforce sharing, and you can use them when creating open source projects.

kandi-Reuse: Reuse

homography releases are available to install and integrate.
A deployable package is available on PyPI.
A build file is available, so you can build the component from source.

            Top functions reviewed by kandi - BETA

kandi has reviewed homography and discovered the below as its top functions. This is intended to give you an instant insight into the functionality homography implements and to help you decide whether it suits your requirements.
• Initialize the homography.
• Create a homography from source coordinates.
• Return the shift at the given point.
• Calculate the distance between two points.
• Adapt a numpy array.
• Return a minor number from a full version string.

            homography Key Features

            No Key Features are available at this moment for homography.

            homography Examples and Code Snippets

            No Code Snippets are available at this moment for homography.

            Community Discussions

            QUESTION

            Is there a metric to quantify the perspectiveness in two images?
            Asked 2021-Jun-14 at 16:59

I am coding a program in OpenCV where I want to adjust the camera position. I would like to know if there is any metric in OpenCV to measure the amount of perspectiveness in two images. How can a homography be used to quantify the degree of perspectiveness in two images like the ones below? The method that comes to my mind is to run edge detection and compare the lengths of parallel edges, but that method is prone to errors.

            ...

            ANSWER

            Answered 2021-Jun-14 at 16:59

            As a first solution I'd recommend maximizing the distance between the image of the line at infinity and the center of your picture.

            Identify at least two pairs of lines that are parallel in the original image. Intersect the lines of each pair and connect the resulting points. Best do all of this in homogeneous coordinates so you won't have to worry about lines being still parallel in the transformed version. Compute the distance between the center of the image and that line, possibly taking the resolution of the image into account somehow to make the result invariant to resampling. The result will be infinity for an image obtained from a pure affine transformation. So the larger that value the closer you are to the affine scenario.
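A rough sketch of that recipe (the line pairs and image center below are placeholders, not from the original post): each line is built from two image points in homogeneous coordinates, each vanishing point is the intersection of one pair, and the image of the line at infinity joins the two vanishing points.

import numpy as np

def line_through(p, q):
    """Homogeneous line through two 2D points (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_line_distance(pair1, pair2, image_center):
    """Distance from the image center to the image of the line at infinity.

    pair1, pair2: each a pair of lines that were parallel in the scene,
    every line described by two image points ((x1, y1), (x2, y2)).
    """
    # Vanishing point of each pair = intersection of its two lines.
    v1 = np.cross(line_through(*pair1[0]), line_through(*pair1[1]))
    v2 = np.cross(line_through(*pair2[0]), line_through(*pair2[1]))
    # The image of the line at infinity passes through both vanishing points.
    a, b, c = np.cross(v1, v2)          # a*x + b*y + c = 0
    cx, cy = image_center
    return abs(a * cx + b * cy + c) / np.hypot(a, b)

For a purely affine image the two vanishing points lie at infinity, so the returned distance is effectively infinite; the smaller it gets, the stronger the perspective effect.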

            Source https://stackoverflow.com/questions/67963004

            QUESTION

            OpenCV Object detection with Feature Detection and Homography
            Asked 2021-Jun-07 at 22:09

            I am trying to check if this image:

            is contained inside images like this one:

I am using feature detection (SURF) and homography because template matching is not scale invariant. Sadly, all the keypoints except a few are in the wrong positions. Should I maybe try template matching, scaling the image multiple times? If so, what would be the best approach for scaling the image?

            Code:

            ...

            ANSWER

            Answered 2021-Jun-05 at 13:41

If looking for specific colors is an option, you can rely on segmentation to find candidates quickly, regardless of the size. But you'll have to add some post-filtering.
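A minimal sketch of that idea, assuming the template's dominant color is roughly known and expressed as an HSV range (the file name and bounds below are placeholders):

import cv2
import numpy as np

img = cv2.imread("scene.jpg")                      # image to search in
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Placeholder HSV bounds for the color being looked for.
lower = np.array([100, 80, 80])
upper = np.array([130, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Clean the mask a little and keep reasonably sized blobs as candidates.
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 200]

Each candidate bounding box can then be verified with a cheaper check (template matching at the candidate's scale, for instance), which is the post-filtering mentioned above.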

            Source https://stackoverflow.com/questions/67849715

            QUESTION

            Differences between two images with slightly different point of view and lighting conditions with OpenCV
            Asked 2021-Jun-04 at 01:53

            With the method explained in CV - Extract differences between two images we can identify the differences between two aligned images.

            How to do this with OpenCV when the camera angle (point of view) and the lighting condition are slightly different?

            The code from How to match and align two images using SURF features (Python OpenCV )? helps to rotate / align the two images but as the result of the perspective transform ("homography") is not perfect, the "difference" algorithm will not work well here.

            As an example, how to get only the green sticker (= the difference) from these 2 photos?

            ...

            ANSWER

            Answered 2021-Jun-02 at 17:39

            The blue and green in those images are really close to each other color-wise ([80,95] vs [97, 101] on the Hue Channel). Unfortunately light-blue and green are right next to each other as colors. I tried it in both the HSV and LAB color spaces to see if I could get better separation in one vs the other.

            I aligned the images using feature matching as you mentioned. We can see that the perspective difference causes bits of the candy to poke out (the blue bits)

            I made a mask based on the pixel-wise difference in color between the two.

There are a lot of bits sticking out because the images don't line up perfectly. To help deal with this, we can also check a square region around each pixel to see if any of its nearby neighbors match its color. If one does, we remove that pixel from the mask.

            We can use this to paint on the original image to mark what's different.

            Here's the results from the LAB version of the code

            I'll include both versions of the code here. They're interactive with 'WASD' to change the two parameters (color margin and fuzz margin). The color_margin represents how different two colors have to be to no longer be considered the same. The fuzz_margin is how far to look around the pixel for a matching color.

            lab_version.py
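The full lab_version.py is not reproduced here; the following is only a rough sketch of the approach described above (aligned input images assumed, placeholder file names and margin values, LAB conversion via OpenCV), not the poster's actual code:

import cv2
import numpy as np

# Hypothetical parameter values; the poster tuned these interactively with WASD.
color_margin = 20   # how different two colors must be to count as "different"
fuzz_margin = 5     # how far (in pixels) to search for a matching color

img1 = cv2.cvtColor(cv2.imread("aligned_1.png"), cv2.COLOR_BGR2LAB).astype(np.float32)
img2 = cv2.cvtColor(cv2.imread("aligned_2.png"), cv2.COLOR_BGR2LAB).astype(np.float32)

# Pixel-wise color difference mask.
diff = np.linalg.norm(img1 - img2, axis=2)
mask = (diff > color_margin).astype(np.uint8)

# Fuzz step: drop a masked pixel if any pixel within fuzz_margin in the other
# image matches its color (np.roll wraps at the borders, good enough for a sketch).
for dy in range(-fuzz_margin, fuzz_margin + 1):
    for dx in range(-fuzz_margin, fuzz_margin + 1):
        shifted = np.roll(img2, (dy, dx), axis=(0, 1))
        close = np.linalg.norm(img1 - shifted, axis=2) <= color_margin
        mask[close] = 0

# Paint the remaining differences onto the original image in red.
result = cv2.imread("aligned_1.png")
result[mask.astype(bool)] = (0, 0, 255)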

            Source https://stackoverflow.com/questions/67736244

            QUESTION

            How to compute the result after two perspective transformations?
            Asked 2021-May-21 at 23:24

I am doing an image stitching project using OpenCV. I have the homography H1 between img1 and img2, and the homography H2 between img2 and img3. Now I need to compute the homography between img1 and img3; simply multiplying H1*H2 is not working.

Any ideas on how to calculate the new homography between img1 and img3?

            ...

            ANSWER

            Answered 2021-May-21 at 19:42

For me, computing H1 * H2 works well and gives the right results. Here, H1 = H2_1 since it warps from image2 to image1, and H2 = H3_2 since it warps from image3 to image2, so H1 * H2 = H3_1 since it warps from image3 to image1.
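A minimal sketch of that composition (the identity matrices below are placeholders; in practice the homographies come from cv2.findHomography):

import cv2
import numpy as np

img3 = cv2.imread("img3.jpg")

# H21 warps img2 into img1's frame, H32 warps img3 into img2's frame.
H21 = np.eye(3)
H32 = np.eye(3)

# Warping img3 -> img2 -> img1 is a single homography: the matrix product.
H31 = H21 @ H32
H31 /= H31[2, 2]          # normalize the arbitrary projective scale

out_h, out_w = img3.shape[:2]
warped3 = cv2.warpPerspective(img3, H31, (out_w, out_h))

Note the order: since points transform as x1 = H21 @ (H32 @ x3), the matrix that warps img2 into img1 goes on the left. Multiplying in the opposite order is a common reason "H1*H2 is not working".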

            Source https://stackoverflow.com/questions/67639340

            QUESTION

            OpenCV(4.1.0) error: (-215:Assertion failed) y0 - 6 * scale >= 0 && y0 + 6 * scale < Lx.rows
            Asked 2021-May-20 at 13:55

I am following this tutorial on image alignment via OpenCV. There was no face detection part, so I added it myself.

            ...

            ANSWER

            Answered 2021-May-20 at 13:55

MAX_FEATURES is not a valid argument for AKAZE_create.

            See AKAZE_create documentation:

            retval = cv.AKAZE_create( [, descriptor_type[, descriptor_size[, descriptor_channels[, threshold[, nOctaves[, nOctaveLayers[, diffusivity]]]]]]] )

            Replace akaze = cv2.AKAZE_create(MAX_FEATURES) with:
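The replacement line is not shown above; most likely it is simply the call without the positional argument:

akaze = cv2.AKAZE_create()

All of AKAZE_create's parameters are optional, so if a detection threshold is wanted it can be passed by keyword (for example, cv2.AKAZE_create(threshold=0.001)).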

            Source https://stackoverflow.com/questions/67616696

            QUESTION

            Understanding OpenCV homography at a minimum of points
            Asked 2021-May-13 at 07:21

I am quite intrigued by the idea of a homography and am trying to get it to work in a minimal example with Python and OpenCV. Yet my tests do not pass and I am not quite sure why. I pass a set of corresponding points into the findHomography function, following this, and then multiply by the homography matrix to obtain my new point.

So the idea behind it is to find the planar coordinate transformation and then transform the points with

X' = H@X

where X' are the transformed coordinates and X are the coordinates in the original frame.

            Here is some minimal code example:

            ...

            ANSWER

            Answered 2021-May-13 at 07:21

            As Micka mentioned in the comments, the problem is the representation of the test points.
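A hedged sketch of that fix (placeholder point sets): the 2D test points have to be lifted to homogeneous coordinates before multiplying by H, and divided by the last component afterwards; cv2.perspectiveTransform does the same thing but expects shape (N, 1, 2).

import cv2
import numpy as np

src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=np.float64)
dst = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], dtype=np.float64)

H, _ = cv2.findHomography(src, dst)

# Apply H by hand: lift to homogeneous coordinates, multiply, then normalize.
x = np.array([0.5, 0.5, 1.0])
x_prime = H @ x
x_prime = x_prime[:2] / x_prime[2]        # -> approximately (1.0, 1.0)

# The same mapping via OpenCV, which handles the normalization internally.
pts = np.array([[[0.5, 0.5]]], dtype=np.float64)
x_cv = cv2.perspectiveTransform(pts, H)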

            Source https://stackoverflow.com/questions/67508515

            QUESTION

            Map an object from one image to another image using openCV and Python
            Asked 2021-Apr-26 at 19:02

            This is a problem concerning stereo calibration and rectification using openCV (vers. 4.5.1.48) and Python (vers. 3.8.5).

            I have two cameras placed on the same axis as shown on the image below:

The left (upper) camera takes pictures at 640x480 resolution, while the right (lower) camera takes pictures at 320x240 resolution. The goal is to find an object in the right image (320x240) and crop out the same object in the left image (640x480). In other words: to transfer the rectangle that makes up the object in the right image to the left image. This idea is sketched below.

A red object is found in the right image and I need to transfer its location to the left image and crop it out. The object is placed on a flat plane 30cm from the camera lenses. In other words, the distance (depth) from the two camera lenses to the flat plane is constant (30cm).

The main question is about how to transfer a location from one image to another when two cameras are placed side by side, the images are of different resolutions, and the depth is (fairly) constant. It is not a question about finding objects.

            To solve this problem, as far as I know, stereo calibration must be used, and I have found the following articles/code, among other things:

Below is an example of the calibration pattern that I used:

I have 25 photos of the calibration pattern with the left and right camera. The pattern is 5x9 and the square size is 40x40 mm.

            Based on my knowledge, I have written the following code:

            ...

            ANSWER

            Answered 2021-Apr-26 at 19:02

            I solved this problem by using the following openCV functions:

            • cv2.findChessboardCorners()
            • cv2.cornerSubPix()
            • cv2.findHomography()
            • cv2.warpPerspective()

            I used the calibration plate at a distance of 30cm to calculate the perspective transformation matrix, H. Because of this, I can map an object from the right image to the left image. The depth has to be constant (30 cm) though, which is a bit problematic, but it is acceptable in my case.
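A rough sketch of that pipeline, assuming a single pair of calibration images taken at the working distance (file names, pattern size, and the detected rectangle below are placeholders):

import cv2
import numpy as np

pattern = (9, 5)   # inner-corner count of the calibration pattern (placeholder)

left = cv2.imread("left_640x480.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_320x240.png", cv2.IMREAD_GRAYSCALE)

ok_l, corners_l = cv2.findChessboardCorners(left, pattern)
ok_r, corners_r = cv2.findChessboardCorners(right, pattern)
assert ok_l and ok_r, "pattern not found in one of the images"

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
corners_l = cv2.cornerSubPix(left, corners_l, (11, 11), (-1, -1), criteria)
corners_r = cv2.cornerSubPix(right, corners_r, (11, 11), (-1, -1), criteria)

# Homography mapping right-image coordinates to left-image coordinates,
# valid only at (roughly) the calibration distance of 30 cm.
H, _ = cv2.findHomography(corners_r, corners_l, cv2.RANSAC)

# Map the corners of a detected object rectangle (x, y, w, h) from right to left.
x, y, w, h = 100, 80, 40, 30   # placeholder detection on the right image
rect = np.float32([[[x, y]], [[x + w, y]], [[x + w, y + h]], [[x, y + h]]])
rect_left = cv2.perspectiveTransform(rect, H)

The transformed corners can then be used to crop (or warpPerspective) the corresponding region out of the left image.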

            Thanks to @Micka for the great answers.

            Source https://stackoverflow.com/questions/67226475

            QUESTION

            How does perspective transformation work? [OpenCV python]
            Asked 2021-Apr-19 at 16:21

            Part of my dissertation involves rectifying a section of an image that is not in perfect frontal perspective. I can't add images because of my low score, but it's essentially this python code (with images).

            I've already built the program and it works, but I don't understand the math behind it. I've looked into matrix image rectification and homography, but there is a huge skill gap to the point I can't understand any of it. I'm not sure where to start learning the math. I know the basics of matrices and that's about it. I want to get to the point where I can code the matrix transformation function myself.

In a nutshell, what do I need to "study" to understand the math?

            Thank you.

            ...

            ANSWER

            Answered 2021-Apr-19 at 15:34

            I'm sure it's impossible, but you can try. In fact, projective geometry is a deep mathematical concept that is studied in mathematical specialties of universities. If you really want to understand it well, you have to become a bit of a mathematician. Projective geometry is studied after linear algebra, analytic geometry and affine geometry, in this sequence. But I doubt you need it all. You can consider a simplified approach focused on image processing, such as here and here

            Source https://stackoverflow.com/questions/67162000

            QUESTION

            OpenCV, Python: Perspective warping problem in aerial image stitching
            Asked 2021-Mar-09 at 09:54

Currently, I'm working on image stitching of aerial footage. I'm using the dataset obtained from OrchardDataset. First of all, thanks to some great answers on stackoverflow, especially the answers from @alkasm (here and here). But I am having an issue, as you can see below in the Gap within the stitched image section.

I used H21, H31, H41, etc. to warp the images. The stitched image using H21 is excellent, but when warping img3 onto the current stitched image using H31, the result shows terrible alignment between img3 and the current stitched image. The more images I warp, the bigger the gap gets and the worse the images are aligned.

Does the brilliant stackoverflow community have any ideas on how I can solve this problem?

            These are the steps I use to stitch the images:

1. Extract a frame every second from the footage and undistort it to get rid of the fish-eye effect using the provided camera calibration matrix.
2. Compute the SIFT feature descriptors. Set up a matcher using a FLANN kd-tree and find matches between the images. Find the homographies (H21, H32, H43, etc., where H21 refers to the homography which warps img2 into the coordinates of img1).
3. Compose each homography with the previous homographies to get the net homography, using the method suggested here (compute H31, H41, H51, etc.).
4. Warp the images using the answer provided here.

            Gap within the stitched image:

I'm using the first 10 images obtained from OrchardDataSet.

            Stitched Image with Gaps

            Here's portion of my script:

            main.py

ref_img is the first frame (img1). AdjHomoSet contains the images to be warped (img2, img3, img4, etc.). AccHomoSet contains the net homographies (H31, H41, H51, etc.).

            ...

            ANSWER

            Answered 2021-Mar-09 at 09:54

Eventually I changed the way of warping the images, using the approach provided by Jahaniam's Real Time Video Mosaic. He places the reference image in the middle of a blank image of preset size, computes the subsequent homographies, and warps the adjacent images onto the reference image.
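A minimal sketch of that canvas-centered warping idea (canvas size, file names, and the net homography below are placeholders):

import cv2
import numpy as np

canvas_w, canvas_h = 4000, 3000               # preset blank canvas size (placeholder)
ref = cv2.imread("img1.jpg")                   # reference frame
img = cv2.imread("img3.jpg")                   # an adjacent frame to add
H_to_ref = np.eye(3)                           # net homography img -> reference (placeholder)

# Translation that puts the reference image at the middle of the canvas.
tx = (canvas_w - ref.shape[1]) // 2
ty = (canvas_h - ref.shape[0]) // 2
T = np.array([[1, 0, tx],
              [0, 1, ty],
              [0, 0, 1]], dtype=np.float64)

canvas = np.zeros((canvas_h, canvas_w, 3), dtype=np.uint8)
canvas[ty:ty + ref.shape[0], tx:tx + ref.shape[1]] = ref

# Warp the adjacent image with the offset composed in, then paste its non-black pixels.
warped = cv2.warpPerspective(img, T @ H_to_ref, (canvas_w, canvas_h))
mask = warped.any(axis=2)
canvas[mask] = warped[mask]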

            Example of stitched image

            Source https://stackoverflow.com/questions/66420366

            QUESTION

            When using a DLT algorithm to estimate a homography, would using more points result in more or less error?
            Asked 2021-Mar-01 at 06:55

On page 116 of Multiple View Geometry, the graphs compare the DLT error (solid circle), Gold Standard/reprojection error (dashed square), and theoretical error (dashed diamond) methods of estimating a homography between figure (a) and the original square chessboard. Figure (b) shows that when you use more point correspondences to do such an estimation, there is a higher residual error (see figure below). This doesn't make sense to me intuitively: shouldn't more point correspondences result in a better estimation?

            ...

            ANSWER

            Answered 2021-Mar-01 at 06:55

            The residual error is the sum of the residuals at each point, so of course it grows with the number of points. However, for an unbiased algorithm such as the Gold Standard one, and a given level of i.i.d. noise, the curve flattens because the contribution of each additional point to the sum counts less and less toward the total.
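A tiny sketch of that distinction, using synthetic correspondences generated from a placeholder homography with i.i.d. noise: the summed residual grows with the number of points, while the per-point average levels off.

import cv2
import numpy as np

rng = np.random.default_rng(0)
H_true = np.array([[1.0, 0.1, 5.0],
                   [0.0, 1.1, -3.0],
                   [0.0, 0.0, 1.0]])

for n in (10, 50, 200, 1000):
    src = rng.uniform(0, 100, (n, 2))
    dst = cv2.perspectiveTransform(src.reshape(-1, 1, 2), H_true).reshape(-1, 2)
    dst += rng.normal(0, 0.5, dst.shape)                  # i.i.d. noise
    H, _ = cv2.findHomography(src, dst)
    proj = cv2.perspectiveTransform(src.reshape(-1, 1, 2), H).reshape(-1, 2)
    res = np.linalg.norm(proj - dst, axis=1) ** 2
    print(n, res.sum(), res.mean())                       # sum grows, mean flattens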

            Source https://stackoverflow.com/questions/66405035

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install homography

            You can install using 'pip install homography' or download it from GitHub, PyPI.
            You can use homography like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            Install
          • PyPI

            pip install homography

          • CLONE (HTTPS)

            https://github.com/satellogic/homography.git

          • CLONE (GitHub CLI)

            gh repo clone satellogic/homography

          • CLONE (SSH)

            git@github.com:satellogic/homography.git
