OpenCV findHomography Issue

I am working on a Panography / Panorama application in OpenCV, and I ran into a problem that I really can't understand. For an introduction to what this kind of photo looks like, see the Wikipedia article on Panography: http://en.wikipedia.org/wiki/Panography

While I can take several images and stitch them together, using any image I like as the reference image, here is a little taste of what I mean.

An example Panography image I've created

However, as you can see, it has a lot of problems. The primary one I come across is that the images get cropped (see the far-right image and the upper parts of the images). To show why this happens, I will draw the points that were matched and draw the lines showing where the transformed image will end up:

The image matches

Here the left image is the reference image, and the right image is the image after its translation (the original is below); I drew the green lines to outline the transformed image. It has the following corner points:

TL: [234.759, -117.696] TR: [852.226, -38.9487] BR: [764.368, 374.84] BL: [176.381, 259.953] 

Thus, the main problem is that after the perspective has been changed, the image:

Original image

suffers from losses:

Cut up image

Now enough images, code.

I use cv::SurfFeatureDetector, cv::SurfDescriptorExtractor and cv::FlannBasedMatcher to get all of these points, and I calculate the matches and, more importantly, the good matches as follows:

    /* initialised before the loop, as in the linked tutorial */
    double max_dist = 0;
    double min_dist = 100;

    /* calculate the min/max distances over the matches */
    for (int i = 0; i < descriptors_thisImage.rows; i++) {
        double dist = matches[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }

    /* keep only the good matches */
    for (int i = 0; i < descriptors_thisImage.rows; i++) {
        if (matches[i].distance < 3 * min_dist) {
            good_matches.push_back(matches[i]);
        }
    }

This is pretty standard, and for that I followed the tutorial found here: http://opencv.itseez.com/trunk/doc/tutorials/features2d/feature_homography/feature_homography.html

To copy one image on top of the other, I use the following method (where img1 and img2 are std::vector<cv::Point2f>):

    /* set the keypoints from the good matches */
    for (int i = 0; i < good_matches.size(); i++) {
        img1.push_back(keypoints_thisImage[good_matches[i].queryIdx].pt);
        img2.push_back(keypoints_referenceImage[good_matches[i].trainIdx].pt);
    }

    /* calculate the homography */
    cv::Mat H = cv::findHomography(cv::Mat(img1), cv::Mat(img2), CV_RANSAC);

    /* warp the image */
    cv::warpPerspective(thisImage, thisTransformed, H,
                        cv::Size(thisImage.cols * 2, thisImage.rows * 2), cv::INTER_CUBIC);

    /* place the contents of thisImage in gsThisImage */
    thisImage.copyTo(gsThisImage);

    /* set every pixel of gsThisImage to 255 */
    for (int i = 0; i < gsThisImage.rows; i++) {
        cv::Vec3b *p = gsThisImage.ptr<cv::Vec3b>(i);
        for (int j = 0; j < gsThisImage.cols; j++) {
            for (int grb = 0; grb < 3; grb++) {
                p[j][grb] = cv::saturate_cast<uchar>(255.0f);
            }
        }
    }

    /* convert the colour image to greyscale */
    cv::cvtColor(gsThisImage, gsThisImage, CV_BGR2GRAY);

    /* warp the greyscale image to create an image mask */
    cv::warpPerspective(gsThisImage, thisMask, H,
                        cv::Size(thisImage.cols * 2, thisImage.rows * 2), cv::INTER_CUBIC);

    /* stitch the transformed image onto the reference image */
    thisTransformed.copyTo(referenceImage, thisMask);

So, I have the coordinates of where the warped image will end up, and I have the points that produce the homography matrix used for these transformations, but I cannot figure out how to translate these images so that they are not cut off. Any help or pointers are greatly appreciated!

2 answers

First, why didn't you use the recently added stitching module? It does exactly what you are trying to do.

Secondly, if you want to continue with your own code, fixing this is easy. In the homography matrix, the translation is given by the values in the last column.

    a11 a12 a13 t1
    a21 a22 a23 t2
    a31 a32 a33 t3
    a41 a42 a43  1

(If you have a 3x3 matrix, you drop the a13..a43 column and the a41..1 row; a33 will (should) become 1.)

So, you need to figure out what to put in the last column so that the images line up.

Also check this post, which explains (for a somewhat opposite problem) how to build a homography when you know the camera parameters. It will help you understand the role of the matrix values:

Opencv with almost a camera rotating / aerial view

And note that everything I told you about the last column is approximate, because the values in the last column are actually the translation combined with some other (secondary) factors.


Once you have found the matrices, you just need to compute the transformed positions of the corners and collect the minimum and maximum x and y values over the transformed points.

Once you have this bounding box, just translate all the matrices by (-xmin, -ymin), allocate an output image (xmax - xmin) wide and (ymax - ymin) tall for the result, and then draw all the transformed images into it.

With this approach, you will have black areas around the stitching, but no cropping.

Automatically finding the largest rectangle contained in the stitched region instead (to get a complete merged image with no black areas and minimal cropping) is pretty annoying to implement.

