I am working on a Panography / Panorama application in OpenCV, and I ran into a problem I really can't figure out. For an introduction to what a panography looks like, see the Wikipedia article on Panography: http://en.wikipedia.org/wiki/Panography
While I can take multiple images and stitch them together, using whichever image I like as the reference, here is a little taste of what I mean.

However, as you can see, it has a lot of problems. The primary one is that the images get cropped (see the far-right image and the upper part of the images). To highlight why this happens, I'll draw the points that have been matched, and draw the lines showing where the transformation will end up:

With the left image as the reference image and the right image as the image after its transformation (the original is below), I've drawn green lines to outline where the transformed image ends up. It has the following corner points:
TL: [234.759, -117.696]
TR: [852.226, -38.9487]
BR: [764.368, 374.84]
BL: [176.381, 259.953]
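For reference, these corner points can be obtained by pushing the four corners of the source image through the homography H that is computed further down; here is a minimal sketch (variable names are mine):

    // Push the four source corners through H to see where they land.
    std::vector<cv::Point2f> corners(4), warpedCorners(4);
    corners[0] = cv::Point2f(0, 0);                           // TL
    corners[1] = cv::Point2f(thisImage.cols, 0);              // TR
    corners[2] = cv::Point2f(thisImage.cols, thisImage.rows); // BR
    corners[3] = cv::Point2f(0, thisImage.rows);              // BL
    cv::perspectiveTransform(corners, warpedCorners, H);
    // warpedCorners[0].y is negative here (-117.696): that strip lies
    // outside the canvas, which is exactly what gets cropped.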
So, the main problem is that after the perspective transform has been applied, the image:

suffers from losses, like so:

Now, enough images; on to the code.
I use cv::SurfFeatureDetector, cv::SurfDescriptorExtractor, and cv::FlannBasedMatcher to get all of these points, and I calculate the matches and, more importantly, the good matches by doing the following:
    double max_dist = 0;
    double min_dist = 100;

    // Find the smallest and largest descriptor distances
    for(int i = 0; i < descriptors_thisImage.rows; i++) {
        double dist = matches[i].distance;
        if(dist < min_dist) min_dist = dist;
        if(dist > max_dist) max_dist = dist;
    }

    // Keep only matches whose distance is less than 3 * min_dist
    for(int i = 0; i < descriptors_thisImage.rows; i++) {
        if(matches[i].distance < 3 * min_dist) {
            good_matches.push_back(matches[i]);
        }
    }
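For context, the detection and matching that produce matches and the descriptors above look roughly like this (the standard OpenCV 2.x pattern; the Hessian threshold of 400 is just an example value):

    // Detect SURF keypoints in both images (400 = example Hessian threshold)
    cv::SurfFeatureDetector detector(400);
    std::vector<cv::KeyPoint> keypoints_thisImage, keypoints_referenceImage;
    detector.detect(thisImage, keypoints_thisImage);
    detector.detect(referenceImage, keypoints_referenceImage);

    // Compute SURF descriptors for the detected keypoints
    cv::SurfDescriptorExtractor extractor;
    cv::Mat descriptors_thisImage, descriptors_referenceImage;
    extractor.compute(thisImage, keypoints_thisImage, descriptors_thisImage);
    extractor.compute(referenceImage, keypoints_referenceImage, descriptors_referenceImage);

    // Match descriptors with FLANN
    cv::FlannBasedMatcher matcher;
    std::vector<cv::DMatch> matches;
    matcher.match(descriptors_thisImage, descriptors_referenceImage, matches);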
This is pretty standard, and for that I followed the tutorial found here: http://opencv.itseez.com/trunk/doc/tutorials/features2d/feature_homography/feature_homography.html
To copy the images on top of each other, I use the following method (where img1 and img2 are std::vector<cv::Point2f>):
    // Collect the matched point pairs
    for( int i = 0; i < good_matches.size(); i++ ) {
        img1.push_back( keypoints_thisImage[ good_matches[i].queryIdx ].pt );
        img2.push_back( keypoints_referenceImage[ good_matches[i].trainIdx ].pt );
    }

    // Homography mapping thisImage into the reference image's frame
    cv::Mat H = cv::findHomography(cv::Mat(img1), cv::Mat(img2), CV_RANSAC);
    cv::warpPerspective(thisImage, thisTransformed, H,
                        cv::Size(thisImage.cols * 2, thisImage.rows * 2),
                        cv::INTER_CUBIC);

    // Build a mask: paint a copy of the image white, then warp it the same way
    thisImage.copyTo(gsThisImage);
    for(int i = 0; i < gsThisImage.rows; i++) {
        cv::Vec3b *p = gsThisImage.ptr<cv::Vec3b>(i);
        for(int j = 0; j < gsThisImage.cols; j++) {
            for( int grb = 0; grb < 3; grb++ ) {
                p[j][grb] = cv::saturate_cast<uchar>( 255.0f );
            }
        }
    }
    cv::cvtColor(gsThisImage, gsThisImage, CV_BGR2GRAY);
    cv::warpPerspective(gsThisImage, thisMask, H,
                        cv::Size(thisImage.cols * 2, thisImage.rows * 2),
                        cv::INTER_CUBIC);

    // Paste the warped image onto the reference using the warped mask
    thisTransformed.copyTo(referenceImage, thisMask);
So, I have the coordinates of where the warped image will end up, and I have the points that produce the homography matrix used for these transformations, but I can't figure out how I should translate these images so they don't get cropped.
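To make the question concrete, what I imagine is needed is something along these lines (a rough sketch; offsetX and offsetY are placeholders for whatever translation would keep all the warped corners inside the canvas, which is exactly the part I can't work out):

    // Placeholder offsets; computing these per image is the open question
    double offsetX = 120.0, offsetY = 120.0;
    // Pre-multiply H with a translation so the warp lands in positive coords
    cv::Mat T = (cv::Mat_<double>(3, 3) << 1, 0, offsetX,
                                           0, 1, offsetY,
                                           0, 0, 1);
    cv::warpPerspective(thisImage, thisTransformed, T * H,
                        cv::Size(thisImage.cols * 2, thisImage.rows * 2),
                        cv::INTER_CUBIC);

Any help or pointers are greatly appreciated!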