Matching images with different orientations and scales in MATLAB

I have two images that are similar but different in orientation and size. The following is an example:

[two example images of the same subject at different orientations and sizes]

Is there a way to map one image onto the other?

I have used Procrustes shape analysis, but are there any other ways?


Check out the Find Image Rotation and Scale Using Automated Feature Matching example in the Computer Vision System Toolbox.


It shows how to detect interest points, extract and match feature descriptors, and compute the transformation between the two images.
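
For reference, the last step of that example recovers the scale and rotation angle directly from the estimated transformation. A minimal sketch of that step, assuming a similarity transformation tform has already been estimated from matched points (for example with estimateGeometricTransform, as in the answer below):

 % Sketch: recover scale and angle from an estimated similarity transform
 % 'tform' (assumes tform was computed from matched points beforehand).
 Tinv = tform.invert.T;                    % invert the geometric transform
 ss = Tinv(2,1);                           % = scale * sin(theta)
 sc = Tinv(1,1);                           % = scale * cos(theta)
 scaleRecovered = sqrt(ss*ss + sc*sc)      % recovered scale factor
 thetaRecovered = atan2(ss, sc) * 180/pi   % recovered rotation in degrees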


Here's something to get you started. What you are asking for is a classic problem known as image registration. Image registration seeks the homography that takes one image and aligns it with the other. This involves finding interest points (key points) in both images and determining which key points correspond between the two images. Once you have these pairs of points, you estimate a homography matrix and warp one of the images with that matrix so that it aligns with the other.

I am going to assume that you have the Computer Vision and Image Processing Toolboxes that come with MATLAB. If you do not, then the answer that Maurits gave is a good alternative, and the VLFeat toolbox is one that I have also used.

First, let's read the images directly from StackOverflow:

 im = imread('http://i.stack.imgur.com/vXqe8.png');
 im2 = imread('http://i.stack.imgur.com/Pd7pt.png');
 im_gray = rgb2gray(im);
 im2_gray = rgb2gray(im2);

We also convert to grayscale, because the key-point detection algorithms require a grayscale image. We can then use any feature detection algorithm that is part of MATLAB's Computer Vision System Toolbox (CVST). I am going to use SURF, since it is essentially the same as SIFT but with some minor yet important differences. You can use the detectSURFFeatures function, which is part of the CVST toolbox; it accepts a grayscale image. The output is a structure that contains a bunch of information about each feature point the algorithm found in the image. Let's apply it to both (grayscale) images.

 points = detectSURFFeatures(im_gray);
 points2 = detectSURFFeatures(im2_gray);

Once we have detected the features, it is time to extract the descriptors that describe these key points. This can be done with extractFeatures. It takes a grayscale image and the corresponding structure output by detectSURFFeatures. The result is a set of features and the valid key points that remain after some post-processing.

 [features1, validPoints1] = extractFeatures(im_gray, points);
 [features2, validPoints2] = extractFeatures(im2_gray, points2);

Now it's time to match the features between the two images. This can be done with matchFeatures, which takes the features from both images:

 indexPairs = matchFeatures(features1, features2); 

indexPairs is a 2D array where each row is a pair of matched indices: the first column is the index of a feature from the first image, and the second column is the index of the corresponding feature from the second image. We will use these to index into our valid points and determine which points actually matched.

 matchedPoints1 = validPoints1(indexPairs(:, 1), :);
 matchedPoints2 = validPoints2(indexPairs(:, 2), :);

We can then show which points matched using showMatchedFeatures. We place both images next to each other and draw lines between the corresponding key points to see which ones match.

 figure;
 showMatchedFeatures(im, im2, matchedPoints1, matchedPoints2, 'montage');

This is what I get:

[montage of the matched feature points between the two images]

It is not perfect, but it certainly finds consistent matches between the two images.

Now we need to do the following: estimate the homography matrix and warp the image. I am going to use estimateGeometricTransform so that we can find the transformation that maps one set of points onto the other. As Dima noted in the comments below, it robustly estimates the best homography matrix using RANSAC. We can call estimateGeometricTransform as follows:

 tform = estimateGeometricTransform(matchedPoints1.Location,...
     matchedPoints2.Location, 'projective');

The first input takes the set of input points, i.e. the points you want to transform. The second input takes the set of base points, which serve as the reference; these are the points we want the first set to match.

In our case, we want to warp the points from the first image (the person standing up) so that they coincide with those of the second image (the person leaning over); therefore the first input is the points from the first image and the second input is the points from the second image.

For the matched points, we refer to the Location field because it contains the coordinates of the points that actually matched between the two images. We also use 'projective' to account for scale, translation and rotation. The output is a structure containing our point transformation.
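
As an optional variant (not part of the code above), estimateGeometricTransform can also return the inlier points that survived the RANSAC step; if you pass the matched point objects themselves rather than their Location fields, you can then display only the geometrically consistent matches:

 % Optional: also return the RANSAC inliers and show only those matches.
 [tform, inlierPoints1, inlierPoints2] = estimateGeometricTransform(...
     matchedPoints1, matchedPoints2, 'projective');
 figure;
 showMatchedFeatures(im, im2, inlierPoints1, inlierPoints2, 'montage');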

Next, we use imwarp to warp the first image so that it aligns with the second.

 out = imwarp(im, tform); 

out will contain our warped image. If we display the second image and this output image side by side:

 figure;
 subplot(1,2,1);
 imshow(im2);
 subplot(1,2,2);
 imshow(out);

This is what we get:

[the second image next to the warped first image]

I would say not bad, right?
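
One refinement you may want (not done above): by default imwarp places the warped image in its own coordinate frame, so it is not pixel-aligned with im2. A sketch, assuming the same variables as above, that warps into the second image's frame and overlays the pair:

 % Warp into the coordinate frame of the second image so the two results
 % can be overlaid directly.
 outputView = imref2d(size(im2_gray));               % frame of the 2nd image
 outAligned = imwarp(im, tform, 'OutputView', outputView);
 figure;
 imshowpair(im2, outAligned, 'blend');               % blended overlay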


For your copy-and-paste fun, here is what the complete code looks like:

 im = imread('http://i.stack.imgur.com/vXqe8.png');
 im2 = imread('http://i.stack.imgur.com/Pd7pt.png');
 im_gray = rgb2gray(im);
 im2_gray = rgb2gray(im2);
 points = detectSURFFeatures(im_gray);
 points2 = detectSURFFeatures(im2_gray);
 [features1, validPoints1] = extractFeatures(im_gray, points);
 [features2, validPoints2] = extractFeatures(im2_gray, points2);
 indexPairs = matchFeatures(features1, features2);
 matchedPoints1 = validPoints1(indexPairs(:, 1), :);
 matchedPoints2 = validPoints2(indexPairs(:, 2), :);
 figure;
 showMatchedFeatures(im, im2, matchedPoints1, matchedPoints2, 'montage');
 tform = estimateGeometricTransform(matchedPoints1.Location,...
     matchedPoints2.Location, 'projective');
 out = imwarp(im, tform);
 figure;
 subplot(1,2,1);
 imshow(im2);
 subplot(1,2,2);
 imshow(out);

One more thing

Keep in mind that I used the default options everywhere: detectSURFFeatures, matchFeatures, and so on. You may need to play around with the options to get consistent results across the different pairs of images you are trying to register. I will leave that to you as an exercise. Take a look at the documentation linked above for each of the functions so that you can adjust the options to your liking.
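
For example, here are two of the knobs you might experiment with; the values below are only illustrative guesses, not tuned for these particular images:

 % Illustrative option tweaks (re-run extractFeatures/matchFeatures after
 % changing the detector settings): a lower MetricThreshold keeps more,
 % weaker SURF blobs; a smaller MaxRatio makes the ratio test stricter.
 points  = detectSURFFeatures(im_gray,  'MetricThreshold', 500);
 points2 = detectSURFFeatures(im2_gray, 'MetricThreshold', 500);
 indexPairs = matchFeatures(features1, features2, 'MaxRatio', 0.5);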


Good luck and have fun!


You can get a reasonable result by following these steps:

  • Detect SIFT points in both images, for example with the VLFeat library.
  • Match the SIFT descriptors using Lowe's ratio test, implemented in VLFeat as vl_ubcmatch (a short sketch of these first two steps follows the list).
  • Use RANSAC to find the subset of matching points that is consistent with some homography H. An example using Harris interest points can be found on Peter Kovesi's website, alongside an implementation of RANSAC itself and other useful functions.
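
A rough sketch of the first two steps, assuming VLFeat is installed and set up on the MATLAB path (the 1.5 threshold is the library's default for the ratio test):

 % Sketch of steps 1-2 with VLFeat (assumes vl_setup has been run).
 im1g = single(rgb2gray(imread('http://i.stack.imgur.com/vXqe8.png')));
 im2g = single(rgb2gray(imread('http://i.stack.imgur.com/Pd7pt.png')));
 [f1, d1] = vl_sift(im1g);             % frames [x; y; scale; angle] + descriptors
 [f2, d2] = vl_sift(im2g);
 matches = vl_ubcmatch(d1, d2, 1.5);   % Lowe's ratio test
 xy1 = f1(1:2, matches(1, :));         % matched coordinates in image 1
 xy2 = f2(1:2, matches(2, :));         % matched coordinates in image 2
 % xy1/xy2 then go into a RANSAC homography estimator (step 3).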

Result:

[image of the resulting point matches]


This is a completely different approach from the rest. Feature-based registration methods are more robust, but my approach may still be useful for your application, so I am writing it up here.

  • Load the reference image (I will call it the model) and the image that you want to compare with the model.
  • Compute the histograms of the two images.
  • Compare the two histograms. There are many ways to do this; here I will use intersection and correlation.
  • Histogram intersection: compute the histogram intersection and normalize it by dividing by the number of pixels in the model histogram. This gives you a value between 0 and 1.

     img = imread('Pd7pt.png');      % image to compare ('image' would shadow a built-in)
     model = imread('vXqe8.png');    % reference image (the model)
     grImage = rgb2gray(img);
     grModel = rgb2gray(model);
     hImage = imhist(grImage);
     hModel = imhist(grModel);
     % normalized histogram intersection (0..1) and correlation coefficient
     normhistInterMeasure = sum(min(hImage, hModel))/sum(hModel)
     corrMeasure = corr2(hImage, hModel)
  • For intersection and correlation, I get 0.2492 and 0.9999 respectively.
