Try using cv2.DescriptorMatcher_create for this.
For example, the following code uses pylab, but you'll get the idea ;)

It computes keypoints with GFTT, extracts SURF descriptors for them, and matches them with a FLANN-based matcher. The output of each code block is shown after it.
```
%pylab inline
import cv2
import numpy as np

img = cv2.imread('./img/nail.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
imshow(gray, cmap=cm.gray)
```
The result looks something like this: http://i.stack.imgur.com/8eOTe.png
(In this example I'll cheat and match the image against itself, so the keypoints and descriptors on both sides come from the same picture.)
```
img1 = gray
img2 = gray

detector = cv2.FeatureDetector_create("GFTT")
descriptor = cv2.DescriptorExtractor_create("SURF")
matcher = cv2.DescriptorMatcher_create("FlannBased")

# detect keypoints
kp1 = detector.detect(img1)
kp2 = detector.detect(img2)

print '#keypoints in image1: %d, image2: %d' % (len(kp1), len(kp2))
```
```
#keypoints in image1: 1000, image2: 1000
```
```
# descriptors
k1, d1 = descriptor.compute(img1, kp1)
k2, d2 = descriptor.compute(img2, kp2)

print '#Descriptors size in image1: %s, image2: %s' % (d1.shape, d2.shape)
```
```
#Descriptors size in image1: (1000, 64), image2: (1000, 64)
```
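The matching step itself seems to have been lost from the snippet, so here is a minimal sketch that produces numbers of the shape shown below. The half-of-mean distance threshold (and keeping ties with `<=`, so the zero-distance self-matches of this cheat example survive) is my assumption, not something given in the original:

```
# match the descriptors of image1 against those of image2
matches = matcher.match(d1, d2)
print '#matches: %d' % len(matches)

# distance statistics over all matches
dist = [m.distance for m in matches]
print 'distance: min: %.3f' % min(dist)
print 'distance: mean: %.3f' % (sum(dist) / len(dist))
print 'distance: max: %.3f' % max(dist)

# keep only the close matches; half the mean distance is an assumed
# threshold -- tune it for your images
thres_dist = (sum(dist) / len(dist)) * 0.5
sel_matches = [m for m in matches if m.distance <= thres_dist]
print '#selected matches: %d' % len(sel_matches)
```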
```
#matches: 1000
distance: min: 0.000
distance: mean: 0.000
distance: max: 0.000
#selected matches: 1000
```

Since the image is matched against itself, every descriptor finds itself at distance 0, so all 1000 matches survive the threshold.
```
# plot the selected matches on a side-by-side view of the two images
h1, w1 = img1.shape[:2]
h2, w2 = img2.shape[:2]
view = zeros((max(h1, h2), w1 + w2, 3), uint8)
view[:h1, :w1, 0] = img1
view[:h2, w1:, 0] = img2
view[:, :, 1] = view[:, :, 0]
view[:, :, 2] = view[:, :, 0]

for m in sel_matches:
    # draw a line between each matched pair of keypoints
    # print m.queryIdx, m.trainIdx, m.distance
    color = tuple([random.randint(0, 255) for _ in xrange(3)])
    pt1 = (int(k1[m.queryIdx].pt[0]), int(k1[m.queryIdx].pt[1]))
    # the right-hand endpoint must use trainIdx, i.e. the keypoint in image2
    pt2 = (int(k2[m.trainIdx].pt[0] + w1), int(k2[m.trainIdx].pt[1]))
    cv2.line(view, pt1, pt2, color)

imshow(view)
```
The result looks something like this: http://i.stack.imgur.com/8CqrJ.png
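To run the same pipeline on two genuinely different images instead of the self-matching cheat, only the loading step changes; the second file name here is a placeholder for your own image:

```
# hypothetical file names -- substitute your own pair of images
img1 = cv2.cvtColor(cv2.imread('./img/nail.jpg'), cv2.COLOR_BGR2GRAY)
img2 = cv2.cvtColor(cv2.imread('./img/nail2.jpg'), cv2.COLOR_BGR2GRAY)
# everything else (detect, compute, match, plot) stays the same
```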