How to find the corners of a Rect object in OpenCV?

I am using the OpenCV library on the Android platform. I have successfully detected the largest rectangle in the image, but since my application will be used for document scanning, I also want to be able to correct the perspective.

I know how to apply getPerspectiveTransform and warpPerspective, but for this I need the corners of the rectangle as the source points.

It seems very easy to find the corners, given that the Rect object stores the coordinates of the top-left corner together with its width and height. The problem is that for a rotated rectangle (still a normal bounding rect, but with sides not parallel to the axes) these values are very different: in that case the Rect stores the values of another rectangle, one whose sides are parallel to the axes and which covers the rotated rectangle, so I cannot recover the corners of the actual rectangle from it.
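
To make the problem concrete, here is a rough Java sketch of the two cases (the contour variable is illustrative, and the usual org.opencv.core / org.opencv.imgproc imports are assumed):

    // Axis-aligned case: the four corners follow directly from x, y, width, height.
    Rect box = Imgproc.boundingRect(contour);            // upright bounding box
    Point topLeft     = box.tl();                        // (x, y)
    Point topRight    = new Point(box.x + box.width, box.y);
    Point bottomRight = box.br();                        // (x + width, y + height)
    Point bottomLeft  = new Point(box.x, box.y + box.height);

    // Rotated case: the Rect only describes the upright box that covers the rotated
    // rectangle, so the real corners have to come from a RotatedRect instead.
    RotatedRect rotated = Imgproc.minAreaRect(new MatOfPoint2f(contour.toArray()));
    Point[] corners = new Point[4];
    rotated.points(corners);                             // the four actual corners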

I also want to compare two algorithms for detecting a sheet of paper in an image:

  • Canny edges → largest contour → largest rectangle → find corners → perspective transform

  • Canny edges → Hough lines → intersection of lines → perspective transform (a rough sketch of the intersection step follows below)
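
For the second pipeline, the line-intersection step could be sketched roughly as follows (the rho/theta values are the ones Imgproc.HoughLines produces; the helper name is only illustrative):

    // Each Hough line is given as (rho, theta): x*cos(theta) + y*sin(theta) = rho.
    // Solving the two line equations with Cramer's rule gives the intersection point.
    static Point intersect(double rho1, double theta1, double rho2, double theta2) {
        double det = Math.cos(theta1) * Math.sin(theta2)
                   - Math.sin(theta1) * Math.cos(theta2);   // = sin(theta2 - theta1)
        if (Math.abs(det) < 1e-9) {
            return null;                                     // (nearly) parallel lines
        }
        double x = (rho1 * Math.sin(theta2) - rho2 * Math.sin(theta1)) / det;
        double y = (rho2 * Math.cos(theta1) - rho1 * Math.cos(theta2)) / det;
        return new Point(x, y);
    }

Intersecting the two strongest near-horizontal lines with the two strongest near-vertical ones would then give the four corners needed for the perspective transform.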

What I want to ask is: given a Rect object, how do I get all four corners of this rectangle?

Thanks in advance.

android algorithm image-processing opencv feature-detection
2 answers

I found the answer to my own question! It was easy, but this is what happens when you start out with not-so-relevant documentation.

I had been trying to get the corners directly from the plain Rect object, which the OpenCV implementation does not provide for a rotated rectangle, and that is why it seemed almost impossible.

I followed the standard Stack Overflow code for detecting the contour with the largest area, and the corners can then be detected easily using approxPolyDP:

    // convert the image to grayscale
    Imgproc.cvtColor(imgSource, imgSource, Imgproc.COLOR_BGR2GRAY);
    // detect edges (8-bit input)
    Imgproc.Canny(imgSource, imgSource, 50, 50);
    // apply gaussian blur to smoothen the lines of dots
    Imgproc.GaussianBlur(imgSource, imgSource, new org.opencv.core.Size(5, 5), 5);

    // find the contours
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Imgproc.findContours(imgSource, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

    double maxArea = -1;
    int maxAreaIdx = -1;
    Log.d("size", Integer.toString(contours.size()));
    MatOfPoint temp_contour = contours.get(0); // the largest is at index 0 for the starting point
    MatOfPoint2f approxCurve = new MatOfPoint2f();
    MatOfPoint largest_contour = contours.get(0);
    List<MatOfPoint> largest_contours = new ArrayList<MatOfPoint>();
    //Imgproc.drawContours(imgSource, contours, -1, new Scalar(0, 255, 0), 1);

    for (int idx = 0; idx < contours.size(); idx++) {
        temp_contour = contours.get(idx);
        double contourarea = Imgproc.contourArea(temp_contour);
        // compare this contour to the previous largest contour found
        if (contourarea > maxArea) {
            // check if this contour is approximately a quadrilateral
            MatOfPoint2f new_mat = new MatOfPoint2f(temp_contour.toArray());
            int contourSize = (int) temp_contour.total();
            MatOfPoint2f approxCurve_temp = new MatOfPoint2f();
            Imgproc.approxPolyDP(new_mat, approxCurve_temp, contourSize * 0.05, true);
            if (approxCurve_temp.total() == 4) {
                maxArea = contourarea;
                maxAreaIdx = idx;
                approxCurve = approxCurve_temp;
                largest_contour = temp_contour;
            }
        }
    }

    Imgproc.cvtColor(imgSource, imgSource, Imgproc.COLOR_BayerBG2RGB);
    sourceImage = Highgui.imread(Environment.getExternalStorageDirectory().getAbsolutePath() + "/scan/p/1.jpg");

    // read the four corners of the approximated quadrilateral
    double[] temp_double;
    temp_double = approxCurve.get(0, 0);
    Point p1 = new Point(temp_double[0], temp_double[1]);
    //Core.circle(imgSource, p1, 55, new Scalar(0, 0, 255));
    //Imgproc.warpAffine(sourceImage, dummy, rotImage, sourceImage.size());
    temp_double = approxCurve.get(1, 0);
    Point p2 = new Point(temp_double[0], temp_double[1]);
    //Core.circle(imgSource, p2, 150, new Scalar(255, 255, 255));
    temp_double = approxCurve.get(2, 0);
    Point p3 = new Point(temp_double[0], temp_double[1]);
    //Core.circle(imgSource, p3, 200, new Scalar(255, 0, 0));
    temp_double = approxCurve.get(3, 0);
    Point p4 = new Point(temp_double[0], temp_double[1]);
    //Core.circle(imgSource, p4, 100, new Scalar(0, 0, 255));

    // the source points for the perspective transform
    List<Point> source = new ArrayList<Point>();
    source.add(p1);
    source.add(p2);
    source.add(p3);
    source.add(p4);
    Mat startM = Converters.vector_Point2f_to_Mat(source);
    Mat result = warp(sourceImage, startM);
    return result;
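
One caveat on top of the code above: approxPolyDP does not guarantee the order of the four points, so they may not line up with the destination corners used in warp() (top-left, bottom-left, bottom-right, top-right). A common trick, sketched here for a roughly upright page (the helper name is illustrative and java.util.Arrays is assumed to be imported), is to order them by coordinate sums and differences:

    // Order four corners as top-left, bottom-left, bottom-right, top-right,
    // matching the destination order used in warp(). Assumes the quadrilateral
    // is roughly upright.
    static List<Point> orderCorners(Point p1, Point p2, Point p3, Point p4) {
        List<Point> pts = Arrays.asList(p1, p2, p3, p4);
        Point tl = p1, bl = p1, br = p1, tr = p1;
        for (Point p : pts) {
            if (p.x + p.y < tl.x + tl.y) tl = p;   // smallest x + y -> top-left
            if (p.x - p.y < bl.x - bl.y) bl = p;   // smallest x - y -> bottom-left
            if (p.x + p.y > br.x + br.y) br = p;   // largest  x + y -> bottom-right
            if (p.x - p.y > tr.x - tr.y) tr = p;   // largest  x - y -> top-right
        }
        return Arrays.asList(tl, bl, br, tr);
    }

The ordered list can then be passed to Converters.vector_Point2f_to_Mat() in place of source.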

The function used for the perspective transform is given below:

    public Mat warp(Mat inputMat, Mat startM) {
        int resultWidth = 1000;
        int resultHeight = 1000;
        Mat outputMat = new Mat(resultWidth, resultHeight, CvType.CV_8UC4);

        // destination corners: top-left, bottom-left, bottom-right, top-right
        // of a resultWidth x resultHeight upright rectangle
        Point ocvPOut1 = new Point(0, 0);
        Point ocvPOut2 = new Point(0, resultHeight);
        Point ocvPOut3 = new Point(resultWidth, resultHeight);
        Point ocvPOut4 = new Point(resultWidth, 0);
        List<Point> dest = new ArrayList<Point>();
        dest.add(ocvPOut1);
        dest.add(ocvPOut2);
        dest.add(ocvPOut3);
        dest.add(ocvPOut4);
        Mat endM = Converters.vector_Point2f_to_Mat(dest);

        // map the detected corners (startM) onto the upright rectangle (endM)
        Mat perspectiveTransform = Imgproc.getPerspectiveTransform(startM, endM);
        Imgproc.warpPerspective(inputMat, outputMat, perspectiveTransform,
                new Size(resultWidth, resultHeight), Imgproc.INTER_CUBIC);

        return outputMat;
    }
  1. Find the coordinates of the left, right, top, and bottom points of the rect.
  2. Compute:
     min_x = min(left.x, right.x, top.x, bottom.x)
     min_y = min(left.y, right.y, top.y, bottom.y)
     max_x = max(left.x, right.x, top.x, bottom.x)
     max_y = max(left.y, right.y, top.y, bottom.y)
  3. The corner coordinates are the points formed from min_x, min_y, max_x, and max_y.
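
A rough Java sketch of this answer (left, right, top, and bottom are assumed to be the extreme points already found in step 1); note that this yields the corners of the upright bounding box, not of a rotated rectangle:

    // Bounding-box corners from the four extreme points.
    double minX = Math.min(Math.min(left.x, right.x), Math.min(top.x, bottom.x));
    double minY = Math.min(Math.min(left.y, right.y), Math.min(top.y, bottom.y));
    double maxX = Math.max(Math.max(left.x, right.x), Math.max(top.x, bottom.x));
    double maxY = Math.max(Math.max(left.y, right.y), Math.max(top.y, bottom.y));

    Point topLeft     = new Point(minX, minY);
    Point topRight    = new Point(maxX, minY);
    Point bottomRight = new Point(maxX, maxY);
    Point bottomLeft  = new Point(minX, maxY);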