iOS: getting a CGPoint from an OpenCV cv::Point

[Image: points drawn on an image by an OpenCV algorithm]

In the image above, you can see the points drawn on the image by an OpenCV algorithm.

I want to draw a UIView at each of these points so that the user can crop the image.

I don't understand how to access these points in order to add the UIViews.

I tried reading the cv::Point values, but they differ from (and exceed) the view's width and height.

    static cv::Mat drawSquares( cv::Mat& image, const std::vector<std::vector<cv::Point> >& squares )
    {
        int max_X = 0, max_Y = 0;
        int min_X = 999, min_Y = 999;

        for( size_t i = 0; i < squares.size(); i++ )
        {
            const cv::Point* p = &squares[i][0];
            int n = (int)squares[i].size();
            // Log the number of corners and the first corner of this square.
            NSLog(@"Squares %d %d %d", n, p->x, p->y);
            // Draw the square outline on the image in green.
            polylines(image, &p, &n, 1, true, cv::Scalar(0,255,0), 3, cv::LINE_AA);
        }
        return image;
    }

In the code above, the drawSquares method draws the squares. I log the x and y coordinates of the points with NSLog, but these values are not in the device's coordinate system.

Can someone help me achieve this, or suggest an alternative that meets my requirement?

thanks

2 answers

In fact, the coordinates differ because of the size of the image.

For example, if the image fits within the bounds of the screen, there is no problem: you can use the cv::Point directly as a CGPoint.

But if, as in this case, the image size is 3000 × 2464, which roughly corresponds to a full-resolution camera capture, then you have to apply a scaling formula.

Below is the approach I found on the internet; it helped me derive a CGPoint from a cv::Point when the image is larger than the screen.

Get image scale factor

    - (CGFloat)contentScale
    {
        CGSize imageSize = self.image.size;
        CGFloat imageScale = fminf(CGRectGetWidth(self.bounds) / imageSize.width,
                                   CGRectGetHeight(self.bounds) / imageSize.height);
        return imageScale;
    }
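For reference, here is a rough Swift equivalent of the same aspect-fit scale calculation. This is only a sketch; imgView is an assumed name for the UIImageView that displays the OpenCV-processed image.

    import UIKit

    // Sketch: aspect-fit scale factor between the displayed image and its view.
    // "imgView" is an assumed name, not from the original answer.
    func contentScale(of imgView: UIImageView) -> CGFloat {
        guard let image = imgView.image else { return 1 }
        // The smaller of the width and height ratios, as in contentScale above.
        return min(imgView.bounds.width / image.size.width,
                   imgView.bounds.height / image.size.height)
    }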

Suppose the point is held in the variable _pointA; then you can convert it with the following formula (scaleFactor is the value returned by contentScale above):

    tmp = CGPointMake(_pointA.frame.origin.x / scaleFactor,
                      _pointA.frame.origin.y / scaleFactor);
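Note that dividing by the scale factor maps a position measured in the view back into image pixels; to go the other way, from a cv::Point in image pixels to a position in the view, multiply by the same scale factor instead. A minimal Swift sketch, assuming pointX and pointY are the values logged from the cv::Point and scaleFactor comes from the contentScale method above:

    import UIKit

    // Sketch: convert an OpenCV point (image pixels) to view coordinates.
    // pointX, pointY and scaleFactor are assumed inputs, not part of the answer.
    func viewPoint(imageX pointX: CGFloat, imageY pointY: CGFloat,
                   scaleFactor: CGFloat) -> CGPoint {
        // Image pixels -> view points: multiply by the aspect-fit scale factor.
        return CGPoint(x: pointX * scaleFactor, y: pointY * scaleFactor)
    }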

This is in Swift 3, in the Swift class to which you return the cv::Points:

  • Get the x and y dimensions of the image you are capturing from the camera (the AV capture session).
  • Divide the x and y dimensions of the UIView you use to render the image by the capture session's image size in X and Y.
  • Multiply each point's x and y coordinates by the resulting X and Y scale factors.

    {
        let imageScaleX = imgView.bounds.width / (newCameraHelper?.dimensionX)!
        let imageScaleY = imgView.bounds.height / (newCameraHelper?.dimensionY)!

        for point in Squares {
            let x = point.x * imageScaleX
            let y = point.y * imageScaleY
        }
    }
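To tie this back to the original question: once x and y are in view coordinates, they can be used to place small UIViews (crop handles) on top of the image view. A rough sketch under the same assumptions (imgView is the image view; the 10-point handle size is an arbitrary choice):

    import UIKit

    // Sketch: add a small handle view centered on a converted point so the
    // user can see (and later drag) the detected corner.
    func addHandle(at point: CGPoint, to imgView: UIImageView) {
        let handle = UIView(frame: CGRect(x: point.x - 5, y: point.y - 5,
                                          width: 10, height: 10))
        handle.backgroundColor = .green
        handle.layer.cornerRadius = 5
        imgView.isUserInteractionEnabled = true
        imgView.addSubview(handle)
    }

Each handle would be created inside the for loop above, once per converted point.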
