Glasses detection

What I'm trying to do is measure the thickness of the glasses frame. I had the idea of measuring the thickness of the frame's outline (maybe that is the best way?). So far I have outlined the frame, but there are gaps where the lines do not meet. I was thinking about using HoughLinesP, but I'm not sure if this is what I need.

So far I have completed the following steps:

  • Convert the image to grayscale
  • Create an ROI around the eye / glasses
  • Blur the image
  • Dilate the image (done to deal with the thin lines of the glasses)
  • Canny edge detection
  • Find contours

Here are the results:

This is my code:

//convert to grayscale
cv::Mat grayscaleImg;
cv::cvtColor( img, grayscaleImg, CV_BGR2GRAY );

//create ROI
cv::Mat eyeAreaROI(grayscaleImg, centreEyesRect);
cv::imshow("roi", eyeAreaROI);

//blur
cv::Mat blurredROI;
cv::blur(eyeAreaROI, blurredROI, Size(3,3));
cv::imshow("blurred", blurredROI);

//dilate thin lines
cv::Mat dilated_dst;
int dilate_elem = 0;
int dilate_size = 1;
int dilate_type = MORPH_RECT;

cv::Mat element = getStructuringElement(dilate_type,
    cv::Size(2*dilate_size + 1, 2*dilate_size+1),
    cv::Point(dilate_size, dilate_size));

cv::dilate(blurredROI, dilated_dst, element);
cv::imshow("dilate", dilated_dst);

//edge detection
int lowThreshold = 100;
int ratio = 3;
int kernel_size = 3;

cv::Canny(dilated_dst, dilated_dst, lowThreshold, lowThreshold*ratio, kernel_size);

//create matrix of the same type and size as ROI
Mat dst;
dst.create(eyeAreaROI.size(), dilated_dst.type());
dst = Scalar::all(0);

dilated_dst.copyTo(dst, dilated_dst);
cv::imshow("edges", dst);

//join the lines and fill in
vector<Vec4i> hierarchy;
vector<vector<Point>> contours;

cv::findContours(dilated_dst, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
cv::imshow("contours", dilated_dst);

I'm not quite sure what the next steps should be, or, as mentioned above, whether I should use HoughLinesP and how to implement it. Any help is much appreciated!
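For reference, this is roughly how I understand HoughLinesP would be called on the edge image dst from my code above; the parameter values are only guesses I haven't tuned, and I don't know if this is the right approach at all:

std::vector<cv::Vec4i> lines;
// rho = 1 px, theta = 1 degree; threshold, minLineLength and maxLineGap are untuned guesses
cv::HoughLinesP(dst, lines, 1, CV_PI/180, 50, 30, 10);

// draw the detected segments on a colour copy of the ROI just to inspect them
cv::Mat houghVis;
cv::cvtColor(eyeAreaROI, houghVis, CV_GRAY2BGR);
for(size_t i=0; i<lines.size(); ++i)
{
    cv::Vec4i l = lines[i];
    cv::line(houghVis, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]), cv::Scalar(0,0,255), 1);
}
cv::imshow("hough", houghVis);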

c++ image-processing opencv hough-transform canny-operator
2 answers

I think there are two main problems.

  • segment the glasses frame

  • find the thickness of the segmented frame

For now I will post a way to segment the glasses frame in your sample image. It might work for other images too, but you will probably have to adjust the parameters, or at least you can reuse the main ideas.

Main idea: First, find the biggest contour in the image, which should be the glasses. Second, find the two biggest contours within that biggest contour, which should be the lenses inside the frame!

I use this image as input (which should be your blurred but not dilated image):

[image: blurred input image]

// this function finds the biggest X contours. Probably there are faster ways, but it should work...
std::vector<std::vector<cv::Point>> findBiggestContours(std::vector<std::vector<cv::Point>> contours, int amount)
{
    std::vector<std::vector<cv::Point>> sortedContours;

    if(amount <= 0) amount = contours.size();
    if(amount > contours.size()) amount = contours.size();

    for(int chosen = 0; chosen < amount; )
    {
        double biggestContourArea = 0;
        int biggestContourID = -1;
        for(unsigned int i=0; i<contours.size() && contours.size(); ++i)
        {
            double tmpArea = cv::contourArea(contours[i]);
            if(tmpArea > biggestContourArea)
            {
                biggestContourArea = tmpArea;
                biggestContourID = i;
            }
        }

        if(biggestContourID >= 0)
        {
            //std::cout << "found area: " << biggestContourArea << std::endl;
            // found biggest contour
            // add contour to sorted contours vector:
            sortedContours.push_back(contours[biggestContourID]);
            chosen++;

            // remove biggest contour from original vector:
            contours[biggestContourID] = contours.back();
            contours.pop_back();
        }
        else
        {
            // should never happen except for broken contours with size 0?!?
            return sortedContours;
        }
    }

    return sortedContours;
}

int main()
{
    cv::Mat input = cv::imread("../Data/glass2.png", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat inputColors = cv::imread("../Data/glass2.png"); // used for displaying later
    cv::imshow("input", input);

    //edge detection
    int lowThreshold = 100;
    int ratio = 3;
    int kernel_size = 3;

    cv::Mat canny;
    cv::Canny(input, canny, lowThreshold, lowThreshold*ratio, kernel_size);
    cv::imshow("canny", canny);

    // close gaps with "close operator"
    cv::Mat mask = canny.clone();
    cv::dilate(mask,mask,cv::Mat());
    cv::dilate(mask,mask,cv::Mat());
    cv::dilate(mask,mask,cv::Mat());
    cv::erode(mask,mask,cv::Mat());
    cv::erode(mask,mask,cv::Mat());
    cv::erode(mask,mask,cv::Mat());
    cv::imshow("closed mask",mask);

    // extract outermost contour
    std::vector<cv::Vec4i> hierarchy;
    std::vector<std::vector<cv::Point>> contours;
    //cv::findContours(mask, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
    cv::findContours(mask, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    // find biggest contour which should be the outer contour of the frame
    std::vector<std::vector<cv::Point>> biggestContour;
    biggestContour = findBiggestContours(contours,1); // find the one biggest contour
    if(biggestContour.size() < 1)
    {
        std::cout << "Error: no outer frame of glasses found" << std::endl;
        return 1;
    }

    // draw contour on an empty image
    cv::Mat outerFrame = cv::Mat::zeros(mask.rows, mask.cols, CV_8UC1);
    cv::drawContours(outerFrame,biggestContour,0,cv::Scalar(255),-1);
    cv::imshow("outer frame border", outerFrame);

    // now find the glasses which should be the outer contours within the frame. therefore erode the outer border ;)
    cv::Mat glassesMask = outerFrame.clone();
    cv::erode(glassesMask,glassesMask, cv::Mat());
    cv::imshow("eroded outer",glassesMask);

    // after erosion, if we dilate, it's an open operator which can be used to clean the image.
    cv::Mat cleanedOuter;
    cv::dilate(glassesMask,cleanedOuter, cv::Mat());
    cv::imshow("cleaned outer",cleanedOuter);

    // use the outer frame mask as a mask for copying canny edges. The result should be the inner edges inside the frame only
    cv::Mat glassesInner;
    canny.copyTo(glassesInner, glassesMask);

    // there is a small gap in the contour which unfortunately can't be closed with a closing operator...
    cv::dilate(glassesInner, glassesInner, cv::Mat());
    //cv::erode(glassesInner, glassesInner, cv::Mat());
    // this part was cheated... in fact we would like to erode directly after dilation to not modify the thickness but just close small gaps.
    cv::imshow("innerCanny", glassesInner);

    // extract contours from within the frame
    std::vector<cv::Vec4i> hierarchyInner;
    std::vector<std::vector<cv::Point>> contoursInner;
    //cv::findContours(glassesInner, contoursInner, hierarchyInner, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
    cv::findContours(glassesInner, contoursInner, hierarchyInner, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    // find the two biggest contours which should be the lenses within the frame
    std::vector<std::vector<cv::Point>> biggestInnerContours;
    biggestInnerContours = findBiggestContours(contoursInner,2); // find the two biggest contours
    if(biggestInnerContours.size() < 1)
    {
        std::cout << "Error: no inner frames of glasses found" << std::endl;
        return 1;
    }

    // draw the 2 biggest contours which should be the inner lenses
    cv::Mat innerGlasses = cv::Mat::zeros(mask.rows, mask.cols, CV_8UC1);
    for(unsigned int i=0; i<biggestInnerContours.size(); ++i)
        cv::drawContours(innerGlasses,biggestInnerContours,i,cv::Scalar(255),-1);

    cv::imshow("inner frame border", innerGlasses);

    // since we dilated earlier and didn't erode right afterwards, we have to erode here... this is a bit of cheating :-(
    cv::erode(innerGlasses,innerGlasses,cv::Mat() );

    // remove the inner lenses from the frame mask
    cv::Mat fullGlassesMask = cleanedOuter - innerGlasses;
    cv::imshow("complete glasses mask", fullGlassesMask);

    // color code the result to get an impression of segmentation quality
    cv::Mat outputColors1 = inputColors.clone();
    cv::Mat outputColors2 = inputColors.clone();
    for(int y=0; y<fullGlassesMask.rows; ++y)
        for(int x=0; x<fullGlassesMask.cols; ++x)
        {
            if(!fullGlassesMask.at<unsigned char>(y,x))
                outputColors1.at<cv::Vec3b>(y,x)[1] = 255;
            else
                outputColors2.at<cv::Vec3b>(y,x)[1] = 255;
        }

    cv::imshow("output", outputColors1);

    /*
    cv::imwrite("../Data/Output/face_colored.png", outputColors1);
    cv::imwrite("../Data/Output/glasses_colored.png", outputColors2);
    cv::imwrite("../Data/Output/glasses_fullMask.png", fullGlassesMask);
    */

    cv::waitKey(-1);
    return 0;
}

I get this result for segmentation:

[image: segmentation result]

Overlaying it on the original image gives an impression of the quality:

[image: segmentation overlaid on the original image]

and the inverse:

[image: inverse overlay]

There are some tricky parts in the code and it has not been cleaned up yet, but I hope it is understandable.

The next step would be to compute the thickness of the segmented frame. My suggestion is to compute the distance transform of the inverted mask. From that you will want to compute a ridge detection, or skeletonize the mask, to find the ridge. After that, use the median value of the ridge distances.
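A rough, untested sketch of that idea, appended to the end of the code above. Since fullGlassesMask is already white where the frame is, I apply cv::distanceTransform to it directly; a proper skeletonization would be more robust than the naive local-maximum check used here, and you need to add #include <algorithm> for std::sort:

// estimate frame thickness from fullGlassesMask
cv::Mat dist;
cv::distanceTransform(fullGlassesMask, dist, CV_DIST_L2, 3); // distance to nearest background pixel

// very naive ridge detection: keep pixels that are local maxima of the distance map
std::vector<float> ridgeDistances;
for(int y=1; y<dist.rows-1; ++y)
    for(int x=1; x<dist.cols-1; ++x)
    {
        float d = dist.at<float>(y,x);
        if(d <= 0) continue;
        bool isLocalMax = true;
        for(int dy=-1; dy<=1 && isLocalMax; ++dy)
            for(int dx=-1; dx<=1; ++dx)
                if(dist.at<float>(y+dy,x+dx) > d) { isLocalMax = false; break; }
        if(isLocalMax) ridgeDistances.push_back(d);
    }

if(!ridgeDistances.empty())
{
    std::sort(ridgeDistances.begin(), ridgeDistances.end());
    float medianRidge = ridgeDistances[ridgeDistances.size()/2];
    // the ridge value is the distance to one side of the frame, so the thickness is roughly twice that
    std::cout << "estimated frame thickness: " << 2.0f*medianRidge << " px" << std::endl;
}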

Anyway, I hope this post helps you a little, even though it is not a complete solution.


Depending on the lighting, frame colour, etc., this may or may not work, but what about simple color thresholding to segment the frame? The frame colour will usually be much darker than human skin. You would end up with a binary image (black and white only), and by counting the number (area) of black pixels you get the area of the frame.
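A minimal sketch of that idea; roi is assumed to be the colour eye/glasses region from the question, and the threshold value of 60 is just a guess that would have to be adapted to the lighting:

cv::Mat gray, frameMask;
cv::cvtColor(roi, gray, CV_BGR2GRAY);                          // roi = colour eye/glasses region (assumed)
cv::threshold(gray, frameMask, 60, 255, CV_THRESH_BINARY_INV); // dark pixels (frame candidates) become white
int frameAreaPixels = cv::countNonZero(frameMask);             // number of pixels classified as frame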

Another possible way would be to get better edge detection by adjusting the dilation, the blurring, or both, until you get better contours. You would also need to distinguish the frame contour from the lens contours, and then apply cvContourArea.
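For example, once the outer frame contour and the two lens contours have been found (the names below are placeholders, not variables from the code above), the frame area could be estimated with the C++ equivalent cv::contourArea:

// hypothetical contours: outerContour = outer frame outline, lensContours = the two lens outlines
double frameArea = cv::contourArea(outerContour)
                 - cv::contourArea(lensContours[0])
                 - cv::contourArea(lensContours[1]);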

