I want to use the distanceTransform() function to find the minimum distance from each non-zero pixel to the nearest zero pixel, and also the position of that closest zero pixel. I call the overload of the function that takes the labelType flag, set to DIST_LABEL_PIXEL. Everything works fine, and I get the distances and the labels of the nearest zero pixels.
Now I want to convert the labels back into pixel locations. I assumed the indexing would be something like idx = (row * cols + col), but it turns out that OpenCV simply counts the zero pixels and uses that count as the label. So if I get 123 as the label of the nearest pixel, it means the 123rd zero pixel is the closest one.
How does OpenCV count them? In what order?
Is there an efficient way to map the labels back to pixel locations? Obviously, I could count the zero pixels myself and keep track of their positions, if I knew the order OpenCV uses, but that seems clumsy and not very efficient.
Is there a good reason for the indexing they chose? I mean, does it have any advantages over absolute indexing?
Thanks in advance.
EDIT: Here is a minimal example:
#include <opencv2/imgproc.hpp>
#include <iostream>

using namespace cv;
using namespace std;

Mat mask = Mat::ones(100, 100, CV_8U);
mask.at<uchar>(50, 50) = 0;
Mat dist, labels;
distanceTransform(mask, dist, labels, DIST_L2, DIST_MASK_PRECISE, DIST_LABEL_PIXEL);
cout << labels.at<int>(0, 0) << endl;
This prints 1, but how am I supposed to get from that label to the position (50, 50)?