Using an adaptive threshold mask?

I am writing a small C++ program using the OpenCV 2.3 API. I have a problem performing an adaptive threshold under a non-rectangular mask.

So far, I have been performing an adaptive threshold on the entire image and then masking the result. I now understand that in my case this was a mistake, because masked pixels were still used to compute the threshold of my pixels of interest (whereas I just want to exclude them from the analysis). However, unlike functions such as cv::norm, cv::adaptiveThreshold does not explicitly support a mask.
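To make the issue concrete, here is a toy numpy illustration (the pixel values and mask are invented, not from any real image) of how masked-off pixels contaminate the local mean that the adaptive threshold compares against:

```python
import numpy as np

# A 3x3 neighbourhood: the unmasked pixels are uniformly bright (200),
# but the masked-off pixels are dark (0).
patch = np.array([[200, 200, 200],
                  [  0, 200, 200],
                  [  0,   0, 200]], dtype=float)
mask = np.array([[1, 1, 1],
                 [0, 1, 1],
                 [0, 0, 1]], dtype=bool)  # True = pixel of interest

naive_mean = patch.mean()         # masked zeros drag the mean down, ~133.3
masked_mean = patch[mask].mean()  # only unmasked neighbours: 200.0
```

With a naive threshold-then-mask approach the comparison uses `naive_mean`, so pixels near the mask boundary are thresholded against a value that is far too low.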

Do you know of an obvious solution or workaround? Thanks a lot for your suggestions, Quentin.

2 answers

I wrote some Python code (sorry, not C++) that performs an adaptive threshold under a mask. It is not very fast, but it does what you want, and you can use it as the basis for C++ code. It works as follows:

  • Sets masked image pixels to zero.
  • Counts, for each pixel, the number of unmasked neighbours within the convolution block.
  • Convolves the image and divides by the number of unmasked neighbours inside the block. This gives the mean value of each pixel's unmasked neighbourhood, mean_conv.
  • Thresholds by comparing the image against the neighbourhood means mean_conv.
  • Adds the masked-off (un-thresholded) part of the image back.

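The heart of these steps is that the masked neighbourhood mean can be written as conv(image × mask) / conv(mask). Here is a minimal pure-numpy sketch of that identity (toy values; unlike the code below, this simplification includes the centre pixel in its own neighbourhood):

```python
import numpy as np

def box_sum(a, size):
    """Sum of the size x size block around each pixel (zero padding)."""
    pad = size // 2
    p = np.pad(a, pad, mode="constant")
    out = np.zeros(a.shape, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out

img = np.array([[10, 20, 30],
                [40, 50, 60],
                [70, 80, 90]], dtype=float)
mask = np.array([[1, 1, 0],
                 [1, 1, 0],
                 [1, 1, 1]], dtype=float)  # 1 = unmasked

# Mean over unmasked neighbours only: masked pixels contribute zero to the
# numerator and are not counted in the denominator.
mean_conv = box_sum(img * mask, 3) / box_sum(mask, 3)
```

At the centre pixel, `mean_conv` is the mean of the seven unmasked pixels, untouched by the two masked ones.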

The images show the original image, the mask, and the final processed image.

Here is the code:

import cv
import numpy
from scipy import signal


def thresh(a, b, max_value, C):
    return max_value if a > b - C else 0


def mask(a, b):
    return a if b > 100 else 0


def unmask(a, b, c):
    return b if c > 100 else a


v_unmask = numpy.vectorize(unmask)
v_mask = numpy.vectorize(mask)
v_thresh = numpy.vectorize(thresh)


def block_size(size):
    block = numpy.ones((size, size), dtype='d')
    block[(size - 1) / 2, (size - 1) / 2] = 0
    return block


def get_number_neighbours(mask, block):
    '''returns number of unmasked neighbours of every element within block'''
    mask = mask / 255.0
    return signal.convolve2d(mask, block, mode='same', boundary='symm')


def masked_adaptive_threshold(image, mask, max_value, size, C):
    '''thresholds only using the unmasked elements'''
    block = block_size(size)
    conv = signal.convolve2d(image, block, mode='same', boundary='symm')
    mean_conv = conv / get_number_neighbours(mask, block)
    return v_thresh(image, mean_conv, max_value, C)


image = cv.LoadImageM("image.png", cv.CV_LOAD_IMAGE_GRAYSCALE)
mask = cv.LoadImageM("mask.png", cv.CV_LOAD_IMAGE_GRAYSCALE)

# change the images to numpy arrays
original_image = numpy.asarray(image)
mask = numpy.asarray(mask)

# Masks the image, by removing all masked pixels.
# Elements where mask > 100 will be processed
image = v_mask(original_image, mask)

# convolution parameters, size and C are crucial. See discussion in link below.
image = masked_adaptive_threshold(image, mask, max_value=255, size=7, C=5)

# puts the original masked-off region of the image back
image = v_unmask(original_image, image, mask)

# change to a suitable type for opencv
image = image.astype(numpy.uint8)

# convert back to cvmat
image = cv.fromarray(image)

cv.ShowImage('image', image)
#cv.SaveImage('final.png', image)
cv.WaitKey(0)

After writing this, I found a great link with a good explanation and lots of sample images; I used their text image for the example above.

Note: numpy masked arrays do not seem to be respected by scipy's signal.convolve2d(), so the workarounds above were necessary.


Following your advice, and after reading your link, I wrote this little C++ function. It is only about 1.5× slower than cv::adaptiveThreshold, but I can probably improve it.

void adaptiveThresholdMask(const cv::Mat src, cv::Mat &dst, double maxValue,
                           cv::Mat mask, int thresholdType, int blockSize,
                           double C)
{
    cv::Mat img, invertMask, noN, conv,
            kernel(cv::Size(blockSize, blockSize), CV_32F);

    /* Makes a copy of the source image */
    src.copyTo(img);

    /* Negates the mask */
    cv::bitwise_not(mask, invertMask);

    /* Sets to 0 all pixels outside the mask (saturating uint8 subtraction) */
    img = img - invertMask;

    /* The two following tasks are both intensive and
     * can be done in parallel (here with OpenMP) */
    #pragma omp parallel sections
    {
        #pragma omp section
        {
            /* Convolves "img": each pixel takes the average value of
             * all the pixels in the blockSize x blockSize block */
            cv::blur(img, conv, cv::Size(blockSize, blockSize));
        }
        #pragma omp section
        {
            /* The result of blurring "mask" is proportional to the
             * number of unmasked neighbours */
            cv::blur(mask, noN, cv::Size(blockSize, blockSize));
        }
    }

    /* Takes the ratio between the convolved image and the number of
     * neighbours, and subtracts it from the original image (or vice versa) */
    if (thresholdType == cv::THRESH_BINARY_INV) {
        img = 255 * (conv / noN) - img;
    } else {
        img = img - 255 * (conv / noN);
    }

    /* Thresholds by the user-defined C */
    cv::threshold(img, dst, C, maxValue, cv::THRESH_BINARY);

    /* We do not want to keep pixels outside of the mask */
    cv::bitwise_and(mask, dst, dst);
}
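The key step in the function above is the ratio 255 * (conv / noN): because the mask holds 0/255, dividing the blurred image by the blurred mask and rescaling recovers the mean of only the unmasked neighbours. A small numpy sketch (invented toy values; zero padding instead of cv::blur's default border replication, which cancels in the ratio anyway) checks this identity:

```python
import numpy as np

def box_mean(a, size):
    """Averaging filter like cv::blur, but with zero padding for simplicity."""
    pad = size // 2
    p = np.pad(a, pad, mode="constant")
    out = np.zeros(a.shape, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (size * size)

img = np.array([[10, 20, 30],
                [40, 50, 60],
                [70, 80, 90]], dtype=float)
mask255 = 255.0 * np.array([[1, 1, 0],
                            [1, 1, 0],
                            [1, 1, 1]], dtype=float)  # 255 = unmasked

img_zeroed = img * (mask255 > 0)   # like the saturating img - invertMask
ratio = 255.0 * box_mean(img_zeroed, 3) / box_mean(mask255, 3)

# At the centre pixel, the ratio equals the plain mean of the unmasked pixels
unmasked_mean = img[mask255 > 0].mean()
```

The size-of-block factors in numerator and denominator cancel, so only the sum of unmasked values divided by the count of unmasked neighbours remains.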

Thanks again

