OpenCV: color highlighting based on a Gaussian mixture model

I am trying to use the OpenCV EM algorithm to do color highlighting. I use the following code, based on an example in the OpenCV documentation:

    cv::Mat capturedFrame ( height, width, CV_8UC3 );
    int i, j;
    int nsamples = 1000;
    cv::Mat samples ( nsamples, 2, CV_32FC1 );
    cv::Mat labels;
    cv::Mat img = cv::Mat::zeros ( height, height, CV_8UC3 );
    img = capturedFrame;
    cv::Mat sample ( 1, 2, CV_32FC1 );

    CvEM em_model;
    CvEMParams params;

    samples = samples.reshape ( 2, 0 );
    for ( i = 0; i < N; i++ )
    {
        //from the training samples
        cv::Mat samples_part = samples.rowRange ( i*nsamples/N, (i+1)*nsamples/N );

        cv::Scalar mean ( ((i%N)+1)*img.rows/(N1+1), ((i/N1)+1)*img.rows/(N1+1) );
        cv::Scalar sigma ( 30, 30 );
        cv::randn ( samples_part, mean, sigma );
    }
    samples = samples.reshape ( 1, 0 );

    //initialize model parameters
    params.covs = NULL;
    params.means = NULL;
    params.weights = NULL;
    params.probs = NULL;
    params.nclusters = N;
    params.cov_mat_type = CvEM::COV_MAT_SPHERICAL;
    params.start_step = CvEM::START_AUTO_STEP;
    params.term_crit.max_iter = 300;
    params.term_crit.epsilon = 0.1;
    params.term_crit.type = CV_TERMCRIT_ITER | CV_TERMCRIT_EPS;

    //cluster the data
    em_model.train ( samples, Mat(), params, &labels );

    cv::Mat probs;
    probs = em_model.getProbs();

    cv::Mat weights;
    weights = em_model.getWeights();

    cv::Mat modelIndex = cv::Mat::zeros ( img.rows, img.cols, CV_8UC3 );

    for ( i = 0; i < img.rows; i++ )
    {
        for ( j = 0; j < img.cols; j++ )
        {
            sample.at<float>(0) = (float)j;
            sample.at<float>(1) = (float)i;

            int response = cvRound ( em_model.predict ( sample ) );
            modelIndex.data [ modelIndex.cols*i + j ] = response;
        }
    }

My questions are:

First, I want to extract each model (there are only five) and then store the corresponding pixel values in five different matrices. That way I would have the five colors separated. So far I only get the cluster indexes; is there any way to obtain the respective colors as well? To make this easier, I could start by finding the dominant color of each of the five Gaussian components.

Secondly, my data samples here are only "100" and it takes about 3 seconds for them, but I want to do all of this in no more than 30 milliseconds. I know that OpenCV's GMM-based background extraction is very fast, below 20 ms, which suggests it should be possible to do all of this within 30 ms for all 600x800 = 480000 pixels. I found that the predict function is the most time-consuming part.

1 answer

First question:

To do the color highlighting, you first need to train the EM with your input pixels. Afterwards you simply loop over all input pixels and use predict() to classify each of them. I have attached a small example that uses EM to separate foreground and background based on color. It shows how to extract the dominant color (the mean) of each Gaussian and how to access the original pixel color.

    #include <opencv2/opencv.hpp>

    int main(int argc, char** argv)
    {
        cv::Mat source = cv::imread("test.jpg");

        //output images (zero-initialized so unassigned pixels stay black)
        cv::Mat meanImg(source.rows, source.cols, CV_32FC3);
        cv::Mat fgImg = cv::Mat::zeros(source.rows, source.cols, CV_8UC3);
        cv::Mat bgImg = cv::Mat::zeros(source.rows, source.cols, CV_8UC3);

        //convert the input image to float
        cv::Mat floatSource;
        source.convertTo(floatSource, CV_32F);

        //now convert the float image to a column vector of samples (one pixel per row)
        cv::Mat samples(source.rows * source.cols, 3, CV_32FC1);
        int idx = 0;
        for (int y = 0; y < source.rows; y++) {
            cv::Vec3f* row = floatSource.ptr<cv::Vec3f>(y);
            for (int x = 0; x < source.cols; x++) {
                samples.at<cv::Vec3f>(idx++, 0) = row[x];
            }
        }

        //we need just 2 clusters
        cv::EMParams params(2);
        cv::ExpectationMaximization em(samples, cv::Mat(), params);

        //the two dominating colors
        cv::Mat means = em.getMeans();
        //the weights of the two dominant colors
        cv::Mat weights = em.getWeights();

        //we define the foreground as the dominant color with the largest weight
        const int fgId = weights.at<float>(0) > weights.at<float>(1) ? 0 : 1;

        //now classify each of the source pixels
        idx = 0;
        for (int y = 0; y < source.rows; y++) {
            for (int x = 0; x < source.cols; x++) {
                //classify
                const int result = cvRound(em.predict(samples.row(idx++), NULL));
                //get the according mean (dominant color)
                const double* ps = means.ptr<double>(result, 0);

                //set the according mean value in the mean image
                float* pd = meanImg.ptr<float>(y, x);
                //float images need to be in [0..1] range
                pd[0] = ps[0] / 255.0;
                pd[1] = ps[1] / 255.0;
                pd[2] = ps[2] / 255.0;

                //copy the pixel to either the foreground or the background image
                if (result == fgId) {
                    fgImg.at<cv::Vec3b>(y, x) = source.at<cv::Vec3b>(y, x);
                } else {
                    bgImg.at<cv::Vec3b>(y, x) = source.at<cv::Vec3b>(y, x);
                }
            }
        }

        cv::imshow("Means", meanImg);
        cv::imshow("Foreground", fgImg);
        cv::imshow("Background", bgImg);
        cv::waitKey(0);

        return 0;
    }
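If you want the five separate matrices from your first question, the same idea extends directly: build a label image with predict() and use it as a mask to copy each component's pixels into its own matrix. Below is a minimal, hypothetical sketch that reuses the source, samples and em variables from the example above, but assumes the model was trained with cv::EMParams params(5) instead of 2:

    //Hypothetical extension of the example above: split the source image into
    //one matrix per mixture component. Assumes "source", "samples" and "em"
    //exist as above, but the EM was trained with cv::EMParams params(5).
    const int nclusters = 5;

    //label image: which component each pixel was assigned to
    cv::Mat labelImg(source.rows, source.cols, CV_8UC1);
    int idx2 = 0;
    for (int y = 0; y < source.rows; y++) {
        for (int x = 0; x < source.cols; x++) {
            labelImg.at<uchar>(y, x) = (uchar) cvRound(em.predict(samples.row(idx2++), NULL));
        }
    }

    //one output matrix per component, holding only the pixels assigned to it
    std::vector<cv::Mat> clusterImgs(nclusters);
    for (int k = 0; k < nclusters; k++) {
        clusterImgs[k] = cv::Mat::zeros(source.size(), source.type());
        source.copyTo(clusterImgs[k], labelImg == k);
    }

The dominant color of each component is then simply the corresponding row of em.getMeans(), just as in the foreground/background example.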

I tested the code with a sample image and it works pretty well.

Second question:

I have noticed that the maximum number of clusters has a huge impact on performance. So it is better to set it to a very conservative value instead of leaving it empty or setting it to the number of samples as in your example. Furthermore, the documentation mentions an iterative procedure to repeatedly optimize the model with less-constrained parameters; maybe this gives you some speed-up. To learn more, have a look at the docs inside the sample code that is provided for train().
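As a purely illustrative sketch of another way to cut the cost (not from the answer above): since calling predict() for every full-resolution pixel is the expensive part, you can train and classify on a downscaled copy of the frame and then upscale the label map with nearest-neighbour interpolation. This assumes the same legacy cv::EMParams / cv::ExpectationMaximization API used above; the scale factor and cluster count are arbitrary example values:

    //Hypothetical speed-up sketch: train and classify on a downscaled copy of
    //the frame, then upscale the per-pixel labels. API use and parameter
    //values are assumptions, not part of the original answer.
    //"source" is the full-resolution input frame (e.g. your capturedFrame).
    cv::Mat smallImg;
    cv::resize(source, smallImg, cv::Size(), 0.25, 0.25, cv::INTER_AREA); //1/16 of the pixels

    cv::Mat floatSmall;
    smallImg.convertTo(floatSmall, CV_32F);
    //one row per pixel, three columns (B, G, R)
    cv::Mat smallSamples = floatSmall.reshape(1, smallImg.rows * smallImg.cols);

    cv::EMParams params(5); //conservative, fixed number of clusters
    cv::ExpectationMaximization em(smallSamples, cv::Mat(), params);

    cv::Mat smallLabels(smallImg.rows, smallImg.cols, CV_8UC1);
    for (int i = 0; i < smallSamples.rows; i++) {
        smallLabels.at<uchar>(i / smallImg.cols, i % smallImg.cols) =
            (uchar) cvRound(em.predict(smallSamples.row(i), NULL));
    }

    //bring the labels back to full resolution without mixing cluster ids
    cv::Mat fullLabels;
    cv::resize(smallLabels, fullLabels, source.size(), 0, 0, cv::INTER_NEAREST);

Whether the coarser label map is precise enough is something you would have to verify for your application.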
