In this question, I asked how to implement the blurring chain in one step.
Then I learned from the Gaussian blur Wikipedia page that:

> Applying several successive Gaussian blurs to an image has the same effect as applying a single, larger Gaussian blur, whose radius is the square root of the sum of the squares of the blur radii that were actually applied. For example, applying successive Gaussian blurs with radii of 6 and 8 gives the same result as applying a single Gaussian blur of radius 10, since sqrt(6² + 8²) = 10.
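As a minimal sketch of that rule (the function name is mine, purely for illustration):

    // Sketch of the composition rule quoted above: two successive Gaussian
    // blurs of sigma1 and sigma2 behave like one blur of combinedSigma.
    #include <cmath>

    float combinedSigma(float sigma1, float sigma2) {
        return std::sqrt(sigma1 * sigma1 + sigma2 * sigma2);
    }
    // combinedSigma(6.0f, 8.0f) == 10.0f, matching the Wikipedia example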
So, I thought that `blur` and `singleBlur` would be identical in the following code:
    cv::Mat firstLevel;
    float sigma1, sigma2;
    // initialize firstLevel, sigma1 and sigma2
    cv::Mat blur = gaussianBlur(firstLevel, sigma1);
    blur = gaussianBlur(blur, sigma2);
    float singleSigma = std::sqrt(std::pow(sigma1, 2) + std::pow(sigma2, 2));
    cv::Mat singleBlur = gaussianBlur(firstLevel, singleSigma);
    cv::Mat diff = blur != singleBlur; // Equal if no elements disagree
    assert(cv::countNonZero(diff) == 0);
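For what it's worth, the difference can also be quantified instead of tested for exact equality; a sketch (using the `blur` and `singleBlur` matrices from above, mirroring the `absdiff` check used further below):

    // Sketch: print the largest absolute per-pixel difference between the
    // chained result and the single-sigma result.
    cv::Mat absDiff;
    cv::absdiff(blur, singleBlur, absDiff);
    double maxDiff;
    cv::minMaxLoc(absDiff, nullptr, &maxDiff);
    std::cout << "max |blur - singleBlur| = " << maxDiff << std::endl;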
But this assert fails (and in fact, for example, the first row of `blur` differs from the first row of `singleBlur`).
Why?
UPDATE:
Since several comments requested additional information, I am updating the question.
I am trying to parallelize this code. In particular, I am now focused on computing all the blurs at all levels in advance. The serial code (which works correctly) is as follows:
    vector<Mat> blurs((par.numberOfScales+3)*levels, Mat());
    cv::Mat octaveLayer = firstLevel;
    int scaleCycles = par.numberOfScales+2;

    //compute blurs at all layers (not parallelizable)
    for(int i=0; i<levels; i++){
        blurs[i*scaleCycles+1] = octaveLayer.clone();
        for (int j = 1; j < scaleCycles; j++){
            float sigma = par.sigmas[j] * sqrt(sigmaStep * sigmaStep - 1.0f);
            blurs[j+1+i*scaleCycles] = gaussianBlur(blurs[j+i*scaleCycles], sigma);
            if(j == par.numberOfScales)
                octaveLayer = halfImage(blurs[j+1+i*scaleCycles]);
        }
    }
Where:
    Mat halfImage(const Mat &input) {
        Mat n(input.rows/2, input.cols/2, input.type());
        float *out = n.ptr<float>(0);
        for (int r = 0, ri = 0; r < n.rows; r++, ri += 2)
            for (int c = 0, ci = 0; c < n.cols; c++, ci += 2)
                *out++ = input.at<float>(ri, ci);
        return n;
    }

    Mat gaussianBlur(const Mat input, const float sigma) {
        Mat ret(input.rows, input.cols, input.type());
        int size = (int)(2.0 * 3.0 * sigma + 1.0);
        if (size % 2 == 0)
            size++;
        GaussianBlur(input, ret, Size(size, size), sigma, sigma, BORDER_REPLICATE);
        return ret;
    }
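Just to illustrate the kernel-size rule used in `gaussianBlur` above (size = 2·3·sigma + 1, truncated, then bumped to the next odd number), here is a small standalone check:

    // Standalone check of the kernel-size formula used in gaussianBlur().
    #include <iostream>

    int kernelSize(float sigma) {
        int size = (int)(2.0 * 3.0 * sigma + 1.0);
        if (size % 2 == 0) size++;
        return size;
    }

    int main() {
        std::cout << kernelSize(1.0f) << std::endl; // 7
        std::cout << kernelSize(1.6f) << std::endl; // 11 (10 rounded up to odd)
        return 0;
    }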
I apologize for the awkward indexing above, but I tried to keep the original code's scheme (which is admittedly bad, e.g. it starts from 1 instead of 0). The code above runs with scaleCycles=5 and levels=6, so a total of 30 blurred images is generated. I sketch the index layout just below.
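For reference, this is my reading of the index scheme (the helper names are mine, they are not in the original code):

    // Index layout of the blurs vector, for level i and scale j (1 <= j < scaleCycles):
    //   baseIndex(i)    -> unblurred base image of level i (octaveLayer.clone())
    //   blurIndex(i, j) -> j-th blur of level i
    // With scaleCycles = 5, level i occupies slots i*5+1 .. i*5+5; slot 0
    // (and the last few allocated slots) are never written.
    int baseIndex(int i, int scaleCycles) { return i * scaleCycles + 1; }
    int blurIndex(int i, int j, int scaleCycles) { return i * scaleCycles + j + 1; }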
This is the “single blur” version, where I first compute the sigma for each blur (following the Wikipedia formula) and then apply the blurs (note that this is still serial, not parallelizable):
    vector<Mat> singleBlurs((par.numberOfScales+3)*levels, Mat());
    vector<float> singleSigmas(scaleCycles);
    float acc = 0;
    for (int j = 1; j < scaleCycles; j++){
        float sigma = par.sigmas[j] * sqrt(sigmaStep * sigmaStep - 1.0f);
        acc += pow(sigma, 2);
        singleSigmas[j] = sqrt(acc);
    }

    octaveLayer = firstLevel;
    for(int i=0; i<levels; i++){
        singleBlurs[i*scaleCycles+1] = octaveLayer.clone();
        for (int j = 1; j < scaleCycles; j++){
            float sigma = singleSigmas[j];
            std::cout<<"j="<<j<<" sigma="<<sigma<<std::endl;
            singleBlurs[j+1+i*scaleCycles] = gaussianBlur(singleBlurs[j+i*scaleCycles], sigma);
            if(j == par.numberOfScales)
                octaveLayer = halfImage(singleBlurs[j+1+i*scaleCycles]);
        }
    }
Of course, the code above also generates 30 blurs, with the same parameters as the previous version.
And here is the code used to inspect the difference between each entry of `singleBlurs` and `blurs`:
    assert(blurs.size() == singleBlurs.size());
    vector<Mat> blurDiffs(blurs.size());
    for(int i=1; i<levels*scaleCycles; i++){
        cv::Mat diff;
        absdiff(blurs[i], singleBlurs[i], diff);
        std::cout<<"i="<<i<<" diff rows="<<diff.rows<<" cols="<<diff.cols<<std::endl;
        blurDiffs[i] = diff;
        std::cout<<"blurs rows="<<blurs[i].rows<<" cols="<<blurs[i].cols<<std::endl;
        std::cout<<"singleBlurs rows="<<singleBlurs[i].rows<<" cols="<<singleBlurs[i].cols<<std::endl;
        std::cout<<"blurDiffs rows="<<blurDiffs[i].rows<<" cols="<<blurDiffs[i].cols<<std::endl;
        namedWindow("blurDiffs["+std::to_string(i)+"]", WINDOW_AUTOSIZE);
        imshow("blurDiffs["+std::to_string(i)+"]", blurDiffs[i]);
    }
    waitKey(0);
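The blurDiffs_N.jpg files mentioned below were saved from these diff matrices; a sketch of how that can be done (the 8-bit conversion is an assumption, since JPEG output needs 8-bit data):

    // Sketch: write each diff to disk as blurDiffs_<i>.jpg.
    // imwrite() needs 8-bit data for JPEG, so convert the float diff first
    // (this assumes the diff values are already in the 0..255 range).
    cv::Mat diff8u;
    blurDiffs[i].convertTo(diff8u, CV_8U);
    cv::imwrite("blurDiffs_" + std::to_string(i) + ".jpg", diff8u);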
Now, I saw that blurDiffs_1.jpg and blurDiffs_2.jpg are black, but from blurDiffs_3.jpg onwards the images become whiter and whiter, up to blurDiffs_29.jpg. For some reason, blurDiffs_30.jpg is almost completely black again.
The first (correct) version generates 1761 descriptors. The second (incorrect) version generates more than 2,300 descriptors.
I cannot post the blurDiffs matrices themselves because the first ones in particular are very large and post space is limited, so I will post a few samples. I am not posting blurDiffs_1.jpg and blurDiffs_2.jpg because they are completely black. Note that, due to halfImage, the images become smaller and smaller (as expected).
blurDiffs_3.jpg, blurDiffs_6.jpg, blurDiffs_15.jpg and blurDiffs_29.jpg: (sample images attached in the original post)
How the image is read:
    Mat tmp = imread(argv[1]);
    Mat image(tmp.rows, tmp.cols, CV_32FC1, Scalar(0));
    float *out = image.ptr<float>(0);
    unsigned char *in  = tmp.ptr<unsigned char>(0);

    for (size_t i = tmp.rows * tmp.cols; i > 0; i--) {
        *out = (float(in[0]) + in[1] + in[2]) / 3.0f;
        out++;
        in += 3;
    }
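For comparison only, a shorter way to obtain a float grayscale image; note that `cvtColor` uses the weighted BGR-to-gray formula rather than the plain channel average above, so the pixel values come out slightly different:

    // Alternative sketch: weighted grayscale conversion instead of averaging
    // the three channels by hand (gives slightly different pixel values).
    Mat tmp2 = imread(argv[1]);
    Mat gray, image2;
    cvtColor(tmp2, gray, COLOR_BGR2GRAY);
    gray.convertTo(image2, CV_32FC1);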
Someone here suggested dividing diff by 255 to see the real difference, but I am not sure I understood that correctly.
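My best guess at what was meant (a sketch; I may be misreading the suggestion): imshow() multiplies 32-bit float pixel values by 255 for display, so a float diff holding values in the 0..255 range has to be scaled into [0, 1] before displaying it:

    // Sketch: scale the float diff into [0, 1] before displaying it, since
    // imshow() maps float pixel values by multiplying them by 255.
    cv::Mat scaledDiff;
    blurDiffs[i].convertTo(scaledDiff, CV_32F, 1.0 / 255.0);
    cv::imshow("scaled diff", scaledDiff);
    cv::waitKey(0);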
If you need more information, let me know.