Gaussian Blur - Standard Deviation, Radius and Kernel Size

I implemented a Gaussian blur shader in GLSL. I understand the basic concepts behind it: convolution, exploiting separability to split the blur into x and y passes, multiple passes to increase the effective radius...

I still have a few questions:

What is the relationship between sigma and radius?

I have read that sigma is equivalent to the radius, but I don't see how sigma is expressed in pixels. Or is this "radius" just a name for sigma, unrelated to pixels?

How to choose a sigma?

Given that I use several passes to increase sigma, how do I choose a good per-pass sigma so that the passes combine into the overall sigma I want? If the resulting sigma equals the square root of the sum of the squares of the per-pass sigmas, and sigma is equivalent to the radius, then what is a simple way to get any desired radius?
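For concreteness, convolving two Gaussians adds their variances, so sigmas combine in quadrature. A minimal sketch (plain Python, illustrative values) of combining passes and solving for the sigma a remaining pass still needs:

```python
import math

def combined_sigma(sigmas):
    # Convolving Gaussians adds variances, so sigmas add in quadrature.
    return math.sqrt(sum(s * s for s in sigmas))

def remaining_sigma(target, applied):
    # Per-pass sigma still needed to reach a target overall sigma.
    return math.sqrt(target * target - combined_sigma(applied) ** 2)

print(combined_sigma([3.0, 4.0]))   # 5.0
print(remaining_sigma(5.0, [3.0]))  # 4.0
```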

What is a good size for the kernel and how does it relate to sigma?

I have seen that most implementations use a 5x5 kernel. This is probably a good choice for a quick implementation with decent quality, but is there any other reason to choose a different kernel size? How does sigma relate to the kernel size? Should I find a sigma such that the coefficients outside my kernel are negligible, and then just normalize?

+8
image-processing gaussian shader glsl fragment-shader
1 answer

What is the relationship between sigma and radius?

I think your terms here are largely interchangeable, depending on the implementation. Most Gaussian blur implementations in GLSL use a sigma value to control the amount of blur; sigma is the standard deviation of the Gaussian. The "radius" usually refers to the extent of the kernel around the center tap, and taps beyond roughly 3*sigma contribute almost nothing. Both terms are expressed in pixel space.
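To make the sigma/radius relationship concrete, you can check what fraction of a 1-D Gaussian's total weight falls within a given pixel radius (plain Python, using the error function):

```python
import math

def mass_within(radius, sigma):
    # Fraction of a 1-D Gaussian's total weight within +/- radius pixels.
    return math.erf(radius / (sigma * math.sqrt(2.0)))

print(mass_within(1.0, 1.0))  # ~0.6827: radius = sigma covers ~68%
print(mass_within(3.0, 1.0))  # ~0.9973: radius = 3*sigma is nearly lossless
```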

How to choose a sigma?

This depends on how much blur you want, which in turn determines the kernel size to use in the convolution. Larger sigma values produce more blur.

The NVidia implementation uses a kernel size of int(sigma * 3).
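A common rule (assumed here, in line with the 3*sigma coverage mentioned above; this is a sketch, not the NVidia code) is to truncate the kernel at 3 sigma on each side of the center tap, which gives an odd width:

```python
import math

def kernel_size(sigma, truncate=3.0):
    # Odd kernel width covering `truncate` sigmas on each side of the center.
    radius = int(math.ceil(truncate * sigma))
    return 2 * radius + 1

print(kernel_size(1.0))  # 7
print(kernel_size(2.0))  # 13
```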

You can experiment with a smaller kernel and higher sigma values for performance reasons. These are free parameters: together they determine how many pixels are sampled and how strongly each sampled pixel is weighted in the result.

What is a good size for the kernel, and how does it relate to sigma?

Based on the sigma value, you will want to choose an appropriate kernel size. The kernel size determines how many pixels are sampled during the convolution, and sigma determines how strongly each of them is weighted.
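Putting the two together: compute the tap weights from sigma and renormalize so that truncating the kernel does not darken the image. A plain-Python sketch (illustrative only; the hypothetical gaussian_weights helper is not from any particular implementation):

```python
import math

def gaussian_weights(sigma, radius):
    # Discrete 1-D Gaussian taps for offsets -radius..radius, renormalized
    # so the truncated kernel still sums to exactly 1.
    w = [math.exp(-(x * x) / (2.0 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    total = sum(w)
    return [v / total for v in w]

weights = gaussian_weights(2.0, 6)
print(len(weights))                      # 13 taps
print(abs(sum(weights) - 1.0) < 1e-12)   # True: renormalized
```

These weights are what you would upload as uniforms to a separable blur shader, one pass horizontal and one vertical.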

If you post some code, I can explain in more detail. NVidia has a pretty good chapter on how to build a Gaussian kernel. Take a look at Example 40-1.

+15
