What is the need for gamma correction?

I'm having trouble fully understanding the need for gamma correction. I hope you can help me.

Suppose we want to display 256 neighboring pixels that should form a smooth gradient from black to white. To denote their colors, we use linear gray values from 0..255. Because of the non-linearity of the human eye, a monitor should not simply turn these values into linear brightness values: if neighboring pixels had brightnesses of (1/256)*I_max, (2/256)*I_max, and so on, we would see too large a brightness difference between neighboring pixels in the darker region, and the gradient would not look smooth.

Fortunately, the monitor's non-linearity is roughly the inverse of the human eye's. This means that if we put linear gray values 0..255 into the frame buffer, the monitor turns them into non-linear brightness values x^gamma; but since our eye is non-linear in the opposite direction, we nevertheless perceive a smooth, linear gradient. The non-linearity of the monitor and that of our eyes cancel each other out.
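To make this cancellation argument concrete, here is a minimal Python sketch. It is only an illustration and it assumes a display gamma of 2.2 and an eye response modeled as the exact inverse exponent 1/2.2, which is a deliberate simplification of the real (Stevens-style) perceptual response:

```python
# Sketch: how display gamma and visual perception roughly cancel.
# Assumes display gamma 2.2 and a perceptual exponent of exactly
# 1/2.2; both are round-number approximations for illustration.

DISPLAY_GAMMA = 2.2

def monitor_output(code_value):
    """Physical intensity emitted for a frame-buffer value in 0..255."""
    return (code_value / 255.0) ** DISPLAY_GAMMA

def perceived_brightness(intensity):
    """Very rough model of perceived brightness (a power law)."""
    return intensity ** (1.0 / DISPLAY_GAMMA)

for v in (32, 64, 128, 255):
    i = monitor_output(v)          # non-linear in v
    p = perceived_brightness(i)    # ~linear in v again
    print(f"code {v:3d} -> intensity {i:.3f} -> perceived {p:.3f}")
```

Under these assumptions the perceived value comes out as v/255, i.e. the non-linearities cancel exactly, which is the point the question is making.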

So why do we need gamma correction? I read in books that we always want the monitor to output linear brightness values, and that the monitor's non-linearity should therefore be compensated before writing gray values to the frame buffer; this is what gamma correction does. However, my problem is that, as I understand it, we would then not perceive linear brightness (i.e., we would not perceive a smooth gradient) when the monitor produces linear brightness values.

As I understand it, it would be just perfect if we put linear gray values into the frame buffer: the monitor turns these values into non-linear brightness values, and our eye, whose non-linearity is the inverse, again perceives a linear brightness ramp. There would be no need to gamma-adjust the gray values in the frame buffer, and no need to force the monitor to produce linear brightness values.

What is wrong with my way of looking at these things? Thanks

3 answers

Let me "resurrect" this question, because I am now struggling with similar questions and I think I have found an answer; it may be useful to someone else. Or I may be wrong, and someone can tell me :)

I think there is nothing wrong with your thinking. The point is that you do not need to gamma-correct all the time, if you know what you are doing. It depends on what you want to achieve. Let's look at two different cases.

A) Light simulation (a.k.a. rendering). You have a diffuse surface with a light pointed at it, and then you double the light's intensity.

OK. Let's see what happens in the real world in such a situation. Assuming a purely diffuse surface, the intensity of the reflected light is the albedo of the surface, times the intensity of the incoming light, times the cosine of the angle between the incoming light and the surface normal. Nothing more. In particular, when the intensity of the incoming light doubles, the intensity of the reflected light also doubles. This is why light transport is called a linear process. Funnily enough, you will not perceive the surface as twice as bright, because our perception is non-linear (this is modeled by the so-called Stevens power law). To say it again: in the real world, the reflected light doubles, but you do not perceive it as twice as bright.
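Here is a tiny Python sketch of that diffuse-reflection formula, just to show the linearity claim numerically (the function name and the sample numbers are made up for illustration):

```python
import math

def lambert_reflected(albedo, light_intensity, angle_deg):
    """Purely diffuse (Lambertian) reflection:
    albedo * incoming intensity * cos(angle to the normal)."""
    cos_t = max(0.0, math.cos(math.radians(angle_deg)))
    return albedo * light_intensity * cos_t

base    = lambert_reflected(0.5, 1.0, 30.0)
doubled = lambert_reflected(0.5, 2.0, 30.0)
print(doubled / base)  # exactly 2.0: light transport is linear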

Now, how would we simulate this? If we have an sRGB texture holding the albedo of the surface, we first need to linearize it (by removing the gamma, i.e., applying an exponent of 2.2). Now that the albedo is linear and we have the light intensity, we can use the formula above to compute the intensity of the reflected light. Since we are in linear space, doubling the light intensity doubles the output, just as in the real world. Finally, we gamma-correct our result. Because of this, when the rendered image is displayed, the screen applies its gamma and the overall response is linear: the light emitted by the screen when we simulate double intensity is twice the light emitted when we simulate the original intensity. So the light that reaches your eyes from the screen has double the intensity, just as it would if you were looking at a real surface with real lights on it. Of course you will not perceive the second render as twice as bright, but, as we said before, that is exactly what happens in the real situation. The same behavior in the real world and in the simulation means the simulation (the rendering) is correct :)
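To make the whole pipeline concrete, here is a minimal Python sketch of the decode-shade-encode round trip described above. It is only an illustration, and it uses the pure gamma-2.2 approximation rather than the exact piecewise sRGB curve:

```python
GAMMA = 2.2  # common approximation of the sRGB transfer curve

def srgb_to_linear(v):       # decode: texture value 0..1 -> linear albedo
    return v ** GAMMA

def linear_to_srgb(v):       # encode: linear result -> frame-buffer value
    return v ** (1.0 / GAMMA)

albedo_tex = 0.5                          # mid-gray sRGB texel
albedo = srgb_to_linear(albedo_tex)       # linearize before shading

for light in (1.0, 2.0):                  # original light, then doubled
    linear_out = albedo * light * 1.0     # cos(angle) = 1 for simplicity
    encoded = linear_to_srgb(min(linear_out, 1.0))
    # The display applies ~gamma 2.2 again, so the emitted light is
    # (encoded ** 2.2) == linear_out: doubling the simulated light
    # doubles the physical light leaving the screen.
    print(f"light {light}: write {encoded:.3f} to the frame buffer")
```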

B) In the other case, you want the gradient you see (i.e., perceive) to be linear.

Since you want the non-linear response of the screen to cancel our non-linear visual perception, you can skip gamma correction altogether (as you suggest). Or, more precisely, keep working in linear space with gamma correction, but build your gradient not from consecutive pixel values (1, 2, 3 ... 255), which would be perceived non-linearly (because of Stevens' law), but from values transformed by the inverse of our brightness perception (i.e., applying an exponent of 1/0.5 = 2 to the normalized values, the inverse of the Stevens exponent for brightness). A sketch of this recipe follows.
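Here is a small Python sketch of that recipe. The Stevens brightness exponent of 0.5 and the display gamma of 2.2 are the values assumed in this answer, not universal constants:

```python
# Sketch: a gradient intended to *look* linear. Assumes a Stevens
# brightness exponent of 0.5, so its inverse is 1/0.5 = 2, and a
# display gamma of 2.2.

STEVENS = 0.5
GAMMA = 2.2

steps = 9
for i in range(steps):
    t = i / (steps - 1)                          # 0..1, evenly spaced
    linear = t ** (1.0 / STEVENS)                # pre-distort: t**2
    code = round(255 * linear ** (1.0 / GAMMA))  # gamma-encode for display
    # Display emits (code/255)**GAMMA ~= linear; the eye perceives
    # linear**STEVENS == t, i.e. evenly spaced perceived brightness.
    print(f"perceived {t:.3f} -> frame-buffer value {code}")
```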

In fact, if you look at a gamma-corrected linear gradient, for example at http://scanline.ca/gradients/ , you do not perceive it as linear: you see much more change at the lower intensities than at the higher ones (as expected).

Well, at least that is my current understanding of the topic. I hope it helps someone. And again, please, if this is wrong, I would be very grateful if someone could point it out...


The problem is in performing color calculations. For example, if you mix two colors, you need to do the math on linear intensities. To actually display the correct result, you then need to convert the linear intensities back to gamma-corrected values.
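A minimal Python sketch of that mixing point, again using the gamma-2.2 approximation of the sRGB curve (an assumption for illustration):

```python
GAMMA = 2.2  # gamma-2.2 approximation of the sRGB curve

def to_linear(c):  return (c / 255.0) ** GAMMA
def to_encoded(c): return round(255.0 * c ** (1.0 / GAMMA))

a, b = 0, 255  # mix pure black with pure white

naive   = (a + b) // 2                                   # averaging encoded values
correct = to_encoded((to_linear(a) + to_linear(b)) / 2)  # averaging intensities

print(naive, correct)  # 127 vs ~186: only the linear average is physically right
```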

How your eyes perceive intensity does not matter here. Correct color calculations must follow the physics of optics, which operates on linear brightness values. Once you have calculated a color, you want those brightness values to be displayed by your monitor regardless of how they are perceived, so you must compensate for the fact that the monitor does not directly produce the intensities you ask for.


To actually answer the question: no, there is nothing wrong with your way of looking at it. It would be nice to have a linear frame buffer but, as you observe, 8 bits is not really enough precision for one (the steps in the dark region would become visible).

The fact that 8 bits is so easy to handle is pretty much the only excuse for gamma-compressed frame buffers and color encodings (think of HTML's #888: it would be awkward to have to write something like #333 for medium gray instead of #888).
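A quick Python sketch of that #888 point, assuming a gamma of 2.2 (the exact sRGB curve is piecewise, and #333 in the text is a ballpark figure):

```python
GAMMA = 2.2  # approximation; exact sRGB uses a piecewise curve

# Perceptual medium gray in a gamma-encoded buffer is #888 (0x88 = 136).
encoded = 0x88 / 255.0
linear_intensity = encoded ** GAMMA        # ~0.25 of maximum intensity

# In a *linear* 8-bit frame buffer the same gray would be stored as:
linear_code = round(255 * linear_intensity)
print(hex(linear_code))  # ~0x40, roughly the #333..#444 range
```

In other words, a linear 8-bit encoding would squeeze all the perceptually useful dark tones into the bottom quarter of the code range, which is exactly where banding appears.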

About the monitor: you want to be able to predict its response to your input, and the sRGB standard tells you what that response should be. That is usually all you need to know. Some people think it is somehow more "correct" if the monitor produces "linear" output, which you can approximate by compensating for the monitor's gamma in its settings. I advise you to avoid such a setup: it breaks all the applications that (reasonably and safely) assume the standard gamma, in favor of the few poorly written applications that assume linearity. Do not do this. Fix those applications instead, or get rid of them.
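For reference, this is the sRGB transfer function the answer is referring to: a linear toe for very dark values plus a 2.4-exponent segment, which together behave roughly like a pure 2.2 gamma. A self-contained Python sketch:

```python
def srgb_decode(v):
    """Encoded sRGB value in 0..1 -> linear light intensity."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

def srgb_encode(v):
    """Linear light intensity in 0..1 -> encoded sRGB value."""
    if v <= 0.0031308:
        return 12.92 * v
    return 1.055 * v ** (1 / 2.4) - 0.055

print(srgb_decode(0.5), srgb_encode(srgb_decode(0.5)))  # round-trips to 0.5
```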

