I would like to simulate camera exposure control in my Three.js app, and I'm not sure how I should approach it.
So my question is: how could I implement or simulate exposure control so that I can adjust the camera/viewer to get a clear image of very dimly lit scenes, or avoid overexposing scenes with very bright lights? Ideally it should behave as closely as possible to a real camera (more like adjusting the ISO sensitivity than the aperture or shutter speed).
I am currently using various custom post-processing effects via EffectComposer. Some of them let me adjust brightness, contrast, etc. after rendering. Obviously this is only good for fine-tuning, because blown highlights and crushed shadows cannot be recovered from an image that has already been reduced to standard dynamic range.
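For context, my current setup looks roughly like the following (a simplified sketch using the stock BrightnessContrastShader instead of my actual custom passes; `renderer`, `scene` and `camera` are assumed to exist already):

```js
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { ShaderPass } from 'three/examples/jsm/postprocessing/ShaderPass.js';
import { BrightnessContrastShader } from 'three/examples/jsm/shaders/BrightnessContrastShader.js';

const composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));

// Brightness/contrast is applied to the finished frame, i.e. after the colors
// have already been tone-mapped and clamped to [0, 1], so detail lost in the
// highlights or shadows at render time cannot be brought back here.
const bcPass = new ShaderPass(BrightnessContrastShader);
bcPass.uniforms.brightness.value = 0.1;
bcPass.uniforms.contrast.value = 0.05;
composer.addPass(bcPass);

// In the render loop:
composer.render();
```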
Some approaches that I reviewed:
- Manually adjusting each light to get the desired exposure. This is my current method, and the one I would like to get rid of.
- The post-processing already mentioned above. This is too limited and leads to poor image quality in extreme situations.
- I have seen some examples of HDR rendering where exposure control is possible. They are implemented with custom HDR shaders, but I would like to stick to the standard Three.js pipeline and the standard MeshPhongMaterial shaders as much as possible, because of all the other built-in features such as shadows, etc. I suspect it is not easy to keep those features AND use some custom HDR shader approach? Note that I do not need high dynamic range, only adjustable exposure.
- Faking it by applying some global intensity factor to all the lights in the scene (a naive version is sketched right after this list). This seems hard to do realistically: things like light falloff bring in extra complexity that I don't fully understand, and I don't think it's easy to produce a result that looks the same as raising or lowering the camera's sensitivity.
- WebGLRenderer's built-in exposure setting (toneMappingExposure?). I'm not sure how this is applied internally, e.g. whether it behaves like a real change in sensor sensitivity or is just another multiplier on the final, already clamped image (see the second sketch after this list).
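A naive version of the "intensity factor for all lights" idea would look something like this (my own sketch; `applyFakeExposure` and the use of `userData` are just illustration, not an existing API):

```js
// Scale every light in the scene by a global "exposure" factor.
// factor > 1 brightens (like raising ISO), factor < 1 darkens.
// Note: emissive materials, environment maps and the background are not
// affected, which is one reason this never quite matches a real change
// in camera sensitivity.
function applyFakeExposure(scene, factor) {
  scene.traverse((obj) => {
    if (obj.isLight) {
      // Remember the authored intensity so repeated calls don't compound.
      if (obj.userData.baseIntensity === undefined) {
        obj.userData.baseIntensity = obj.intensity;
      }
      obj.intensity = obj.userData.baseIntensity * factor;
    }
  });
}

// Example: one stop brighter than the authored lighting.
applyFakeExposure(scene, 2.0);
```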
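And for the last point, I'm assuming the property in question is `renderer.toneMappingExposure`. As far as I understand, it multiplies the linear scene color inside the built-in material shaders before the tone-mapping curve, so it should work with MeshPhongMaterial, shadows, etc. A minimal sketch of how it would be used:

```js
import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer({ antialias: true });

// The exposure value scales the linear scene color before the tone-mapping
// curve is applied, inside the standard material shaders (not a post pass).
renderer.toneMapping = THREE.ACESFilmicToneMapping;
renderer.toneMappingExposure = 1.0; // 1 = neutral, <1 darker, >1 brighter

// Hypothetical helper: drive exposure in photographic stops (EV),
// which is closer to how ISO adjustments feel on a real camera.
function setExposureEV(ev) {
  renderer.toneMappingExposure = Math.pow(2, ev);
}

setExposureEV(-1); // one stop darker
```

Whether this plays nicely with my existing EffectComposer passes is part of what I'm unsure about.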