One direct solution would be to use Screen-Space Ambient Occlusion (SSAO) approaches, where you estimate occlusion by sampling the neighborhood of each pixel. A related technique is SSDO, which is aimed directly at creating shadows in screen space. You will likely get a lot of artifacts in complex scenes, but the advantage is that SSDO also adds some global-illumination effects.
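A minimal sketch of the neighborhood-sampling idea behind SSAO, assuming you have a view-space position and normal G-buffer; the kernel offsets, bias value, and the `fetchViewPos` / `fetchViewNormal` / `projectToPixel` helpers are placeholders for whatever your renderer provides:

```cpp
#include <glm/glm.hpp>
#include <vector>
#include <functional>

// Occlusion estimate for one pixel: sample points in a small hemisphere
// around the pixel's view-space position and count how many end up behind
// the geometry stored in the G-buffer.
float ambientOcclusion(
    int x, int y,
    const std::vector<glm::vec3>& kernel,        // random unit offsets
    float radius,                                // sampling radius, view units
    const std::function<glm::vec3(int, int)>& fetchViewPos,
    const std::function<glm::vec3(int, int)>& fetchViewNormal,
    const std::function<glm::ivec2(const glm::vec3&)>& projectToPixel)
{
    const glm::vec3 p = fetchViewPos(x, y);
    const glm::vec3 n = fetchViewNormal(x, y);

    float occlusion = 0.0f;
    for (glm::vec3 dir : kernel) {
        // Keep samples in the hemisphere oriented along the surface normal.
        if (glm::dot(dir, n) < 0.0f) dir = -dir;
        const glm::vec3 samplePos = p + dir * radius;

        const glm::ivec2 px = projectToPixel(samplePos);
        const float sceneZ = fetchViewPos(px.x, px.y).z;

        // Assuming a view space looking down -z (larger z = closer): if the
        // stored geometry is closer to the camera than our sample point, the
        // sample is occluded; a small bias avoids self-shadowing acne.
        if (sceneZ > samplePos.z + 0.025f)
            occlusion += 1.0f;
    }
    return occlusion / float(kernel.size());
}
```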
I think most games / engines try to overcome such problems with preprocessing steps.
Static lighting: if your light sources do not move (lights inside buildings, ...), compute lightmaps or some additional vertex attributes that store the incoming light.
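A sketch of the per-vertex variant, assuming an offline bake step; `segmentBlocked` is a hypothetical scene query (e.g. a BVH raycast) that reports whether the path from a vertex to a light is blocked, so lights behind the wall simply bake to zero:

```cpp
#include <glm/glm.hpp>
#include <functional>
#include <vector>

struct PointLight { glm::vec3 position; glm::vec3 color; float radius; };
struct Vertex     { glm::vec3 position; glm::vec3 normal; glm::vec3 bakedLight; };

// Offline pass: accumulate every static light into a per-vertex attribute.
// At runtime the shader only reads 'bakedLight' instead of evaluating lights.
void bakeVertexLighting(std::vector<Vertex>& verts,
                        const std::vector<PointLight>& lights,
                        const std::function<bool(const glm::vec3&,
                                                 const glm::vec3&)>& segmentBlocked)
{
    for (Vertex& v : verts) {
        v.bakedLight = glm::vec3(0.0f);
        for (const PointLight& l : lights) {
            const glm::vec3 toLight = l.position - v.position;
            const float dist = glm::length(toLight);
            if (dist > l.radius)
                continue;                                   // out of range
            if (segmentBlocked(v.position + v.normal * 0.01f, l.position))
                continue;                                   // wall in the way

            const float ndotl = glm::max(glm::dot(v.normal, toLight / dist), 0.0f);
            const float atten = 1.0f - dist / l.radius;     // simple linear falloff
            v.bakedLight += l.color * ndotl * atten;
        }
    }
}
```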
Adjust the light: just tweak the falloff distance, intensity, or position until there is no noticeable bleeding.
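The falloff distance is easiest to tune when the attenuation curve reaches exactly zero at a configurable radius; this is one possible curve among many, not a required one:

```cpp
#include <algorithm>

// Attenuation that reaches exactly zero at 'radius'. Shrinking 'radius'
// (or lowering 'intensity') for a problematic light is the simplest way to
// keep it from reaching geometry on the other side of a nearby wall.
float attenuate(float distance, float radius, float intensity)
{
    const float t = std::clamp(1.0f - distance / radius, 0.0f, 1.0f);
    return intensity * t * t; // quadratic ease-out; pick any curve you like
}
```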
Some ideas of my own: depending on how you model a light (sphere / disc?), you could compute a clipped shape for each light. Pixels behind the wall would then not lie inside the new light volume and would not be lit. If you cannot shape your light volume arbitrarily, you could probably add one or two planes per light that define the walls. These planes could be left undefined for most lights and only pushed to the GPU for lights near a wall. During the lighting pass you would then check, for each pixel, on which side of those planes it lies for the corresponding light.
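A sketch of that plane-side test, written CPU-side for clarity; in a deferred renderer the same few lines would sit in the light volume's fragment shader, with the planes uploaded as uniforms only for lights near a wall. The plane convention (positive signed distance = lit side) and the attenuation model are my assumptions:

```cpp
#include <glm/glm.hpp>
#include <vector>

// Clipping plane in the form dot(normal, p) + d == 0; points with a positive
// signed distance count as being on the lit side of the wall.
struct ClipPlane { glm::vec3 normal; float d; };

struct PointLight {
    glm::vec3 position;
    glm::vec3 color;
    float radius;
    std::vector<ClipPlane> planes; // zero, one or two; most lights carry none
};

// Light contribution for a shaded point, or black if the point lies behind
// one of the light's wall planes.
glm::vec3 shadePoint(const glm::vec3& worldPos, const glm::vec3& worldNormal,
                     const PointLight& light)
{
    for (const ClipPlane& pl : light.planes)
        if (glm::dot(pl.normal, worldPos) + pl.d < 0.0f)
            return glm::vec3(0.0f);            // behind the wall: not lit

    const glm::vec3 toLight = light.position - worldPos;
    const float dist = glm::length(toLight);
    if (dist > light.radius)
        return glm::vec3(0.0f);                // out of range

    const float ndotl = glm::max(glm::dot(worldNormal, toLight / dist), 0.0f);
    const float atten = 1.0f - dist / light.radius;
    return light.color * ndotl * atten;
}
```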