Collection of occlusion algorithms

Occlusion algorithms are needed both in CAD and in the games industry, and I think they differ between the two. My questions:

  • What occlusion algorithms are typically used in each of the two industries?
  • And what are the differences?

I am working on CAD development, and the occlusion algorithm we adopted sets each object's identifier as its color (an integer), renders the scene, and then reads back pixels to find out which objects are visible. Performance is not great, so I am hoping to get some good ideas here. Thanks.
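For concreteness, here is a minimal C++ sketch of the ID-as-color scheme described above; the struct and function names are illustrative, not taken from any particular CAD codebase:

```cpp
#include <cstdint>

// Pack a 24-bit object identifier into an RGB color (one byte per channel),
// so each object can be drawn in a unique flat color for the visibility pass.
struct IdColor { uint8_t r, g, b; };

IdColor encodeId(uint32_t objectId) {
    return { static_cast<uint8_t>( objectId        & 0xFF),
             static_cast<uint8_t>((objectId >> 8)  & 0xFF),
             static_cast<uint8_t>((objectId >> 16) & 0xFF) };
}

// Recover the identifier from a pixel read back from the framebuffer
// (e.g. via glReadPixels); a reserved value such as 0 can mean "background".
uint32_t decodeId(IdColor c) {
    return  static_cast<uint32_t>(c.r)
         | (static_cast<uint32_t>(c.g) << 8)
         | (static_cast<uint32_t>(c.b) << 16);
}
```

The expensive part of this approach is usually the pixel read-back itself, which stalls the pipeline, so reading a reduced-resolution buffer or reading asynchronously tends to help more than tweaking the encoding.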


After reading the answers, I want to clarify that by "occlusion algorithms" I mean occlusion culling: detecting the visible surfaces or objects before sending them down the rendering pipeline.

Using Google, I found an algorithm on Gamasutra. Any other good ideas or insights? Thanks.

+7
algorithm graphics
4 answers

It seems to me that so far most of the answers have only discussed image-space occlusion. I'm not quite sure about CAD, but in games occlusion starts at a much higher level, using BSP trees, octrees and/or portal rendering to quickly identify the objects that fall within the view frustum.
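To illustrate that higher-level step, here is a hedged sketch of testing a bounding sphere against the six view-frustum planes; extracting the planes from the camera matrices and the octree/BSP traversal that would call this are assumed to exist elsewhere:

```cpp
#include <array>

struct Vec3   { float x, y, z; };
struct Plane  { Vec3 n; float d; };      // plane: dot(n, p) + d = 0, n points into the frustum
struct Sphere { Vec3 center; float radius; };

// Returns false if the sphere lies entirely outside any frustum plane,
// i.e. the object can be rejected before it ever reaches the rendering pipeline.
bool sphereInFrustum(const Sphere& s, const std::array<Plane, 6>& frustum) {
    for (const Plane& p : frustum) {
        float dist = p.n.x * s.center.x + p.n.y * s.center.y + p.n.z * s.center.z + p.d;
        if (dist < -s.radius)
            return false;                // completely on the outside of this plane
    }
    return true;                         // potentially visible (conservative test)
}
```

Applied to the bounding volumes of octree or BSP nodes, the same test rejects whole groups of objects at once.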

+3

In games, occlusion is handled behind the scenes by one of two 3D libraries: DirectX or OpenGL. To be more specific, occlusion is performed using the Z-buffer. Each point has a Z component; points that are closer occlude points that are farther away.

The occlusion algorithm is usually performed in hardware by a dedicated 3D graphics chip that implements the DirectX or OpenGL functions. A game program using DirectX or OpenGL draws objects in 3D space, and the OpenGL/DirectX library renders the scene, taking projection and occlusion into account.
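In OpenGL terms, the application only asks for this behavior; the per-pixel comparison happens in hardware. A minimal sketch, assuming a context with a depth buffer has already been created (via GLUT, SDL or your framework of choice):

```cpp
#include <GL/gl.h>

void setupDepthTest() {
    glEnable(GL_DEPTH_TEST);   // enable per-pixel Z comparison in hardware
    glDepthFunc(GL_LESS);      // keep the fragment with the smaller (closer) depth
}

void beginFrame() {
    // Clear both color and depth each frame, otherwise stale Z values
    // from the previous frame would occlude the new one.
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}
```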

+5

The term you should look for is hidden surface removal .

Real-time rendering typically uses one simple method of hidden surface removal: back-face culling. Each polygon has a pre-computed "surface normal". By checking the angle of the surface normal relative to the camera, you know the surface is facing away from the viewer, and therefore it does not need to be rendered.
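As a sketch of the idea: the test boils down to the sign of a dot product, and in OpenGL you can also let the driver cull back faces for you based on winding order (the vector names below are illustrative):

```cpp
#include <GL/gl.h>

struct Vec3 { float x, y, z; };

// Software form of the test: a face whose normal points away from the viewer
// (non-positive dot product with the direction from the face to the camera)
// is back-facing and can be skipped.
bool isFrontFacing(const Vec3& faceNormal, const Vec3& faceToCamera) {
    float dot = faceNormal.x * faceToCamera.x
              + faceNormal.y * faceToCamera.y
              + faceNormal.z * faceToCamera.z;
    return dot > 0.0f;
}

// Hardware form: ask OpenGL to discard back faces based on vertex winding.
void enableBackFaceCulling() {
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);       // discard faces wound away from the camera
    glFrontFace(GL_CCW);       // counter-clockwise winding counts as front-facing
}
```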

Here are some interactive demos and Flash-based explanations.

+2

Hardware per-pixel Z-buffering is perhaps the easiest method; however, in dense scenes you can still end up shading the same pixel many times (overdraw), which can become a performance problem in some situations. You definitely want to make sure you are not rendering or texturing thousands of objects that are simply not visible.
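One common way to avoid that wasted work is a hardware occlusion query: render a cheap proxy such as the bounding box with color and depth writes disabled, ask the GPU how many samples passed the depth test, and draw the real object only if the count is non-zero. A rough sketch against the OpenGL query API; drawBoundingBox and drawObject are placeholders for your own draw calls:

```cpp
#include <GL/glew.h>   // occlusion queries need GL 1.5+ / an extension loader

void drawBoundingBox();  // placeholder: render the object's bounding volume
void drawObject();       // placeholder: render the real geometry

// 'query' is assumed to have been created with glGenQueries beforehand.
void drawIfVisible(GLuint query) {
    // Pass 1: render only the bounding volume, without touching color or depth.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glBeginQuery(GL_SAMPLES_PASSED, query);
    drawBoundingBox();
    glEndQuery(GL_SAMPLES_PASSED);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    // Pass 2: draw the real geometry only if some samples of the proxy were visible.
    // (In practice the result is often read a frame later to avoid stalling the GPU.)
    GLuint samplesPassed = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samplesPassed);
    if (samplesPassed > 0)
        drawObject();
}
```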

I am thinking about this problem in one of my own projects right now, and I found this paper, which gave me several ideas: http://www.cs.tau.ac.il/~dcor/Graphics/adv-slides/short-image-based-culling.pdf

+1
