Voxel optimization suggestions (e.g. Minecraft)?
As a fun project (and to get my Minecraft-addicted son excited about programming), I am building a Minecraft-like 3D engine in C# (.NET 4.5.1) with OpenGL and GLSL 4.x.
Right now my world is built from chunks. Chunks are stored in a dictionary keyed by a 64-bit value, X | Z<<32. This allows an "endless" world in which chunks can be cached in and out.
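One thing worth double-checking in that key: with plain 32-bit ints, `Z << 32` is a no-op (the shift count is taken modulo 32 in both C# and Java), and once widened to 64 bits a negative X sign-extends over Z's half of the key. A minimal sketch of a safe pack/unpack (in Java for illustration; the OP's C# integer operators behave the same, and `ChunkKey` is an illustrative name):

```java
public class ChunkKey {
    // Pack two signed 32-bit chunk coordinates into one 64-bit key.
    // Masking X with 0xFFFFFFFFL is essential: a naive x | (z << 32)
    // on 32-bit ints shifts by 32 % 32 == 0 bits, and a negative x
    // would sign-extend into Z's half of the key once widened.
    public static long pack(int x, int z) {
        return (x & 0xFFFFFFFFL) | ((long) z << 32);
    }

    public static int unpackX(long key) { return (int) key; }
    public static int unpackZ(long key) { return (int) (key >> 32); }
}
```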
Each chunk consists of an array of 16x16x16 block segments. Starting at level 0, the bedrock, a chunk can grow as high as you want (unlike Minecraft, where, as far as I know, the limit is 256).
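The vertical stacking of 16x16x16 segments can be sketched like this (Java for illustration; `SegmentIndex` and the flattened index layout are my own illustrative names, not the OP's):

```java
public class SegmentIndex {
    public static final int SIZE = 16;  // segment edge length in blocks

    // Which 16x16x16 segment (stack level) a world-space Y falls into;
    // floorDiv keeps this correct for negative Y as well.
    public static int level(int worldY) {
        return Math.floorDiv(worldY, SIZE);
    }

    // Flatten local (x, y, z) within a segment into a block-array index.
    public static int blockIndex(int x, int y, int z) {
        return (y * SIZE + z) * SIZE + x;
    }
}
```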
Chunks are queued for generation on a separate thread when they come into view and need to be rendered. This means a chunk may not show up immediately; in practice you will hardly notice it. NOTE: I do not wait for them to be generated; they simply are not visible right away.
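This "generate off-thread, show when ready" pattern can be sketched with a single worker and a skip-until-done lookup (Java for illustration; `ChunkLoader`, `generate`, and the flat `int[]` block array are hypothetical stand-ins for the OP's actual types):

```java
import java.util.*;
import java.util.concurrent.*;

class ChunkLoader {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private final Map<Long, Future<int[]>> pending = new ConcurrentHashMap<>();

    // Queue generation once per chunk key; duplicates are ignored.
    void request(long key) {
        pending.computeIfAbsent(key, k -> worker.submit(() -> generate(k)));
    }

    // Returns the block data if ready, or null -> chunk is simply
    // not drawn this frame (no blocking on the render thread).
    int[] tryGet(long key) {
        Future<int[]> f = pending.get(key);
        try {
            return (f != null && f.isDone()) ? f.get() : null;
        } catch (Exception e) {
            return null;
        }
    }

    void shutdown() { worker.shutdown(); }

    // Placeholder for real terrain generation of one 16x16x16 segment.
    private int[] generate(long key) { return new int[16 * 16 * 16]; }
}
```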
When a chunk is to be rendered for the first time, a VBO (glGenBuffers, GL_STREAM_DRAW, etc.) is created for it containing only the visible/exterior faces (neighboring chunks are checked as well). [This means a chunk potentially needs to be re-tessellated when a neighbor changes.] During tessellation, the opaque faces are emitted first for each segment, then the transparent ones. Each segment knows where it starts within that vertex array and how many vertices it has, both for opaque faces and for transparent faces.
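The per-segment bookkeeping described above (opaque vertices appended first, then transparent, each segment remembering its start and count so the renderer can issue per-segment draw ranges) might look roughly like this; `emitFaces` is a placeholder for the real visible-face tessellation, and all names are illustrative:

```java
import java.util.*;

class ChunkMesh {
    final List<Float> vertices = new ArrayList<>();
    int[] opaqueFirst, opaqueCount, transFirst, transCount;

    void tessellate(int segments) {
        opaqueFirst = new int[segments]; opaqueCount = new int[segments];
        transFirst  = new int[segments]; transCount  = new int[segments];
        for (int s = 0; s < segments; s++) {            // opaque pass first
            opaqueFirst[s] = vertexCount();
            emitFaces(s, false);
            opaqueCount[s] = vertexCount() - opaqueFirst[s];
        }
        for (int s = 0; s < segments; s++) {            // then transparent
            transFirst[s] = vertexCount();
            emitFaces(s, true);
            transCount[s] = vertexCount() - transFirst[s];
        }
    }

    int vertexCount() { return vertices.size() / 3; }   // 3 floats per vertex

    // Placeholder: a real mesher would emit only visible/exterior faces,
    // checking neighbors. Here we pretend every segment emits 3 vertices.
    void emitFaces(int segment, boolean transparent) {
        for (int i = 0; i < 9; i++) vertices.add(0f);
    }
}
```

With these ranges, each pass becomes a sequence of `glDrawArrays(first, count)` calls over one VBO.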
Textures are sampled from a texture array.
When rendering:
- First, I take the bounding box of the view frustum and map it onto the chunk grid. Using this, I select every chunk that is inside the frustum and within a certain distance of the camera.
- Then I sort the chunks by distance to the camera.
- After that, I determine the ranges (index, length) of the segments that are actually visible. Now I know exactly which segments (and which vertex ranges) are at least partially inside the frustum. The only superfluous segments left are those hidden behind mountains or, occasionally, deep underground.
- Then I start rendering: first the opaque faces [face culling and depth test enabled, alpha test and blending disabled], front to back, using the known vertex ranges. Then I render the transparent faces back to front [blending enabled].
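The first culling step above, projecting the frustum's bounding box onto the chunk grid, can be sketched as follows (Java for illustration; `ChunkGrid` and `candidates` are illustrative names, and a real implementation would follow this with the exact frustum and distance tests):

```java
class ChunkGrid {
    static final int CHUNK = 16;  // chunk footprint in blocks (X and Z)

    // Every chunk whose (X, Z) cell overlaps the frustum's world-space
    // bounding rectangle is a candidate; floorDiv handles negative coords.
    static long[] candidates(float minX, float maxX, float minZ, float maxZ) {
        int cx0 = Math.floorDiv((int) Math.floor(minX), CHUNK);
        int cx1 = Math.floorDiv((int) Math.floor(maxX), CHUNK);
        int cz0 = Math.floorDiv((int) Math.floor(minZ), CHUNK);
        int cz1 = Math.floorDiv((int) Math.floor(maxZ), CHUNK);
        long[] keys = new long[(cx1 - cx0 + 1) * (cz1 - cz0 + 1)];
        int i = 0;
        for (int x = cx0; x <= cx1; x++)
            for (int z = cz0; z <= cz1; z++)
                keys[i++] = (x & 0xFFFFFFFFL) | ((long) z << 32);
        return keys;
    }
}
```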
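One convenient detail of the two passes: a single sort by squared distance serves both, since the list can be walked forwards for the opaque pass (front to back, so early depth rejection kills hidden fragments) and backwards for the transparent pass (back to front, so blending composites correctly). A sketch, in Java, over hypothetical `{x, z}` chunk coordinate pairs:

```java
import java.util.*;

class RenderOrder {
    // Sort chunk (x, z) positions by squared distance to the camera;
    // squared distance avoids a sqrt and preserves the ordering.
    static void sortByDistance(int[][] chunkCoords, float camX, float camZ) {
        Arrays.sort(chunkCoords, Comparator.comparingDouble((int[] c) ->
            (c[0] - camX) * (c[0] - camX) + (c[1] - camZ) * (c[1] - camZ)));
    }
}
```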
Now... does anyone know a way to improve on this while still allowing dynamic generation of an infinite world? I currently reach ~80 fps @ 1920x1080 and ~120 fps @ 1024x768 (screenshots: http://i.stack.imgur.com/t4k30.jpg , http://i.stack.imgur.com/prV8X.jpg ) on a 2.7 GHz i7 laptop with an ATI HD8600M graphics card. I think it should be possible to push the frame rate higher, and I will need the headroom, because I want to add entities with AI, sound, and effects such as terrain relief and reflections. Could occlusion queries help me? I cannot quite picture how, given the nature of the segments. I have already minimized object creation, so there is no "new" all over the place. Also, since performance does not change between debug and release builds, I don't think the bottleneck is the C# code itself; it seems inherent to the approach.
Edit: I was thinking about using GL_SAMPLE_ALPHA_TO_COVERAGE, but it doesn't seem to work:
gl.Enable(GL.DEPTH_TEST);
// GL_SAMPLE_ALPHA_TO_COVERAGE only takes effect on a multisampled
// framebuffer with multisampling enabled, and it is meant to replace
// blending rather than run alongside it, so GL.BLEND stays disabled:
gl.Enable(GL.MULTISAMPLE);
gl.Enable(GL.SAMPLE_ALPHA_TO_COVERAGE);