Three.js - What is the most efficient way to render a WebGLRenderTarget texture?

Problem

I've reached the point in my project where I render to WebGLRenderTargets and use them as textures in my main scene. It works, but it seems I'm doing far more work than necessary. My generated textures only need to be 64x64, yet because I use my main renderer (sized to window width by window height) for both passes, the WebGLRenderTargets appear to be rendered at a much higher resolution than needed.

I may be wrong, but I believe this increases both the work needed to draw each render target and the work needed to subsequently draw that large texture onto the grid.

I tried using a second renderer, but I get the following error when trying to use a WebGLRenderTarget in renderer A after rendering to it with renderer B:

WebGL: INVALID_OPERATION: bindTexture: object not from this context

Example

For reference, you can see my abstracted page here (warning: due to the very problem I'm asking about, this page may lag). I run a simplex-noise function on a plane in a secondary scene, divide it into sections via camera placement, and then apply those sections to the fragments via WebGLRenderTargets so that they form seamless but separate pieces.

Question

My assumption is that rendering at the full window size is much less efficient than rendering at the smaller target size. If so, what would be the best solution? Is there a way to achieve this optimization?

Answer

The size parameters passed to renderer.setSize() are used by the renderer to set the viewport when rendering to the screen.

When the renderer renders to an off-screen render target, the size of the rendered texture is determined by renderTarget.width and renderTarget.height.

So the answer to your question is that you can use the same renderer for both; there is no inefficiency.
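A minimal sketch of this setup, assuming a recent three.js API (renderer.setRenderTarget(); older releases instead passed the target as a third argument to renderer.render()). The scene and camera names here are hypothetical, not taken from the question's page:

```javascript
import * as THREE from 'three';

// One renderer, sized for the screen.
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);

// A small off-screen target: the rendered texture is 64x64
// regardless of the renderer's on-screen size.
const target = new THREE.WebGLRenderTarget(64, 64);

// Hypothetical scenes/cameras for the noise pass and the main pass.
const noiseScene = new THREE.Scene();
const noiseCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0.1, 10);
const mainScene = new THREE.Scene();
const mainCamera = new THREE.PerspectiveCamera(
  75, window.innerWidth / window.innerHeight, 0.1, 100);

// Use the target's texture as a map in the main scene.
const material = new THREE.MeshBasicMaterial({ map: target.texture });
mainScene.add(new THREE.Mesh(new THREE.PlaneGeometry(1, 1), material));

function animate() {
  requestAnimationFrame(animate);
  // Pass 1: render the noise scene into the 64x64 target.
  renderer.setRenderTarget(target);
  renderer.render(noiseScene, noiseCamera);
  // Pass 2: render the main scene to the screen at full size.
  renderer.setRenderTarget(null);
  renderer.render(mainScene, mainCamera);
}
animate();
```

Because both passes go through the same renderer, the target's texture stays in one WebGL context and the "object not from this context" error cannot occur.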

