Problem
I've reached the point in my project where I render to WebGLRenderTargets and use them as textures in my main scene. It works, but it looks like I'm doing far more work than necessary. My generated textures only need to be 64x64, but because I use my main renderer (window width by window height) for both, the WebGLRenderTargets end up being drawn at a much higher resolution than that.
Maybe I'm wrong, but I believe this increases both the work needed to draw each RenderTarget and the work needed to subsequently draw that oversized texture onto the mesh.
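To make the setup concrete, here is a minimal sketch of what I mean (noiseScene, noiseCamera, and mainScene are placeholder names, and it uses the current setRenderTarget API):

import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight); // sized for the main scene

// The generated texture only needs to be 64x64.
const target = new THREE.WebGLRenderTarget(64, 64);

// Draw the secondary scene into the render target...
renderer.setRenderTarget(target);
renderer.render(noiseScene, noiseCamera);
renderer.setRenderTarget(null);

// ...then use the result as a texture in the main scene.
const material = new THREE.MeshBasicMaterial({ map: target.texture });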
I tried using a second renderer, but I get this error when I try to use a WebGLRenderTarget in renderer A after rendering to it with renderer B:
WebGL: INVALID_OPERATION: bindTexture: object not from this context
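Roughly, the failing two-renderer version looks like this (placeholder names again; as far as I can tell, each WebGLRenderer creates its own WebGL context, so resources from one context cannot be bound in the other):

const rendererA = new THREE.WebGLRenderer(); // full-window renderer for the main scene
const rendererB = new THREE.WebGLRenderer(); // small renderer for the generated textures
rendererB.setSize(64, 64);

const target = new THREE.WebGLRenderTarget(64, 64);

rendererB.setRenderTarget(target);
rendererB.render(noiseScene, noiseCamera); // the target's texture now belongs to B's context
rendererB.setRenderTarget(null);

// Any main-scene material that uses target.texture now makes rendererA
// try to bind a texture from B's context, producing the error above.
rendererA.render(mainScene, mainCamera);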
Example
For reference, you can see my abstracted page here (warning: due to the very problem I'm asking about, this page may lag). I run a simplex noise function on a plane in a secondary scene and divide it into sections using camera placement, then apply the sections to the corresponding mesh faces via WebGLRenderTargets so that they form seamless but separate pieces.
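The sectioning step is roughly this (a sketch with made-up section counts and placeholder names): an orthographic camera is moved over each region of the noise plane, and that region is rendered into its own render target.

const sectionCamera = new THREE.OrthographicCamera(-0.5, 0.5, 0.5, -0.5, 0.1, 10);
const targets = [];

for (let x = 0; x < 4; x++) {
  for (let y = 0; y < 4; y++) {
    const target = new THREE.WebGLRenderTarget(64, 64);
    sectionCamera.position.set(x, y, 1); // slide the camera over one 1x1 section of the plane
    sectionCamera.lookAt(x, y, 0);
    renderer.setRenderTarget(target);
    renderer.render(noiseScene, sectionCamera);
    targets.push(target);
  }
}
renderer.setRenderTarget(null);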
Question
Am I right in assuming that rendering at the full window size is much less efficient than rendering at the smaller texture size? If so, what do you think would be the best solution? Is there a way to achieve this optimization?
rrowland