Interesting question.
Background material
Having multiple threads with independent contexts is very common. Every app that uses hardware-accelerated View rendering has a GLES context on its main thread, so any app that uses GLSurfaceView (or rolls its own EGL with SurfaceView or TextureView and an independent rendering thread) is actively using multiple contexts.
Every TextureView has a SurfaceTexture inside it, so any app that uses multiple TextureViews has multiple SurfaceTextures on a single thread. (A framework bug at one point caused problems with multiple TextureViews, but that was a high-level issue, not a driver problem.)
SurfaceTexture, a.k.a. GLConsumer, doesn't do much processing. When a frame arrives from the source (in your case, the camera), it uses some EGL functions to "wrap" the buffer as an "external" texture. You can't do these EGL operations without an EGL context to work in, which is why the SurfaceTexture needs to be attached to one, and why you can't latch a new frame into the texture if the wrong context is current. You can see from the implementation of updateTexImage() that it does a lot of arcane things with buffer queues and textures and fences, but none of it requires copying pixel data. The only system resource you're really tying up is RAM, which is not insignificant if you're capturing high-resolution images.
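Roughly, the consumer side looks like this (a minimal sketch, assuming an EGL context is already current on the calling thread; the class and method names are just for illustration):

```java
import android.graphics.SurfaceTexture;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;

// Minimal sketch, assuming an EGL context is already current on the
// calling thread. The class name is illustrative.
final class ExternalTextureSketch {
    // Create an "external" texture and a SurfaceTexture bound to it.
    static SurfaceTexture create() {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
        return new SurfaceTexture(tex[0]);
    }

    // Call with the same EGL context current. No pixel data is copied;
    // the texture is simply pointed at the newest gralloc buffer.
    static void latchFrame(SurfaceTexture st) {
        st.updateTexImage();
    }
}
```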
Contexts
An EGL context can be moved between threads, but it can only be "current" on one thread at a time. Simultaneous access from multiple threads would require a lot of undesirable synchronization. A given thread has only one "current" context. The OpenGL API evolved from single-threaded with global state to multi-threaded, and rather than rewrite the API, they simply shoved the state into thread-local storage... hence the notion of "current".
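A minimal sketch of a context hand-off between threads, assuming the display/surface/context objects came from the usual EGL14 setup (the helper class name is made up):

```java
import android.opengl.EGL14;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;

// Illustrative helper: a context must be released by the thread that
// holds it before another thread can make it current.
final class EglHandOff {
    // Run on the thread that currently owns the context.
    static void release(EGLDisplay display) {
        EGL14.eglMakeCurrent(display, EGL14.EGL_NO_SURFACE,
                EGL14.EGL_NO_SURFACE, EGL14.EGL_NO_CONTEXT);
    }

    // Run on the thread that takes ownership next, after synchronizing
    // with the releasing thread (e.g., via a Handler message).
    static void claim(EGLDisplay display, EGLSurface surface, EGLContext ctx) {
        EGL14.eglMakeCurrent(display, surface, surface, ctx);
    }
}
```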
It's possible to create EGL contexts that share certain things between them, including textures, but if those contexts are on different threads, you have to be very careful when the textures are updated. Grafika provides a nice example of getting it wrong.
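For reference, sharing is established when the second context is created, by passing the first as the share_context argument. A sketch, assuming display and config come from the usual eglInitialize/eglChooseConfig boilerplate:

```java
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;

// Sketch: the new context shares objects (including textures) with
// "shareWith". The class name is made up.
final class SharedContextSketch {
    static EGLContext makeSharedContext(EGLDisplay display, EGLConfig config,
                                        EGLContext shareWith) {
        int[] attribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
        return EGL14.eglCreateContext(display, config, shareWith, attribs, 0);
    }
}
```

Even with sharing enabled, an update made in one context generally isn't visible in another until the writer flushes and the reader rebinds the texture, which is exactly the cross-thread coordination that's easy to get wrong.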
SurfaceTextures are built on top of BufferQueues, which have a producer-consumer structure. The fun thing about SurfaceTextures is that they include both sides, so you can feed data in one side and pull it out the other within a single process (unlike, say, SurfaceView, where the consumer is far away). Like all Surface-related stuff, they're built on top of Binder IPC, so you can feed the Surface from one thread and safely updateTexImage() on a different thread (or process). The API is arranged such that you create the SurfaceTexture on the consumer side (your process) and then pass a reference to the producer (e.g., the camera, which runs primarily in the mediaserver process).
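In code, the consumer-side setup might look roughly like this (texId is assumed to be an external texture created in the current EGL context, as in the earlier sketch):

```java
import android.graphics.SurfaceTexture;
import android.view.Surface;

// Sketch of the consumer-side setup; the class name is made up.
final class ConsumerSideSketch {
    private final SurfaceTexture surfaceTexture;  // consumer side (keep a reference)
    private final Surface producerSurface;        // hand this to the camera

    ConsumerSideSketch(int texId) {
        surfaceTexture = new SurfaceTexture(texId);
        // Fires on an arbitrary thread; post updateTexImage() to the
        // thread that owns the EGL context rather than calling it here.
        surfaceTexture.setOnFrameAvailableListener(tex -> { /* post to GL thread */ });
        // The Surface is the producer side; it can cross the Binder
        // boundary to the camera in mediaserver.
        producerSurface = new Surface(surfaceTexture);
    }

    Surface producerEndpoint() { return producerSurface; }
}
```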
Implementation
You'll incur a bunch of overhead if you keep connecting and disconnecting BufferQueues. So if you want to have three SurfaceTextures receiving buffers, you'll need to connect all three to Camera2's output and let all of them receive the "buffer broadcast". Then you just updateTexImage() on each in turn. Since the SurfaceTexture's BufferQueue runs in "async" mode, you should always get the newest frame with each call, with no need to "drain" the queue.
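A sketch of that wiring with Camera2, assuming the CameraDevice and the three SurfaceTexture-backed Surfaces already exist (error handling omitted, names made up):

```java
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.view.Surface;
import java.util.List;

// Sketch: connect all of the Surfaces up front and target every one of
// them in a repeating request, so each frame lands in all BufferQueues.
final class MultiOutputSketch {
    static void start(CameraDevice camera, List<Surface> outputs)
            throws CameraAccessException {
        camera.createCaptureSession(outputs,
                new CameraCaptureSession.StateCallback() {
                    @Override
                    public void onConfigured(CameraCaptureSession session) {
                        try {
                            CaptureRequest.Builder b = camera.createCaptureRequest(
                                    CameraDevice.TEMPLATE_PREVIEW);
                            for (Surface s : outputs) {
                                b.addTarget(s);  // every output gets each frame
                            }
                            session.setRepeatingRequest(b.build(), null, null);
                        } catch (CameraAccessException ignored) { }
                    }

                    @Override
                    public void onConfigureFailed(CameraCaptureSession session) { }
                }, null);
    }
}
```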
This arrangement wasn't really possible until the Lollipop-era BufferQueue multi-output changes and the introduction of Camera2, so I don't know if anyone has tried this approach before.
The SurfaceTextures would all be attached to the same EGL context, ideally in a thread other than the View UI thread, so you don't have to fight over what's current. If you want to access the texture from a second context in a different thread, you will need to use the SurfaceTexture attach/detach API calls, which explicitly support this approach (a sketch follows the quoted docs below):
A new OpenGL ES texture object is created and populated with the SurfaceTexture image frame that was current at the time of the last call to detachFromGLContext().
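A minimal sketch of that hand-off, with the two methods running on different threads, each with its own context current, and some synchronization between them (the method names are made up):

```java
import android.graphics.SurfaceTexture;

// Sketch of the detach/attach hand-off between two threads/contexts.
final class AttachDetachSketch {
    // Thread A, context A current: latch the newest frame, then free
    // the SurfaceTexture from this context.
    static void handOff(SurfaceTexture st) {
        st.updateTexImage();
        st.detachFromGLContext();
        // ... signal thread B that the SurfaceTexture is free ...
    }

    // Thread B, context B current, after thread A's signal: a new
    // texture object is created and populated with the latched frame.
    static void adopt(SurfaceTexture st, int texIdInContextB) {
        st.attachToGLContext(texIdInContextB);
        // ... render with texIdInContextB, then detach again if the
        // SurfaceTexture moves back ...
    }
}
```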
Remember that switching EGL contexts is a consumer-side operation and has no bearing on the connection to the camera, which is a producer-side operation. The overhead involved in moving a SurfaceTexture between contexts should be minor (less than updateTexImage()), but you need to take the usual steps to ensure synchronization when communicating between threads.
It's a shame ImageReader lacks a getTimestamp() call, as that would greatly simplify matching up buffers from the camera.
Conclusion
Using multiple SurfaceTextures to buffer output is possible but tricky. I can see a potential advantage to a ping-pong buffer approach, where one ST is used to receive a frame in thread/context A while the other ST is used for rendering in thread/context B, but since you're operating in real time I don't think the extra buffering adds value unless you're trying to smooth out the timing.
As always, reading the Android System-Level Graphics Architecture doc is recommended.