Display camera stream on GLSurfaceView via SurfaceTexture

I am trying to display a camera stream in GLSurfaceView through SurfaceTexture passed to OpenGL ES 2.0 shaders.

I took inspiration from this post.

The image is complete, but it does not display correctly on my tablet. The screen seems to be divided into a 2x2 grid: the image is displayed in the upper-left quadrant, while the other three quadrants are black.

I suspect the problem is related to my use of the transformation matrix returned by the following call sequence:

    updateTexImage();
    getTransformMatrix(...);

I pass this matrix to the vertex shader to generate the texture coordinates for the fragment shader.

vertex shader:

    attribute vec3 aPosition;
    uniform mat4 uMvpTransform;
    // matrix retrieved by getTransformMatrix(...)
    uniform mat4 uTexMatTransform;
    varying vec2 vTexCoord;

    void main(void) {
        gl_Position = uMvpTransform * vec4(aPosition.xyz, 1);
        vec4 l_tex = uTexMatTransform * vec4(aPosition.xyz, 1);
        vTexCoord = l_tex.xy;
    }

fragment shader:

    #extension GL_OES_EGL_image_external : require
    varying mediump vec2 vTexCoord;
    uniform samplerExternalOES uSampler;

    void main(void) {
        mediump vec4 l_tex = texture2D(uSampler, vTexCoord);
        gl_FragColor = l_tex;
    }

The texture is attached to the following square:

    // Image container
    GLfloat l_vertices[] = {
        -1.0f,  1.0f, 0.0f,
        -1.0f, -1.0f, 0.0f,
         1.0f, -1.0f, 0.0f,
         1.0f,  1.0f, 0.0f
    };
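As a side note on the symptom: the quad's positions span [-1, 1], so feeding aPosition through the texture transform (as the vertex shader above does) produces texture coordinates that mostly fall outside [0, 1]. A plain-Java sketch (illustrative only, assuming an identity texture transform) shows that just one corner lands in the valid range:

```java
public class TexCoordRangeDemo {
    // Stand-in for (uTexMatTransform * vec4(aPosition, 1)).xy with an
    // identity transform: the position passes through unchanged.
    static float[] texCoordFromPosition(float x, float y) {
        return new float[] { x, y };
    }

    static boolean inUnitRange(float[] t) {
        return t[0] >= 0f && t[0] <= 1f && t[1] >= 0f && t[1] <= 1f;
    }

    public static void main(String[] args) {
        float[][] corners = { { -1f, 1f }, { -1f, -1f }, { 1f, -1f }, { 1f, 1f } };
        for (float[] c : corners) {
            System.out.println("(" + c[0] + ", " + c[1] + ") in [0,1]^2? "
                    + inUnitRange(texCoordFromPosition(c[0], c[1])));
        }
        // Only the (1, 1) corner maps into the valid texture range,
        // which is consistent with the image showing in one quadrant only.
    }
}
```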

Has anyone achieved something similar?

Edited November 3, 2012:

Corrected vertex shader:

    attribute vec3 aPosition;
    attribute vec2 aTexCoord;
    uniform mat4 uMvpTransform;
    // matrix retrieved by getTransformMatrix(...)
    uniform mat4 uTexMatTransform;
    varying vec2 vTexCoord;

    void main(void) {
        gl_Position = uMvpTransform * vec4(aPosition.xyz, 1);
        vec4 l_tex = uTexMatTransform * vec4(aTexCoord.xy, 0, 1);
        vTexCoord = l_tex.xy;
    }

with texture coordinates from:

    // Texture
    GLfloat l_texCoord[] = {
        0.0f, 1.0f,
        0.0f, 0.0f,
        1.0f, 0.0f,
        1.0f, 1.0f
    };

2 answers

I have successfully used SurfaceTexture to draw camera frames onto a custom OpenGL texture without using the transformation matrix provided by Android.

Just try defining your vertex, index, and texture-coordinate arrays the way you would for normal textured drawing. For example:

    const GLfloat Vertices[] = {
         0.5, -0.5, 0,
         0.5,  0.5, 0,
        -0.5,  0.5, 0,
        -0.5, -0.5, 0
    };

    const GLubyte Indices[] = {
        0, 1, 2,
        2, 3, 0
    };

    const GLfloat Textures[] = {
        1., 0.,
        0., 0.,
        0., 1.,
        1., 1.
    };
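If it helps, here is a plain-Java sketch of how glDrawElements walks those arrays: each triple of indices selects three of the quad's corners, giving two triangles that share the 0-2 diagonal (class and method names are illustrative):

```java
public class IndexDemo {
    // Quad corners from the answer above (x, y, z per vertex)
    static final float[] VERTICES = {
         0.5f, -0.5f, 0f,
         0.5f,  0.5f, 0f,
        -0.5f,  0.5f, 0f,
        -0.5f, -0.5f, 0f
    };
    // Two triangles sharing the 0-2 diagonal, in the order glDrawElements reads them
    static final int[] INDICES = { 0, 1, 2, 2, 3, 0 };

    // Fetch corner k (0..2) of triangle t (0..1)
    static float[] corner(int t, int k) {
        int base = INDICES[t * 3 + k] * 3;
        return new float[] { VERTICES[base], VERTICES[base + 1], VERTICES[base + 2] };
    }

    public static void main(String[] args) {
        for (int t = 0; t < 2; t++) {
            System.out.print("triangle " + t + ":");
            for (int k = 0; k < 3; k++) {
                float[] p = corner(t, k);
                System.out.print(" (" + p[0] + ", " + p[1] + ")");
            }
            System.out.println();
        }
    }
}
```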

You should be able to use the SurfaceTexture the same way you use a regular texture.

If you want to do some kind of three-dimensional projection, there is a good article on how to create a suitable MVP matrix, which you can then multiply by the position in the vertex shader.
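As a minimal illustration of that multiplication (the matrix values here are made up; a real app would compute projection * view * model, e.g. with android.opengl.Matrix), applying a column-major MVP matrix to a position works like this in plain Java:

```java
public class MvpDemo {
    // Multiply a column-major 4x4 matrix by a column vector,
    // the same layout android.opengl.Matrix and GLSL use.
    static float[] multiplyMV(float[] m, float[] v) {
        float[] r = new float[4];
        for (int row = 0; row < 4; row++) {
            r[row] = m[row] * v[0] + m[4 + row] * v[1]
                   + m[8 + row] * v[2] + m[12 + row] * v[3];
        }
        return r;
    }

    public static void main(String[] args) {
        // A toy "MVP": scale x by 0.5, e.g. to correct a 2:1 aspect ratio
        float[] mvp = {
            0.5f, 0f, 0f, 0f,
            0f,   1f, 0f, 0f,
            0f,   0f, 1f, 0f,
            0f,   0f, 0f, 1f
        };
        float[] clip = multiplyMV(mvp, new float[] { 1f, 1f, 0f, 1f });
        System.out.println("(" + clip[0] + ", " + clip[1] + ")"); // prints (0.5, 1.0)
    }
}
```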


Since I had the same problem, and although the question is a bit dated, it may be worth discussing why the transformation matrix should be used at all.

The transformation matrix maps the specific coordinate set used by the SurfaceTexture (or, more precisely, by the current video source streaming into the SurfaceTexture) to standard OpenGL texture coordinates.

The reason it should definitely be used is that although Startibartfast's answer works in one or more specific setups and is easy and tempting to implement, it can produce strange rendering errors when the same program runs on different devices, depending, for example, on each platform's video-driver implementation.

In my case, for example, the transformation matrix simply flips the content upside down (I use SurfaceTexture in combination with MediaPlayer on a Nexus 7) instead of producing the result described by Fabien R.

The correct way to use the matrix is shown in Fabien R's edit of November 3, 2012, which I repeat here with some minor touches (for example, using the .st swizzle) and a small fix for the int/float mismatch:

    attribute vec2 uv;
    varying vec2 vTexCoord;
    uniform mat4 transformMatrix;

    vTexCoord = (transformMatrix * vec4(uv, 0.0, 1.0)).st;
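To make the flip concrete: one value getTransformMatrix() is commonly observed to return for a vertically flipped source is a column-major matrix that maps v to 1 - v. The exact matrix varies by device and source, so treat these numbers as an example only:

```java
public class SurfaceTexMatrixDemo {
    // Column-major 4x4 matrix times column vector (the GLSL convention)
    static float[] multiplyMV(float[] m, float[] v) {
        float[] r = new float[4];
        for (int row = 0; row < 4; row++) {
            r[row] = m[row] * v[0] + m[4 + row] * v[1]
                   + m[8 + row] * v[2] + m[12 + row] * v[3];
        }
        return r;
    }

    public static void main(String[] args) {
        // Example transform: pure vertical flip, column-major layout.
        // Maps (u, v) to (u, 1 - v).
        float[] flip = {
            1f,  0f, 0f, 0f,
            0f, -1f, 0f, 0f,
            0f,  0f, 1f, 0f,
            0f,  1f, 0f, 1f
        };
        float[] st = multiplyMV(flip, new float[] { 0f, 0f, 0f, 1f });
        // The bottom-left texture corner becomes the top-left one
        System.out.println("(0, 0) -> (" + st[0] + ", " + st[1] + ")"); // prints (0, 0) -> (0.0, 1.0)
    }
}
```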

Hope this helps.

