Render YpCbCr iPhone 4 Camera Frame to an OpenGL ES 2.0 Texture in iOS 4.3

I'm trying to render a native planar camera image to an OpenGL ES 2.0 texture in iOS 4.3 on an iPhone 4. The texture, however, renders all black. My camera is configured like so:

[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange] forKey:(id)kCVPixelBufferPixelFormatTypeKey]]; 
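For reference, that settings call usually sits inside a larger AVCaptureVideoDataOutput setup. A minimal sketch, assuming an AVCaptureSession named `session` and a delegate already exist elsewhere (those names are not from the question):

 AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
 [videoOutput setAlwaysDiscardsLateVideoFrames:YES];
 [videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];

 // `self` is assumed to implement AVCaptureVideoDataOutputSampleBufferDelegate.
 [videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
 if ([session canAddOutput:videoOutput])
 {
     [session addOutput:videoOutput];
 }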

and I pass the pixel data into my texture as follows:

 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_RGB_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE, CVPixelBufferGetBaseAddress(cameraFrame)); 
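One thing worth ruling out first: CVPixelBufferGetBaseAddress() is only valid while the buffer is locked. A sketch of the lock/unlock bracketing around that upload, keeping the question's glTexImage2D call unchanged (the delegate method is the standard AVFoundation callback; texture creation and binding are assumed to happen elsewhere):

 - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
 {
     CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
     CVPixelBufferLockBaseAddress(cameraFrame, 0);

     size_t bufferWidth = CVPixelBufferGetWidth(cameraFrame);
     size_t bufferHeight = CVPixelBufferGetHeight(cameraFrame);

     glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
     glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)bufferWidth, (GLsizei)bufferHeight, 0, GL_RGB_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE, CVPixelBufferGetBaseAddress(cameraFrame));

     // ... draw with the texture here ...

     CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
 }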

My fragment shader:

 varying highp vec2 textureCoordinate;

 uniform sampler2D videoFrame;

 void main()
 {
     lowp vec4 color;
     color = texture2D(videoFrame, textureCoordinate);
     lowp vec3 convertedColor = vec3(-0.87075, 0.52975, -1.08175);
     convertedColor += 1.164 * color.g; // Y
     convertedColor += vec3(0.0, -0.391, 2.018) * color.b; // U
     convertedColor += vec3(1.596, -0.813, 0.0) * color.r; // V
     gl_FragColor = vec4(convertedColor, 1.0);
 }

and my vertex shader:

 attribute vec4 position;
 attribute vec4 inputTextureCoordinate;

 varying vec2 textureCoordinate;

 void main()
 {
     gl_Position = position;
     textureCoordinate = inputTextureCoordinate.xy;
 }

This works fine when I work with a BGRA image and my fragment shader only does

 gl_FragColor = texture2D(videoFrame, textureCoordinate); 

What am I missing here? Thanks!

+7
2 answers

OK. We have a working success here. The key was passing the Y and the UV planes as two separate textures to the fragment shader. Here is the final shader:

 #ifdef GL_ES
 precision mediump float;
 #endif

 varying vec2 textureCoordinate;

 uniform sampler2D videoFrame;
 uniform sampler2D videoFrameUV;

 const mat3 yuv2rgb = mat3(
     1, 0, 1.2802,
     1, -0.214821, -0.380589,
     1, 2.127982, 0
 );

 void main()
 {
     vec3 yuv = vec3(
         1.1643 * (texture2D(videoFrame, textureCoordinate).r - 0.0625),
         texture2D(videoFrameUV, textureCoordinate).r - 0.5,
         texture2D(videoFrameUV, textureCoordinate).a - 0.5
     );
     vec3 rgb = yuv * yuv2rgb;
     gl_FragColor = vec4(rgb, 1.0);
 }

You will need to create your textures as follows:

 int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
 int bufferWidth = CVPixelBufferGetWidth(cameraFrame);

 glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
 glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, bufferWidth, bufferHeight, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0));

 glBindTexture(GL_TEXTURE_2D, videoFrameTextureUV);
 glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, bufferWidth/2, bufferHeight/2, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 1));
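One caveat worth noting: camera frames are generally not power-of-two sized, and OpenGL ES 2.0 only samples non-power-of-two textures when wrapping is clamped to edge and no mipmaps are used, so each texture (presumably created earlier with glGenTextures) needs parameters along these lines:

 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);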

and then pass them as follows:

 glActiveTexture(GL_TEXTURE0);
 glBindTexture(GL_TEXTURE_2D, videoFrameTexture);

 glActiveTexture(GL_TEXTURE1);
 glBindTexture(GL_TEXTURE_2D, videoFrameTextureUV);

 glActiveTexture(GL_TEXTURE0);

 glUniform1i(videoFrameUniform, 0);
 glUniform1i(videoFrameUniformUV, 1);
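For completeness, videoFrameUniform and videoFrameUniformUV would have been fetched once after linking; a sketch, assuming `program` is the linked shader program (that name is not from the answer):

 videoFrameUniform = glGetUniformLocation(program, "videoFrame");
 videoFrameUniformUV = glGetUniformLocation(program, "videoFrameUV");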

Boy, I'm relieved!

PS: The values for the yuv2rgb matrix came from http://en.wikipedia.org/wiki/YUV , and I cribbed code from http://www.ogre3d.org/forums/viewtopic.php?f=5&t=25877 to figure out how to get the right YUV values to begin with.

+10

Your code appears to be attempting to convert a 32-bit colour in 444-plus-unused-byte form into RGBA. That's not going to work too well. For one thing, I don't know of anything that outputs "YUVA".

Also, I think the returned alpha channel is 0 for BGRA camera output, not 1, so I'm not sure why that works (IIRC, to convert it to a CGImage you need to use AlphaNoneSkipLast).

The 420 bi-planar output is structured something like this:

  • A header telling you where the planes are (used by CVPixelBufferGetBaseAddressOfPlane() and friends)
  • Y plane: height × bytes_per_row_1 × 1 byte
  • Cb,Cr plane: height/2 × bytes_per_row_2 × 2 bytes (2 bytes per 2x2 block).

bytes_per_row_1 is approximately width and bytes_per_row_2 is approximately width/2, but you'll want to use CVPixelBufferGetBytesPerRowOfPlane() for robustness (you might also want to check the results of ...GetHeightOfPlane and ...GetWidthOfPlane).
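A quick sketch of querying those per-plane values (the buffer must stay locked while the base addresses are in use; the variable names here are illustrative):

 CVPixelBufferLockBaseAddress(cameraFrame, 0);

 size_t planeCount = CVPixelBufferGetPlaneCount(cameraFrame); // 2 for 420 bi-planar
 size_t yWidth = CVPixelBufferGetWidthOfPlane(cameraFrame, 0);
 size_t yHeight = CVPixelBufferGetHeightOfPlane(cameraFrame, 0);
 size_t yBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(cameraFrame, 0);
 size_t uvBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(cameraFrame, 1);
 uint8_t *yPlane = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0);
 uint8_t *uvPlane = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 1);

 // ... use the planes, then unlock:
 CVPixelBufferUnlockBaseAddress(cameraFrame, 0);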

You might have luck treating it as a 1-component width × height texture and a 2-component width/2 × height/2 texture. You'll probably want to check the bytes per row and handle the case where it isn't simply width × number-of-components (although this is probably true for most video modes). AIUI, you'll also want to flush the GL context before calling CVPixelBufferUnlockBaseAddress().

Alternatively, you can copy all the data into memory in the format you expect (optimizing that copy loop might be a bit tricky). The advantage of copying is that you don't need to worry about anything accessing the memory after you've unlocked the pixel buffer.
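A minimal sketch of that copy for the Y plane, assuming a locked buffer (the Cb/Cr plane would be repacked the same way using its own width, height, and row size):

 size_t width = CVPixelBufferGetWidthOfPlane(cameraFrame, 0);
 size_t height = CVPixelBufferGetHeightOfPlane(cameraFrame, 0);
 size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(cameraFrame, 0);
 uint8_t *src = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0);

 // Strip any row padding so the plane is tightly packed for glTexImage2D.
 uint8_t *packed = (uint8_t *)malloc(width * height);
 for (size_t row = 0; row < height; row++)
 {
     memcpy(packed + row * width, src + row * bytesPerRow, width);
 }

 // `packed` can be uploaded (and freed) after the pixel buffer is unlocked.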

+2
