I have an iOS application where I read video frames with AVFoundation, load each frame into an OpenGL texture, and display it on the screen in a GLKView.
I have no problem copying the video buffer for each frame:
CVPixelBufferRef buffer = [someAVPlayerItemVideoOutput copyPixelBufferForItemTime:itemTime itemTimeForDisplay:nil];
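For context, here is roughly how that per-frame copy is driven from a CADisplayLink callback (a minimal sketch; the videoOutput ivar and the displayLinkFired: / uploadFrame: names are just placeholders for my actual code):

// Sketch of the per-frame fetch from a CADisplayLink callback.
// `videoOutput` is the AVPlayerItemVideoOutput; `uploadFrame:` stands in for
// either of the two upload paths shown below.
- (void)displayLinkFired:(CADisplayLink *)displayLink
{
    CMTime itemTime = [videoOutput itemTimeForHostTime:CACurrentMediaTime()];
    if (![videoOutput hasNewPixelBufferForItemTime:itemTime]) {
        return; // no new frame yet; keep showing the previous texture
    }

    CVPixelBufferRef buffer = [videoOutput copyPixelBufferForItemTime:itemTime
                                               itemTimeForDisplay:nil];
    if (buffer != NULL) {
        [self uploadFrame:buffer]; // upload and release the buffer as shown below
    }
}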
But when I upload the frames to OpenGL, I get different results depending on which method I use. My understanding from watching Apple's WWDC 2011 Session 414 video (Advances in OpenGL ES) is that the new CVOpenGLESTextureCacheCreateTextureFromImage function should be faster than just uploading the pixel buffer via glTexImage2D. However, when using a shader program with a lot of calculations, I get the same frame rate regardless of which method I use. Also, CVOpenGLESTextureCacheCreateTextureFromImage actually gives me a weird “block” distortion, while glTexImage2D doesn't. The distortion looks as if random blocks on the screen are not updated with the latest frame.
Here is the code I use to create the texture using the CVOpenGLESTexture method:
CVOpenGLESTextureRef videoFrameTexture = NULL;
CVPixelBufferLockBaseAddress(buffer, 0);

// Wrap the pixel buffer in a texture via the texture cache (no explicit copy).
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                             videoTextureCache,
                                             buffer,
                                             NULL,
                                             GL_TEXTURE_2D,
                                             GL_RGBA,
                                             CVPixelBufferGetWidth(buffer),
                                             CVPixelBufferGetHeight(buffer),
                                             GL_BGRA,
                                             GL_UNSIGNED_BYTE,
                                             0,
                                             &videoFrameTexture);

glBindTexture(GL_TEXTURE_2D, CVOpenGLESTextureGetName(videoFrameTexture));

CVBufferRelease(buffer);
CFRelease(videoFrameTexture);
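For completeness, the videoTextureCache used above is created once, with the GLKView's EAGLContext, along these lines (a sketch; glkView and the error handling are just illustrative):

// One-time texture cache setup; the GLKView's EAGLContext must be the same
// context the frames are later rendered with.
CVOpenGLESTextureCacheRef videoTextureCache = NULL;
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault,
                                            NULL,
                                            glkView.context, // (__bridge void *)glkView.context on older SDKs
                                            NULL,
                                            &videoTextureCache);
if (err != kCVReturnSuccess) {
    NSLog(@"CVOpenGLESTextureCacheCreate failed: %d", err);
}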
And here is the alternative way I use to load the frame via glTexImage2D:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, CVPixelBufferGetBytesPerRow(buffer)/4, CVPixelBufferGetHeight(buffer), 0, GL_RGBA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(buffer));
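Spelled out with the base-address lock/unlock and texture parameters (a sketch; videoTextureName is a texture id I generate once with glGenTextures), that path looks like this:

CVPixelBufferLockBaseAddress(buffer, kCVPixelBufferLock_ReadOnly);

glBindTexture(GL_TEXTURE_2D, videoTextureName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// bytes-per-row / 4 is used as the width so that any row padding added by
// Core Video doesn't shear the image (OpenGL ES 2.0 has no GL_UNPACK_ROW_LENGTH).
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             (GLsizei)(CVPixelBufferGetBytesPerRow(buffer) / 4),
             (GLsizei)CVPixelBufferGetHeight(buffer),
             0, GL_RGBA, GL_UNSIGNED_BYTE,
             CVPixelBufferGetBaseAddress(buffer));

CVPixelBufferUnlockBaseAddress(buffer, kCVPixelBufferLock_ReadOnly);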
Why am I getting this weird distortion, and why can't I improve the frame rate, when using CVOpenGLESTextureCacheCreateTextureFromImage()?