iOS - video created from texture cache is black

I am trying to use Brad Larson's answer to efficiently process video on iOS. That answer describes how to efficiently get at the rendered pixels without using glReadPixels: you create a pixel buffer from the AVAssetWriterInputPixelBufferAdaptor's pixelBufferPool, bind it to an OpenGL texture through a texture cache, and then after each rendering cycle just call

    CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
    writerAdaptor?.append(buffer, withPresentationTime: currentTime)

However, when I try to do this, the output video is black. The original answer only shows code snippets, not a complete setup. I also looked at GPUImage, but surprisingly it uses glReadPixels: https://github.com/BradLarson/GPUImage/blob/167b0389bc6e9dc4bb0121550f91d8d5d6412c53/framework/Source/Mac/GPUImageMovie0101
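My understanding of the intended per-frame cycle, as a sketch (drawScene() is a placeholder for my actual rendering; buffer, framebuffer and writerAdaptor are set up as in steps 2 and 3 below):

    func renderAndAppend(at time: CMTime) {
        // Render into the framebuffer whose color attachment is the
        // texture backed by the pool pixel buffer.
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
        glClearColor(0, 1, 0, 1)
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
        drawScene() // placeholder for the actual drawing

        // Wait for the GPU to finish writing into the pixel buffer
        // before handing it to the writer.
        glFinish()

        CVPixelBufferLockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
        writerAdaptor?.append(buffer, withPresentationTime: time)
        CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
    }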

Here is a slightly simplified version of the code I'm running:

1) Start recording a camera

    override func viewDidLoad() {
        super.viewDidLoad()

        // Start the camera recording session.
        session = AVCaptureSession()
        session.sessionPreset = AVCaptureSessionPreset1920x1080

        // Input setup.
        device = AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo).first as? AVCaptureDevice
        input = try? AVCaptureDeviceInput(device: device)
        session.addInput(input)

        // Output setup.
        let output = AVCaptureVideoDataOutput()
        output.alwaysDiscardsLateVideoFrames = true
        output.videoSettings = [
            kCVPixelBufferPixelFormatTypeKey as AnyHashable: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
        ]
        session.addOutput(output)
        output.setSampleBufferDelegate(self, queue: .main)

        setUpWriter()
    }
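Not shown above: the session only delivers frames once camera permission is granted (plus an NSCameraUsageDescription entry in Info.plist) and startRunning() is called. A minimal sketch of that part:

    AVCaptureDevice.requestAccess(forMediaType: AVMediaTypeVideo) { granted in
        guard granted else { return }
        DispatchQueue.main.async {
            self.session.startRunning()
        }
    }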

2) Set up the video writer

    func setUpWriter() {
        // writer: AVAssetWriter
        // input: AVAssetWriterInput
        let attributes: [String: Any] = [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey as String: 400,
            kCVPixelBufferHeightKey as String: 720,
            AVVideoScalingModeKey as String: AVVideoScalingModeFit,
            kCVPixelBufferOpenGLESCompatibilityKey as String: true,
            kCVPixelBufferIOSurfacePropertiesKey as String: [:],
        ]
        let adaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: input,
            sourcePixelBufferAttributes: attributes)
        writerAdaptor = adaptor

        writer.add(input)
        writer.startWriting()
        writer.startSession(atSourceTime: currentTime)

        // The adaptor's pixel buffer pool only exists once writing has started.
        setUpTextureCache(in: adaptor.pixelBufferPool!)
    }
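The writer and input themselves (elided as comments above) are created along these lines; the output URL and video settings here are just what I use, not part of the question:

    func makeWriter() throws -> (AVAssetWriter, AVAssetWriterInput) {
        // Illustrative output location; any writable file URL works.
        let outputURL = URL(fileURLWithPath: NSTemporaryDirectory())
            .appendingPathComponent("output.mp4")
        try? FileManager.default.removeItem(at: outputURL)

        let writer = try AVAssetWriter(outputURL: outputURL, fileType: AVFileTypeMPEG4)
        let settings: [String: Any] = [
            AVVideoCodecKey: AVVideoCodecH264,
            AVVideoWidthKey: 400,
            AVVideoHeightKey: 720,
        ]
        let input = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: settings)
        input.expectsMediaDataInRealTime = true
        return (writer, input)
    }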

3) Configure the texture cache, as in the answer linked above

    func setUpTextureCache(in pool: CVPixelBufferPool) {
        var renderTarget: CVPixelBuffer? = nil
        var renderTexture: CVOpenGLESTexture? = nil
        var coreVideoTextureCache: CVOpenGLESTextureCache? = nil

        var err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, nil, context, nil, &coreVideoTextureCache)
        if err != kCVReturnSuccess {
            print("Error at CVOpenGLESTextureCacheCreate \(err)")
        }

        err = CVPixelBufferPoolCreatePixelBuffer(nil, pool, &renderTarget)
        if err != kCVReturnSuccess {
            print("Error at CVPixelBufferPoolCreatePixelBuffer \(err)")
        }

        err = CVOpenGLESTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault,
            coreVideoTextureCache!,
            renderTarget!,
            nil,
            GLenum(GL_TEXTURE_2D),
            GL_RGBA,
            GLsizei(400),
            GLsizei(720),
            GLenum(GL_BGRA),
            GLenum(GL_UNSIGNED_BYTE),
            0,
            &renderTexture)
        if err != kCVReturnSuccess {
            print("Error at CVOpenGLESTextureCacheCreateTextureFromImage \(err)")
        }

        glBindTexture(CVOpenGLESTextureGetTarget(renderTexture!), CVOpenGLESTextureGetName(renderTexture!))
        glTexParameterf(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GLfloat(GL_CLAMP_TO_EDGE))
        glTexParameterf(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GLfloat(GL_CLAMP_TO_EDGE))

        // Attach the pool-backed texture as the framebuffer's color target.
        glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                               GLenum(GL_TEXTURE_2D), CVOpenGLESTextureGetName(renderTexture!), 0)

        self.buffer = renderTarget
    }
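This snippet assumes a framebuffer has already been generated and bound on the same EAGLContext before glFramebufferTexture2D runs; in my project that looks roughly like this (framebuffer is my own property):

    var framebuffer: GLuint = 0

    func setUpFramebuffer() {
        EAGLContext.setCurrent(context)
        glGenFramebuffers(1, &framebuffer)
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
        glViewport(0, 0, 400, 720)
        // After setUpTextureCache attaches the texture, this should report complete:
        // glCheckFramebufferStatus(GLenum(GL_FRAMEBUFFER)) == GLenum(GL_FRAMEBUFFER_COMPLETE)
    }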

4) Add the drawn frame to the recorded video

    func screenshot(_ frame: CVPixelBuffer) {
        glClearColor(0, 1, 0, 1) // the output should at least be green

        // Do stuff, draw triangles from the frame, etc. Even with nothing
        // drawn, I'm at least expecting the output to be green.

        CVPixelBufferLockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
        currentTime = CMTimeAdd(currentTime, frameLength)
        writerAdaptor?.append(buffer, withPresentationTime: currentTime)
    }
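For completeness, my understanding is that the elided drawing section has to bind the framebuffer from step 3 and actually clear it before the append; glClearColor on its own only sets the clear color. Roughly:

    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
    glViewport(0, 0, 400, 720)
    glClearColor(0, 1, 0, 1)
    glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
    // ... draw triangles from the frame ...
    glFinish() // make sure rendering reaches the pixel buffer before appending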

5) For each buffer coming from the camera, process it and append the result to the video

    extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
        func captureOutput(_ captureOutput: AVCaptureOutput?,
                           didOutputSampleBuffer sampleBuffer: CMSampleBuffer?,
                           from connection: AVCaptureConnection?) {
            guard let sampleBuffer = sampleBuffer,
                  let frame = CMSampleBufferGetImageBuffer(sampleBuffer) else {
                return
            }
            screenshot(frame)
        }
    }
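One assumption I am making: since the delegate fires on the main queue, the EAGLContext has to be current on that queue before any GL calls, e.g.:

    if EAGLContext.current() !== context {
        EAGLContext.setCurrent(context)
    }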

I stripped part of the OpenGL program out for simplicity. Even without it, I would at least expect the output to be green, since I call glClearColor(0, 1, 0, 1).

ios gpuimage opengl-es video-processing


See similar question:

Faster alternative to glReadPixels in iPhone OpenGL ES 2.0
