I am trying to use Brad Larson's answer to process video efficiently on iOS. That answer is about getting the rendered output into a pixel buffer without using glReadPixels. From what I understood, you create a pixel buffer from the AVAssetWriterInputPixelBufferAdaptor's pixelBufferPool, bind it to an OpenGL texture as the render target, and then after each rendering cycle just call:
    CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
    writerAdaptor?.append(buffer, withPresentationTime: currentTime)
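Spelled out slightly, my understanding of that append step looks like this; buffer, writerAdaptor, and currentTime are properties of my controller, and the readiness check is my own addition, not something the answer shows:

    // The append step as I understand it. The readiness check is my
    // addition; without it, append can fail while the input is busy.
    CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
    if writerAdaptor?.assetWriterInput.isReadyForMoreMediaData == true {
        writerAdaptor?.append(buffer, withPresentationTime: currentTime)
    }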
However, when I try to do this, the output video is black. The original answer shows code snippets, not a complete setup. I also looked at GPUImage, but it surprisingly uses glReadPixels: https://github.com/BradLarson/GPUImage/blob/167b0389bc6e9dc4bb0121550f91d8d5d6412c53/framework/Source/Mac/GPUImageMovie0101
Here is a slightly simplified version of the code I'm trying to get working:
1) Start capturing from the camera
    override func viewDidLoad() {
        super.viewDidLoad()

        // Start the camera recording.
        session = AVCaptureSession()
        session.sessionPreset = AVCaptureSessionPreset1920x1080

        // Input setup.
        device = AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo).first as? AVCaptureDevice
        input = try? AVCaptureDeviceInput(device: device)
        session.addInput(input)

        // Output setup.
        let output = AVCaptureVideoDataOutput()
        output.alwaysDiscardsLateVideoFrames = true
        output.videoSettings = [
            kCVPixelBufferPixelFormatTypeKey as AnyHashable: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
        ]
        session.addOutput(output)
        output.setSampleBufferDelegate(self, queue: .main)

        setUpWriter()
        session.startRunning()
    }
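I deliver frames on the main queue for simplicity; a dedicated serial queue would be the more usual choice, along these lines (the queue label is arbitrary):

    // Sketch: delivering frames on a dedicated serial queue instead of .main.
    let videoQueue = DispatchQueue(label: "video-frames")
    output.setSampleBufferDelegate(self, queue: videoQueue)

The caveat is that the EAGLContext would then have to be made current on that queue before any GL calls.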
2) Set up the video writer
    func setUpWriter() {
        // writer: AVAssetWriter
        // input: AVAssetWriterInput
        let attributes: [String: Any] = [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey as String: 400,
            kCVPixelBufferHeightKey as String: 720,
            AVVideoScalingModeKey as String: AVVideoScalingModeFit,
            kCVPixelBufferOpenGLESCompatibilityKey as String: true,
            kCVPixelBufferIOSurfacePropertiesKey as String: [:],
        ]
        let adaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: input,
            sourcePixelBufferAttributes: attributes)

        writer.add(input)
        writer.startWriting()
        writer.startSession(atSourceTime: currentTime)

        // pixelBufferPool is nil until writing has started.
        setUpTextureCache(in: adaptor.pixelBufferPool!)
    }
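The writer and input properties themselves are created along these lines; the output URL and video settings here are placeholders rather than my exact values:

    // Sketch: creating the writer and its input. URL and settings are placeholders.
    let url = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("out.mov")
    writer = try! AVAssetWriter(outputURL: url, fileType: AVFileTypeQuickTimeMovie)
    input = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: [
        AVVideoCodecKey: AVVideoCodecH264,
        AVVideoWidthKey: 400,
        AVVideoHeightKey: 720,
    ])
    input.expectsMediaDataInRealTime = true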
3) Configure the texture cache, as in https://stackoverflow.com/a/3/29/29/29
    func setUpTextureCache(in pool: CVPixelBufferPool) {
        var renderTarget: CVPixelBuffer? = nil
        var renderTexture: CVOpenGLESTexture? = nil
        var coreVideoTextureCache: CVOpenGLESTextureCache? = nil

        // Create the texture cache tied to the EAGL context.
        var err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, nil, context, nil, &coreVideoTextureCache)
        if err != kCVReturnSuccess {
            print("Error at CVOpenGLESTextureCacheCreate \(err)")
        }

        // Pull a pixel buffer out of the adaptor's pool to render into.
        err = CVPixelBufferPoolCreatePixelBuffer(nil, pool, &renderTarget)
        if err != kCVReturnSuccess {
            print("Error at CVPixelBufferPoolCreatePixelBuffer \(err)")
        }

        // Wrap the pixel buffer in an OpenGL texture.
        err = CVOpenGLESTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault,
            coreVideoTextureCache!,
            renderTarget!,
            nil,
            GLenum(GL_TEXTURE_2D),
            GL_RGBA,
            GLsizei(400),
            GLsizei(720),
            GLenum(GL_BGRA),
            GLenum(GL_UNSIGNED_BYTE),
            0,
            &renderTexture)
        if err != kCVReturnSuccess {
            print("Error at CVOpenGLESTextureCacheCreateTextureFromImage \(err)")
        }

        // Attach the texture as the framebuffer's color target.
        glBindTexture(CVOpenGLESTextureGetTarget(renderTexture!), CVOpenGLESTextureGetName(renderTexture!))
        glTexParameterf(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GLfloat(GL_CLAMP_TO_EDGE))
        glTexParameterf(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GLfloat(GL_CLAMP_TO_EDGE))
        glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_TEXTURE_2D), CVOpenGLESTextureGetName(renderTexture!), 0)

        self.buffer = renderTarget
    }
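The glFramebufferTexture2D call assumes a framebuffer is already generated and bound; that happens earlier in my setup, roughly like this (framebuffer is a GLuint property, and the completeness check is just defensive):

    // Sketch: creating and binding the framebuffer that the texture is
    // attached to. `framebuffer` is a GLuint property of the controller.
    glGenFramebuffers(1, &framebuffer)
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
    // ... after attaching the texture:
    if glCheckFramebufferStatus(GLenum(GL_FRAMEBUFFER)) != GLenum(GL_FRAMEBUFFER_COMPLETE) {
        print("Framebuffer is incomplete")
    }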
4) Add the drawn frame to the recorded video
    func screenshot(_ frame: CVPixelBuffer) {
        // The rest of the OpenGL program is stripped out (see below);
        // this should at least clear the render target to green.
        glClearColor(0, 1, 0, 1)
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
        glFinish()

        // Hand the rendered pixel buffer to the writer.
        CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
        writerAdaptor?.append(buffer, withPresentationTime: currentTime)
    }
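To rule out a silent GL failure I also check for errors after drawing; a trivial sketch, assuming the EAGLContext is current:

    // Sketch: checking for GL errors after the draw calls.
    let glError = glGetError()
    if glError != GLenum(GL_NO_ERROR) {
        print("OpenGL error: \(glError)")
    }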
5) For each buffer coming from the camera, process it and add the result to the video
    extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
        func captureOutput(_ captureOutput: AVCaptureOutput?,
                           didOutputSampleBuffer sampleBuffer: CMSampleBuffer?,
                           from connection: AVCaptureConnection?) {
            guard let sampleBuffer = sampleBuffer,
                  let frame = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
            screenshot(frame)
        }
    }
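currentTime, used by the writer above, is taken from the sample buffer's timestamp; roughly:

    // Sketch: deriving the presentation time from the incoming sample
    // buffer before screenshot(_:) appends the frame.
    currentTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)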
I stripped out most of the OpenGL program for simplicity. Even without it, I expect the result to be green, since I call glClearColor(0, 1, 0, 1), yet the recorded video is entirely black. What am I missing?
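To check whether the clear ever reaches the pixel buffer at all, I can read the first pixel back on the CPU; a minimal sketch, assuming the BGRA layout from the pool attributes (dumpFirstPixel is just a hypothetical debugging helper):

    // Sketch: inspecting the first BGRA pixel of the render target on the CPU.
    func dumpFirstPixel(of buffer: CVPixelBuffer) {
        CVPixelBufferLockBaseAddress(buffer, .readOnly)
        if let base = CVPixelBufferGetBaseAddress(buffer) {
            let pixel = base.assumingMemoryBound(to: UInt8.self)
            // A green clear should read b=0, g=255, r=0, a=255.
            print("b=\(pixel[0]) g=\(pixel[1]) r=\(pixel[2]) a=\(pixel[3])")
        }
        CVPixelBufferUnlockBaseAddress(buffer, .readOnly)
    }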