I am using AVFoundation to process frames from the live camera video (iPhone 4s, iOS 6.1.2). I set up the AVCaptureSession, AVCaptureDeviceInput and AVCaptureVideoDataOutput as described in the AVFoundation Programming Guide. Everything works as expected, and I receive the frames in captureOutput:didOutputSampleBuffer:fromConnection:.
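For context, the setup looks roughly like this (a simplified sketch; names such as _captureSession and videoOutput are just the ones used in the snippets below, and error handling is omitted):

    #import <AVFoundation/AVFoundation.h>

    - (void)setupCaptureSession
    {
        _captureSession = [[AVCaptureSession alloc] init];

        // Camera input
        AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        NSError *error = nil;
        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
        if (input) {
            [_captureSession addInput:input];
        }

        // Video data output; its frames are delivered to the delegate method below
        videoOutput = [[AVCaptureVideoDataOutput alloc] init];
        videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
        [videoOutput setSampleBufferDelegate:self queue:dispatch_queue_create("video.processing", NULL)];
        [_captureSession addOutput:videoOutput];

        [_captureSession startRunning];
    }

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        // frame processing happens here
    }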
I also have a preview layer similar to this:
    AVCaptureVideoPreviewLayer *videoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:_captureSession];
    [videoPreviewLayer setFrame:self.view.bounds];
    videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer insertSublayer:videoPreviewLayer atIndex:0];
The thing is that I do not need 30 frames per second for my frame processing, and I cannot process them that fast anyway. So I use this code to limit the frame duration:
    // videoOutput is the AVCaptureVideoDataOutput set up earlier
    AVCaptureConnection *conn = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
    [conn setVideoMinFrameDuration:CMTimeMake(1, 10)];
    [conn setVideoMaxFrameDuration:CMTimeMake(1, 2)];
This works well and limits the rate of frames received by the captureOutput delegate.
However, it also limits the frames per second of the preview layer, and the preview video becomes very unresponsive.
I understand from the documentation that the frame duration is set independently on each connection, and that the preview layer indeed has a different AVCaptureConnection. Checking the min/max frame durations on [videoPreviewLayer connection] shows that they are indeed set to the defaults (1/30 and 1/24) and differ from the durations set on the AVCaptureVideoDataOutput connection.
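For what it's worth, this is roughly how I check it (assuming the videoPreviewLayer created above is still in scope):

    AVCaptureConnection *previewConnection = [videoPreviewLayer connection];
    CMTime minDuration = previewConnection.videoMinFrameDuration;
    CMTime maxDuration = previewConnection.videoMaxFrameDuration;
    NSLog(@"preview min frame duration: %lld/%d, max frame duration: %lld/%d",
          minDuration.value, minDuration.timescale,
          maxDuration.value, maxDuration.timescale);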
So, is it possible to limit the frame duration only on the frame-processing output and still keep a 1/24 to 1/30 frame duration on the preview? How?
Thanks.