AV Foundation: AVCaptureVideoPreviewLayer and frame duration

I am using AV Foundation to process frames from the video camera (iPhone 4S, iOS 6.1.2). I set up an AVCaptureSession, AVCaptureDeviceInput and AVCaptureVideoDataOutput as described in the AV Foundation Programming Guide. Everything works as expected, and I receive the frames in the captureOutput:didOutputSampleBuffer:fromConnection: delegate callback.
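
The setup is essentially the boilerplate from the guide; roughly like this (a simplified sketch, error handling omitted):

    // Rough sketch of the capture setup (simplified).
    _captureSession = [[AVCaptureSession alloc] init];

    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if ([_captureSession canAddInput:input]) {
        [_captureSession addInput:input];
    }

    videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    dispatch_queue_t frameQueue = dispatch_queue_create("videoQueue", DISPATCH_QUEUE_SERIAL);
    [videoOutput setSampleBufferDelegate:self queue:frameQueue];
    if ([_captureSession canAddOutput:videoOutput]) {
        [_captureSession addOutput:videoOutput];
    }

    [_captureSession startRunning];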

I also have a preview layer similar to this:

    AVCaptureVideoPreviewLayer *videoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:_captureSession];
    [videoPreviewLayer setFrame:self.view.bounds];
    videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer insertSublayer:videoPreviewLayer atIndex:0];

The thing is, I do not need 30 frames per second for my frame processing, and I cannot process them that quickly anyway. So I use this code to limit the frame duration:

    // videoOutput is the AVCaptureVideoDataOutput set up earlier
    AVCaptureConnection *conn = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
    [conn setVideoMinFrameDuration:CMTimeMake(1, 10)];
    [conn setVideoMaxFrameDuration:CMTimeMake(1, 2)];

This works well and limits the frames received by the captureOutput delegate.

However, it also limits the frame rate of the preview layer, and the preview video becomes very unresponsive.

I understand from the documentation that the frame duration is set independently per connection, and the preview layer indeed has a different AVCaptureConnection. Checking the min/max frame durations on [videoPreviewLayer connection] shows that they are indeed set to the defaults (1/30 and 1/24) and differ from the durations set on the AVCaptureVideoDataOutput connection.
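
For reference, this is roughly how I compare the two connections (logging only):

    // Compare the frame durations of the data output connection and the
    // preview layer connection.
    AVCaptureConnection *outputConn = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
    AVCaptureConnection *previewConn = [videoPreviewLayer connection];

    NSLog(@"output:  min %.3f s, max %.3f s",
          CMTimeGetSeconds(outputConn.videoMinFrameDuration),
          CMTimeGetSeconds(outputConn.videoMaxFrameDuration));
    NSLog(@"preview: min %.3f s, max %.3f s",
          CMTimeGetSeconds(previewConn.videoMinFrameDuration),
          CMTimeGetSeconds(previewConn.videoMaxFrameDuration));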

So, is it possible to limit the frame duration only on the data output connection, and still see a 1/24-1/30 frame duration in the preview? How?

Thanks.

+4
3 answers

While you are right that there are two AVCaptureConnections, that does not mean they can have independently set minimum and maximum frame durations. This is because they share the same physical hardware.

If connection #1 is firing the shutter at (say) five frames/s, with a frame duration of 1/5 s, there is no way for connection #2 to simultaneously fire the shutter 30 times/s with a frame duration of 1/30 s.

To get the desired effect you will need two cameras!

The only way to get close to what you want is to follow the approach described by Kaelin Colclasure in the answer of March 22 (the Cocoa answer below).

Within that approach, though, you have the option of being a bit more sophisticated. For example, you could use a counter to decide which frames to drop, instead of making the thread sleep. You can make that counter respond to the actual frame rate coming through (which you can get from the metadata that arrives in captureOutput:didOutputSampleBuffer:fromConnection: along with the image data, or which you can calculate yourself by manually timing the frames). You can even do a quite reasonable imitation of a longer exposure by compositing frames rather than dropping them, just as many of the slow-shutter apps in the App Store do (leaving aside details such as differing shutter artifacts, there is not really a big difference between one frame scanned over 1/5 s and five frames, each scanned over 1/25 s, composited together).
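
To make the counter idea concrete, here is a minimal sketch; the _frameCounter ivar and the kProcessEveryNthFrame constant are illustrative names, not taken from the question's code:

    // Illustrative only: process every Nth frame and return early for the rest,
    // instead of sleeping the delegate thread.
    static const NSUInteger kProcessEveryNthFrame = 5; // keep roughly 1 in 5 frames

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
      didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
      fromConnection:(AVCaptureConnection *)connection
    {
        if (_frameCounter++ % kProcessEveryNthFrame != 0) {
            return; // drop this frame cheaply; the preview connection is unaffected
        }

        // The presentation timestamp lets you adapt the divisor to the actual
        // incoming frame rate instead of assuming 30 fps.
        CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        NSLog(@"processing frame at t = %.3f s", CMTimeGetSeconds(pts));

        // ... expensive per-frame processing goes here ...
    }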

Yes, it is a bit of work, but you are trying to make one video camera do the job of two, in real time, and that is never going to be easy.

+4

Think of it this way: you are asking the capture device to limit frame durations so that you get better exposure. Fine. But you also want to preview at a higher frame rate. If the device previewed at a higher rate, it would NOT have enough time per frame to achieve that better exposure on the captured frames. It is like asking to see different frames in the preview than the ones that were captured.

I think that even if it were possible, it would make for a poor user experience anyway.

+2

I had the same issue in my Cocoa application (Mac OS X). Here is how I solved it:

First, make sure you process the captured frames on a separate dispatch queue. Also make sure that any frames you are not ready to process are discarded; that is the default behavior, but I set the flag below anyway to document that I depend on it.

    videoQueue = dispatch_queue_create("com.ohmware.LabCam.videoQueue", DISPATCH_QUEUE_SERIAL);
    videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    [videoOutput setAlwaysDiscardsLateVideoFrames:YES]; // the default, set here to document the dependency
    [videoOutput setSampleBufferDelegate:self queue:videoQueue];
    [session addOutput:videoOutput];

Then, when processing the frames in the delegate, you can simply put the thread to sleep for the desired time interval. Frames that arrive while the delegate is asleep are quietly discarded. I implement the optional method for counting dropped frames below as a sanity check; my application never logs dropped frames using this technique.

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
      didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer
      fromConnection:(AVCaptureConnection *)connection;
    {
        OSAtomicAdd64(1, &videoSampleBufferDropCount);
    }

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
      didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
      fromConnection:(AVCaptureConnection *)connection;
    {
        int64_t savedSampleBufferDropCount = videoSampleBufferDropCount;
        if (savedSampleBufferDropCount &&
            OSAtomicCompareAndSwap64(savedSampleBufferDropCount, 0, &videoSampleBufferDropCount)) {
            NSLog(@"Dropped %lld video sample buffers!!!", savedSampleBufferDropCount);
        }

        // NSLog(@"%s", __func__);
        @autoreleasepool {
            CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            CIImage *cameraImage = [CIImage imageWithCVImageBuffer:imageBuffer];
            CIImage *faceImage = [self faceImage:cameraImage];
            dispatch_sync(dispatch_get_main_queue(), ^{
                [_imageView setCIImage:faceImage];
            });
        }

        [NSThread sleepForTimeInterval:0.5]; // Only want ~2 frames/sec.
    }

Hope this helps.

+1
