Getting a still image from the video output on iPhone?

I am writing an application that shows statistics about the lighting conditions seen by the iPhone camera. I take an image every second and run calculations on it.

To capture an image, I use the following method:

-(void) captureNow
{
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in captureManager.stillImageOutput.connections)
    {
        for (AVCaptureInputPort *port in [connection inputPorts])
        {
            if ([[port mediaType] isEqual:AVMediaTypeVideo] )
            {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }

    [captureManager.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
     {   
         NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
         latestImage = [[UIImage alloc] initWithData:imageData];
     }];
}

However, captureStillImageAsynchronouslyFromConnection:completionHandler: plays the shutter sound on the phone, which is not suitable for my application, since it will be capturing images constantly.

I have read that it is impossible to disable this sound effect. Instead, I want to capture frames from the phone's video input:

AVCaptureDeviceInput *newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self backFacingCamera] error:nil];

and somehow turn them into UIImage objects.

How can I achieve this? I don't really know how AVFoundation works; I downloaded some sample code and modified it for my own purposes.


Don't use the still image output for this. Instead, grab frames from the phone's video camera and process the data in the pixel buffers you receive as an AVCaptureVideoDataOutputSampleBufferDelegate.

You can set up a video capture session for this with code like the following:

// Grab the back-facing camera
AVCaptureDevice *backFacingCamera = nil;
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices) 
{
    if ([device position] == AVCaptureDevicePositionBack) 
    {
        backFacingCamera = device;
    }
}

// Create the capture session
captureSession = [[AVCaptureSession alloc] init];

// Add the video input  
NSError *error = nil;
videoInput = [[[AVCaptureDeviceInput alloc] initWithDevice:backFacingCamera error:&error] autorelease];
if ([captureSession canAddInput:videoInput]) 
{
    [captureSession addInput:videoInput];
}

// Add the video frame output   
videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setAlwaysDiscardsLateVideoFrames:YES];
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];

[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

if ([captureSession canAddOutput:videoOutput])
{
    [captureSession addOutput:videoOutput];
}
else
{
    NSLog(@"Couldn't add video output");
}

// Start capturing
[captureSession setSessionPreset:AVCaptureSessionPreset640x480];
if (![captureSession isRunning])
{
    [captureSession startRunning];
}
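
For this to compile, the object that owns these instance variables (and is passed as the delegate via setSampleBufferDelegate:queue:) must declare that it conforms to the delegate protocol. A minimal interface might look like the sketch below; the class name CameraMonitor is a placeholder of mine, not something from the original code:

#import <AVFoundation/AVFoundation.h>

// Hypothetical owning class; what matters is the protocol conformance
// and the ivars referenced by the setup code above.
@interface CameraMonitor : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
{
    AVCaptureSession *captureSession;
    AVCaptureDeviceInput *videoInput;
    AVCaptureVideoDataOutput *videoOutput;
}
@end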

Video frames will then be captured and delivered to this delegate method:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    size_t bufferHeight = CVPixelBufferGetHeight(cameraFrame);
    size_t bufferWidth = CVPixelBufferGetWidth(cameraFrame);

        // Process pixel buffer bytes here

    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}

The camera frame arrives as BGRA data, whose bytes you can access via CVPixelBufferGetBaseAddress(cameraFrame). You can then iterate over those bytes to extract the values you need.
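
As a sketch of that per-pixel processing (assuming the kCVPixelFormatType_32BGRA format configured above, and dropped in where the "Process pixel buffer bytes here" comment sits), here is one way to compute the mean luminance of a frame; the weights are the standard Rec. 601 luma coefficients:

// Sketch only: assumes 32BGRA frames and a locked base address, as above.
unsigned char *baseAddress = (unsigned char *)CVPixelBufferGetBaseAddress(cameraFrame);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
double luminanceSum = 0.0;

for (size_t row = 0; row < bufferHeight; row++)
{
    // Advance by bytesPerRow, since rows may be padded beyond width * 4.
    unsigned char *pixel = baseAddress + row * bytesPerRow;
    for (size_t column = 0; column < bufferWidth; column++)
    {
        // BGRA byte order: pixel[0] is blue, pixel[1] is green, pixel[2] is red.
        luminanceSum += 0.114 * pixel[0] + 0.587 * pixel[1] + 0.299 * pixel[2];
        pixel += 4;
    }
}

double averageLuminance = luminanceSum / (double)(bufferWidth * bufferHeight);
NSLog(@"Average luminance: %f", averageLuminance);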

Note, however, that operations performed over an entire frame on the CPU tend to be slow. The Accelerate framework can help with an average-luminance calculation like the one you want; vDSP_meanv(), for example, will average luminance values once you have them in a float array. You may also be better served by grabbing YUV frame data from the camera instead of the BGRA values used here, since the Y plane gives you luminance directly.
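
As a sketch of that Accelerate route, under the assumption that the output is reconfigured for kCVPixelFormatType_420YpCbCr8BiPlanarFullRange (so plane 0 of the buffer is already 8-bit luminance), the delegate method could average a frame with two vDSP calls. A real implementation would reuse the float scratch buffer across frames rather than allocating it here:

#import <Accelerate/Accelerate.h>

// Sketch only: assumes a biplanar YUV pixel format, where plane 0 is the
// Y (luma) plane, and runs between the lock/unlock calls shown above.
uint8_t *lumaBase = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0);
size_t lumaWidth = CVPixelBufferGetWidthOfPlane(cameraFrame, 0);
size_t lumaHeight = CVPixelBufferGetHeightOfPlane(cameraFrame, 0);
size_t lumaBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(cameraFrame, 0);

vDSP_Length pixelCount = lumaWidth * lumaHeight;
float *lumaFloats = (float *)malloc(pixelCount * sizeof(float));

// Convert the 8-bit luma bytes to floats row by row, skipping any row padding.
for (size_t row = 0; row < lumaHeight; row++)
{
    vDSP_vfltu8(lumaBase + row * lumaBytesPerRow, 1, lumaFloats + row * lumaWidth, 1, lumaWidth);
}

float meanLuminance = 0.0f;
vDSP_meanv(lumaFloats, 1, &meanLuminance, pixelCount);
free(lumaFloats);

NSLog(@"Mean luminance: %f", meanLuminance);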

You can also process each frame on the GPU using OpenGL ES, which can be much faster for whole-image operations, but that takes significantly more code to set up than the CPU-side approach above.
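
Finally, if you do occasionally need a UIImage (as the original question asks), you can build one from a BGRA pixel buffer with Core Graphics. A minimal sketch, to be run while the base address is still locked:

#import <UIKit/UIKit.h>

// Sketch only: assumes kCVPixelFormatType_32BGRA and a locked base address.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(cameraFrame),
                                             CVPixelBufferGetWidth(cameraFrame),
                                             CVPixelBufferGetHeight(cameraFrame),
                                             8,
                                             CVPixelBufferGetBytesPerRow(cameraFrame),
                                             colorSpace,
                                             kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *frameImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);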

