iOS: cropping CMSampleBufferRef before appending to AVAssetWriterInput

I am currently experimenting with Core Image, learning how to apply CIFilters to a camera feed. So far I have managed to take the camera feed, apply a filter, and record the result to a video with AVAssetWriter. One problem I have run into is that during the filtering step I crop the image data so that it always has square dimensions (necessary for other aspects of the project).

My process is as follows:

  • Capture the feed using an AVCaptureSession
  • Take the CMSampleBufferRef from the capture output and get its CVPixelBufferRef
  • Get the base address of the CVPixelBufferRef and create a CGBitmapContext backed by that base address (so the buffer can be rewritten in place)
  • Convert the CVPixelBufferRef to a CIImage (using one of the CIImage constructors)
  • Apply filters to the CIImage
  • Convert the CIImage to a CGImageRef
  • Draw the CGImageRef into the CGBitmapContext (as a result, the sample buffer's contents are overwritten)
  • Append the CMSampleBufferRef to the AVAssetWriterInput (a rough sketch of this pipeline follows the list)
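
A minimal sketch of those steps, inside the video data output delegate callback. This assumes a kCVPixelFormatType_32BGRA capture format; `self.filter`, `self.ciContext`, and `self.writerInput` are hypothetical properties configured elsewhere:

```objc
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    // Wrap the pixel buffer's memory in a bitmap context so we can
    // overwrite it in place later.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(
        CVPixelBufferGetBaseAddress(pixelBuffer),
        CVPixelBufferGetWidth(pixelBuffer),
        CVPixelBufferGetHeight(pixelBuffer),
        8,
        CVPixelBufferGetBytesPerRow(pixelBuffer),
        colorSpace,
        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Filter, then crop to a square anchored at the origin.
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    [self.filter setValue:image forKey:kCIInputImageKey];
    CIImage *filtered = [self.filter outputImage];
    CGFloat side = MIN(filtered.extent.size.width, filtered.extent.size.height);
    CIImage *square = [filtered imageByCroppingToRect:CGRectMake(0, 0, side, side)];

    // Render the square image back into the sample buffer's memory.
    // Note: the buffer itself is still full-sized, which is exactly the
    // problem described above.
    CGImageRef cgImage = [self.ciContext createCGImage:square fromRect:square.extent];
    CGContextDrawImage(context, CGRectMake(0, 0, side, side), cgImage);

    if (self.writerInput.isReadyForMoreMediaData) {
        [self.writerInput appendSampleBuffer:sampleBuffer];
    }

    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
```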

Without drawing the CGImageRef into the context, this is what I get:

(screenshot)

After drawing the CGImageRef into the context, this is what I get:

(screenshot)

Ideally, I would just like to tell the CMSampleBufferRef that it has new dimensions, so that the extra data outside the square gets dropped. But I'm wondering whether I actually need to create a new CMSampleBufferRef instead.
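
A rough sketch of what the "new buffer" route might look like: render the cropped CIImage into a brand-new square CVPixelBufferRef, then wrap it in a new CMSampleBufferRef. This reuses `sampleBuffer`, `side`, and `square` from the pipeline sketch above:

```objc
CVPixelBufferRef squareBuffer = NULL;
// NULL attributes keep the sketch short; a real implementation would likely
// use a pixel buffer pool and pass compatibility attributes here.
CVPixelBufferCreate(kCFAllocatorDefault, (size_t)side, (size_t)side,
                    kCVPixelFormatType_32BGRA, NULL, &squareBuffer);

// Render the cropped image into the new, smaller buffer.
[self.ciContext render:square toCVPixelBuffer:squareBuffer];

// Build a format description matching the new buffer and copy the
// original sample's timing.
CMVideoFormatDescriptionRef formatDescription = NULL;
CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault,
                                             squareBuffer, &formatDescription);
CMSampleTimingInfo timing;
CMSampleBufferGetSampleTimingInfo(sampleBuffer, 0, &timing);

CMSampleBufferRef squareSampleBuffer = NULL;
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, squareBuffer,
                                   true,        // data is ready
                                   NULL, NULL,  // no make-data-ready callback
                                   formatDescription, &timing,
                                   &squareSampleBuffer);

if (self.writerInput.isReadyForMoreMediaData) {
    [self.writerInput appendSampleBuffer:squareSampleBuffer];
}

CFRelease(squareSampleBuffer);
CFRelease(formatDescription);
CVPixelBufferRelease(squareBuffer);
```

(I'm aware that AVAssetWriterInputPixelBufferAdaptor can append a CVPixelBufferRef directly via appendPixelBuffer:withPresentationTime:, which would avoid creating a CMSampleBufferRef at all, so perhaps that is the cleaner route.)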

Any help would be greatly appreciated!
