If you are running iOS 5.0, you can use vImage within the Accelerate framework to perform a NEON-optimized swap of the color components using the following code (drawn from Apple's WebCore source code):
vImage_Buffer src;
src.height = height;
src.width = width;
src.rowBytes = srcBytesPerRow;
src.data = srcRows;

vImage_Buffer dest;
dest.height = height;
dest.width = width;
dest.rowBytes = destBytesPerRow;
dest.data = destRows;

// Swap pixel channels from BGRA to RGBA.
const uint8_t map[4] = { 2, 1, 0, 3 };
vImagePermuteChannels_ARGB8888(&src, &dest, map, kvImageNoFlags);
where width, height, and srcBytesPerRow are obtained from your pixel buffer via CVPixelBufferGetWidth(), CVPixelBufferGetHeight(), and CVPixelBufferGetBytesPerRow(). srcRows is a pointer to the base address of the bytes in the pixel buffer, and destRows is memory you have allocated to hold the RGBA output image.
This should be much faster than iterating over the bytes and swapping the color components yourself.
Depending on the size of the image, an even faster solution would be to upload the frame to OpenGL ES, render a simple rectangle with the frame as its texture, and use glReadPixels() to read back the RGBA values. Even better would be to use the iOS 5.0 texture caches for upload and download; that round trip takes only 1-3 ms for a 720p frame on an iPhone 4. Of course, using OpenGL ES means a lot more supporting code to pull this off.