The discussion you refer to in the C4 documentation relates to a process that uses a filter to calculate a matrix multiplication. This is just a description of what the filter does with the colors in the image when it is applied. What actually happens under the hood is that the colorMatrix: method installs a CIFilter called CIColorMatrix and applies it to the C4Image. Unfortunately, the source code for the CIColorMatrix filter is not provided by Apple.
So, the long-awaited answer to your question: you cannot access the color components of the pixels in a C4Image through the CIColorMatrix filter. However, the C4Image class has a CGImage property (for example, yourC4Image.CGImage) which you can use to get at the pixel data. A good, simple technique can be found HERE
EDIT: I was obsessed with this question yesterday and added these two methods to the C4Image class.
A method to load the pixel data:
-(void)loadPixelData {
    NSUInteger width = CGImageGetWidth(self.CGImage);
    NSUInteger height = CGImageGetHeight(self.CGImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // bytesPerPixel, bytesPerRow, and rawData are instance variables
    bytesPerPixel = 4;
    bytesPerRow = bytesPerPixel * width;
    rawData = malloc(height * bytesPerRow);

    // Draw the image into a bitmap context so its pixels land in rawData
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), self.CGImage);
    CGContextRelease(context);
}
And the pixel color access method:
-(UIColor *)colorAt:(CGPoint)point {
    // Lazily load the pixel buffer on first access
    if(rawData == NULL) {
        [self loadPixelData];
    }

    // Each pixel is 4 bytes: R, G, B, A
    NSUInteger byteIndex = bytesPerPixel * point.x + bytesPerRow * point.y;
    CGFloat r = rawData[byteIndex];
    CGFloat g = rawData[byteIndex + 1];
    CGFloat b = rawData[byteIndex + 2];
    CGFloat a = rawData[byteIndex + 3];

    return [UIColor colorWithRed:RGBToFloat(r)
                           green:RGBToFloat(g)
                            blue:RGBToFloat(b)
                           alpha:RGBToFloat(a)];
}
This is how I would apply the technique from the other post I mentioned.
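For completeness, a minimal usage sketch, assuming the two methods above have been added to C4Image along with the rawData, bytesPerPixel, and bytesPerRow instance variables; the image name and sample point here are arbitrary placeholders, and imageNamed: stands in for however you normally construct your C4Image:

```objc
// Hypothetical example: load an image and read the color of one pixel.
// "C4Sky" is a placeholder asset name, and (100, 100) an arbitrary point.
C4Image *image = [C4Image imageNamed:@"C4Sky"];
UIColor *color = [image colorAt:CGPointMake(100, 100)];
```

Note that colorAt: loads the pixel buffer lazily on the first call, so the first lookup pays the cost of drawing the whole image into the bitmap context and subsequent lookups are just array reads.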