I noticed in Apple's code examples that they often pass a value of 0 for the bytesPerRow parameter of CGBitmapContextCreate. For example, this comes from the Reflection sample project:
CGContextRef gradientBitmapContext = CGBitmapContextCreate(NULL, pixelsWide, pixelsHigh, 8, 0, colorSpace, kCGImageAlphaNone);
It seemed strange to me, since I had always followed the approach of multiplying the image width by the number of bytes per pixel. I tried passing zero in my own code instead and, of course, it still works:
size_t bitsPerComponent = 8;
size_t bytesPerPixel = 4;
size_t bytesPerRow = reflectionWidth * bytesPerPixel; // what I used to pass
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

CGContextRef context = CGBitmapContextCreate(NULL, reflectionWidth, reflectionHeight,
                                             bitsPerComponent, 0, // was bytesPerRow
                                             colorSpace, kCGImageAlphaPremultipliedLast);
According to the docs, bytesPerRow should be "The number of bytes of memory for each line of the bitmap."
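For comparison, here is a minimal sketch along these lines (the dimensions and the RGBA format are just placeholder assumptions, not from the sample project); CGBitmapContextGetBytesPerRow reports the stride each context actually ends up using, so you can see whether Core Graphics pads the rows when you pass 0:

#include <CoreGraphics/CoreGraphics.h>
#include <stdio.h>

static void compareBytesPerRow(size_t width, size_t height)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Variant 1: let Core Graphics choose the row stride by passing 0.
    CGContextRef autoCtx = CGBitmapContextCreate(NULL, width, height, 8, 0,
                                                 colorSpace, kCGImageAlphaPremultipliedLast);

    // Variant 2: compute the stride by hand (4 bytes per pixel for 32-bit RGBA).
    size_t bytesPerRow = width * 4;
    CGContextRef manualCtx = CGBitmapContextCreate(NULL, width, height, 8, bytesPerRow,
                                                   colorSpace, kCGImageAlphaPremultipliedLast);

    // Inspect what each context actually uses; the automatic one may be
    // rounded up for alignment.
    printf("auto:   %zu bytes per row\n", CGBitmapContextGetBytesPerRow(autoCtx));
    printf("manual: %zu bytes per row\n", CGBitmapContextGetBytesPerRow(manualCtx));

    CGContextRelease(autoCtx);
    CGContextRelease(manualCtx);
    CGColorSpaceRelease(colorSpace);
}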
So what is the deal? When can I pass zero, and when should I calculate the exact value? Are there any performance implications either way?