Why is bytesPerRow 0 in this CGBitmapContextCreate call?

I noticed that Apple's code examples often pass a value of 0 for the bytesPerRow parameter of CGBitmapContextCreate. For example, this comes from the Reflection sample project.

CGContextRef gradientBitmapContext = CGBitmapContextCreate(NULL, pixelsWide, pixelsHigh, 8, 0, colorSpace, kCGImageAlphaNone); 

That seemed strange to me, since I have always calculated it by multiplying the image width by the number of bytes per pixel. I tried replacing the zero with my own calculated value, and of course it still works.

    size_t bitsPerComponent = 8;
    size_t bytesPerPixel = 4;
    size_t bytesPerRow = reflectionWidth * bytesPerPixel;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, reflectionWidth, reflectionHeight,
                                                 bitsPerComponent,
                                                 0, // bytesPerRow ??
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);

According to the docs, bytesPerRow should be "The number of bytes of memory for each line of the bitmap."

So what is the deal? When can I pass zero, and when should I calculate the exact value? Are there any performance implications either way?

1 answer

My understanding is that if you pass zero, Core Graphics calculates the bytes per row for you from the width and bitsPerComponent arguments. You might want extra padding at the end of each row of bytes (if your device, or some other constraint, requires it); in that case you can pass a value larger than width * (bytes per pixel). I would guess this is almost never needed in modern iOS/macOS development, except as an odd optimization in a pinch.
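For example, here is a minimal sketch (the helper name compareRowBytes and the 8-bit RGBA assumptions are mine, not from the sample project) that passes 0 and then asks Core Graphics what row stride it actually picked, via CGBitmapContextGetBytesPerRow:

    #include <CoreGraphics/CoreGraphics.h>
    #include <stdio.h>

    // Create a context with bytesPerRow = 0 and print the stride Core Graphics chose.
    static void compareRowBytes(size_t width, size_t height)
    {
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

        CGContextRef ctx = CGBitmapContextCreate(NULL, width, height,
                                                 8,          // bitsPerComponent
                                                 0,          // bytesPerRow: let CG decide
                                                 colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);

        size_t chosen  = CGBitmapContextGetBytesPerRow(ctx);
        size_t minimum = width * 4; // 4 bytes per pixel for 8-bit RGBA

        printf("minimum bytesPerRow: %zu, Core Graphics chose: %zu\n", minimum, chosen);

        CGContextRelease(ctx);
        CGColorSpaceRelease(colorSpace);
    }

If the two numbers differ, Core Graphics has padded each row, presumably for alignment. Also note that the documentation describes the automatic calculation for the case where the data parameter is NULL (so Core Graphics owns the allocation); if you supply your own buffer, pass a bytesPerRow that matches how you allocated it.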

