I need to create 12 tinted color images (red, orange, yellow, etc.) from a single grayscale image.
The original image is actually a PNG with an RGBA pixel format:

I use a library I found ( https://github.com/PaulSolt/UIImage-Conversion/ ) to split the UIImage into an RGBA byte array, which I then process pixel by pixel, and then use the same library to create a new UIImage from the modified bytes.
This is my code:
- (UIImage*) cloneWithTint3: (CGColorRef) cgTint
{
    enum { R, G, B, A };

    // - - -
    assert( CGColorGetNumberOfComponents( cgTint ) == 4 );
    const float* tint = CGColorGetComponents( cgTint );

    // - - -
    int w = self.size.width;
    int h = self.size.height;
    int pixelCount = w * h;

    uint8_t* data = [UIImage convertUIImageToBitmapRGBA8: self];

    for( int i = 0; i < pixelCount; i++ )
    {
        int offset = i << 2;  // same as 4 * i but faster

        float in_r = data[ offset + R ];
        float in_g = data[ offset + G ];
        float in_b = data[ offset + B ];
        // float in_a = data[ offset + A ];

        if( i == 0 )
            printf( "ALPHA %d ", data[ offset + A ] );  // corner pixel has alpha 0

        float greyscale = 0.30 * in_r + 0.59 * in_g + 0.11 * in_b;

        data[ offset + R ] = (uint8_t) ( greyscale * tint[ R ] );
        data[ offset + G ] = (uint8_t) ( greyscale * tint[ G ] );
        data[ offset + B ] = (uint8_t) ( greyscale * tint[ B ] );
    }

    UIImage* imageNew = [UIImage convertBitmapRGBA8ToUIImage: data withWidth: w withHeight: h];
    free( data );

    // test corner pixel
    {
        uint8_t* test = [UIImage convertUIImageToBitmapRGBA8: self];
        for( int i = 0; i < 1; i++ )
        {
            int offset = i << 2;
            // float in_r = test[ offset + R ];
            // float in_g = test[ offset + G ];
            // float in_b = test[ offset + B ];
            // float in_a = data[ offset + A ];
            printf( "ALPHA %d ", test[ offset + A ] );  // corner pixel still has alpha 0
        }
        free( test );
    }

    return imageNew;
}
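For reference, this is roughly how I produce the 12 variants from the grayscale source (a sketch; the hue stepping and the greyImage variable name are illustrative, not my exact code):

// greyImage is the RGBA grayscale PNG loaded as a UIImage (illustrative name)
NSMutableArray* tintedImages = [NSMutableArray arrayWithCapacity: 12];
for( int i = 0; i < 12; i++ )
{
    // step the hue around the color wheel: red, orange, yellow, ...
    UIColor* tint = [UIColor colorWithHue: i / 12.0f
                               saturation: 1.0f
                               brightness: 1.0f
                                    alpha: 1.0f];
    [tintedImages addObject: [greyImage cloneWithTint3: tint.CGColor]];
}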
The problem is that I do not get the alpha channel back: the resulting image is rendered fully opaque.
If I simply return the original image in the first line of the method, it displays correctly, so the problem is not in the original image.
If I check the alpha component of the first (corner) pixel, it is 0 (i.e. transparent) at every point in the process, as it should be. I even added an extra test, as you can see: once I have the final UIImage, I split it into an RGBA bitmap again and check the same element. It is still 0.
So it looks like the error is in the convertBitmapRGBA8ToUIImage method: there seems to be some setting on the resulting UIImage that makes it treat the image as opaque everywhere.
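One thing I could check to confirm that suspicion is the alpha info of the CGImage that comes back from the conversion; if it reports a "skip" or "none" value, the alpha bytes are present in memory but ignored when drawing:

// diagnostic: how does the re-created image interpret its alpha channel?
CGImageAlphaInfo info = CGImageGetAlphaInfo( imageNew.CGImage );
NSLog( @"alpha info: %u", (unsigned) info );
// kCGImageAlphaNoneSkipLast (or similar) would explain the opaque rendering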
But looking at the source code of this method ( https://github.com/PaulSolt/UIImage-Conversion/blob/master/ImageHelper.m - the whole library is just this file plus its header), I cannot see where the problem might be.
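For comparison, this is the kind of context setup I would expect to be needed to carry the alpha through when going from the raw RGBA8 buffer back to a UIImage (a generic sketch using the data, w and h from my method above, not the library's actual code):

// generic sketch: rebuild a UIImage from the raw RGBA8 buffer, keeping alpha
// (CGBitmapContextCreate expects premultiplied alpha, so strictly the RGB bytes
//  would have to be scaled by the alpha value first)
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate( data, w, h, 8, 4 * w, colorSpace,
                                          kCGImageAlphaPremultipliedLast );
CGImageRef cgImage = CGBitmapContextCreateImage( ctx );
UIImage* rebuilt = [UIImage imageWithCGImage: cgImage];
CGImageRelease( cgImage );
CGContextRelease( ctx );
CGColorSpaceRelease( colorSpace );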
This is the result of rendering:

As you can see, the C was rendered before the G and the F, so it is cropped on both sides by their opaque rectangles.