Getting a black and white UIImage (not grayscale)

I need to get a pure black and white UIImage from another UIImage (not grayscale). Can anybody help me?

Thank you for reading.

Edit:

Here is the solution I settled on. Thanks to everyone. Although I know this is not the best way to do it, it works great.

    // Gets a pure black and white image from an original image.
    - (UIImage *)pureBlackAndWhiteImage:(UIImage *)image {
        unsigned char *dataBitmap = [self bitmapFromImage:image];

        for (int i = 0; i < image.size.width * image.size.height * 4; i += 4) {
            // Byte 0 of each ARGB pixel is alpha; bytes 1-3 are R, G, B.
            if ((dataBitmap[i + 1] + dataBitmap[i + 2] + dataBitmap[i + 3]) < (255 * 3 / 2)) {
                dataBitmap[i + 1] = 0;
                dataBitmap[i + 2] = 0;
                dataBitmap[i + 3] = 0;
            } else {
                dataBitmap[i + 1] = 255;
                dataBitmap[i + 2] = 255;
                dataBitmap[i + 3] = 255;
            }
        }

        image = [self imageWithBits:dataBitmap withSize:image.size];
        return image;
    }

Edit 1:

In response to the comments, here are bitmapFromImage and imageWithBits.

    // Retrieves the bits from the context once the image has been drawn.
    - (unsigned char *)bitmapFromImage:(UIImage *)image {
        // Creates a bitmap context from the given image.
        // CreateARGBBitmapContext is not shown here; it is presumably the
        // helper from Apple's Technical Q&A QA1509, which mallocs the pixel
        // buffer itself, so the buffer remains valid after the context is released.
        CGContextRef context = CreateARGBBitmapContext(image.size);
        if (context == NULL) {
            return NULL;
        }

        CGRect rect = CGRectMake(0.0f, 0.0f, image.size.width, image.size.height);
        CGContextDrawImage(context, rect, image.CGImage);
        unsigned char *data = CGBitmapContextGetData(context);
        CGContextRelease(context);

        return data;
    }

    // Fills an image with bits.
    - (UIImage *)imageWithBits:(unsigned char *)bits withSize:(CGSize)size {
        // Creates a color space.
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        if (colorSpace == NULL) {
            fprintf(stderr, "Error allocating color space\n");
            free(bits);
            return nil;
        }

        CGContextRef context = CGBitmapContextCreate(bits, size.width, size.height, 8,
                                                     size.width * 4, colorSpace,
                                                     kCGImageAlphaPremultipliedFirst);
        if (context == NULL) {
            fprintf(stderr, "Error. Context not created\n");
            free(bits);
            CGColorSpaceRelease(colorSpace);
            return nil;
        }
        CGColorSpaceRelease(colorSpace);

        CGImageRef ref = CGBitmapContextCreateImage(context);
        free(CGBitmapContextGetData(context));
        CGContextRelease(context);

        UIImage *img = [UIImage imageWithCGImage:ref];
        CFRelease(ref);

        return img;
    }
6 answers

If you are looking for a thresholded image (everything brighter than a certain value becomes white, everything darker becomes black, and you choose the cutoff value), then a library like GPUImage will work for you; its GPUImageLuminanceThresholdFilter does exactly this.


This code may help:

    // Sets all three color channels of each pixel to the maximum of the
    // three, which produces a maximum-channel grayscale rather than a
    // strictly two-tone image.
    for (int i = 0; i < image.size.width * image.size.height * 4; i += 4) {
        if (dataBitmap[i + 0] >= dataBitmap[i + 1] && dataBitmap[i + 0] >= dataBitmap[i + 2]) {
            dataBitmap[i + 1] = dataBitmap[i + 0];
            dataBitmap[i + 2] = dataBitmap[i + 0];
        } else if (dataBitmap[i + 1] >= dataBitmap[i + 0] && dataBitmap[i + 1] >= dataBitmap[i + 2]) {
            dataBitmap[i + 0] = dataBitmap[i + 1];
            dataBitmap[i + 2] = dataBitmap[i + 1];
        } else {
            dataBitmap[i + 0] = dataBitmap[i + 2];
            dataBitmap[i + 1] = dataBitmap[i + 2];
        }
    }

Although this may be overkill for your purposes, I do exactly this for live video from the iPhone camera in my sample application here. That application takes a color and a sensitivity, and turns every pixel within that threshold white and the rest transparent. I use OpenGL ES 2.0 programmable shaders to do this with real-time responsiveness. All of this is described in this post here.

Again, this is probably more than you need. For a simple UIImage that you want to convert to black and white, you can probably read in the raw pixels, loop over them, and apply the same kind of threshold I did to produce the final image. It will not be as fast as the shader approach, but the code will be much simpler.
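To make that loop-over-raw-pixels idea concrete, here is a minimal C sketch (not the author's sample code; the RGBA8888 buffer layout, the Rec. 601 luma weights, and the midpoint threshold are my own assumptions):

```c
#include <stdint.h>
#include <stddef.h>

/* Binarizes an RGBA8888 pixel buffer in place: pixels whose luma
 * (integer approximation of 0.299 R + 0.587 G + 0.114 B) is at or
 * above the threshold become white, the rest black. Alpha is untouched. */
void threshold_rgba(uint8_t *pixels, size_t pixelCount, uint8_t threshold) {
    for (size_t i = 0; i < pixelCount * 4; i += 4) {
        uint32_t luma = (299 * pixels[i] + 587 * pixels[i + 1] + 114 * pixels[i + 2]) / 1000;
        uint8_t value = (luma >= threshold) ? 255 : 0;
        pixels[i]     = value;
        pixels[i + 1] = value;
        pixels[i + 2] = value;
    }
}
```

Weighting by luma instead of summing the raw channels avoids, for example, a saturated blue reading as "bright" even though it looks dark to the eye.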


The code worked for me; I just needed to tweak it a bit. Here are the changes I made to get it working correctly, by also assigning the zeroth index of the dataBitmap[] array:

    // The zeroth element of each pixel is now included in the sum,
    // so the threshold multiplies by four instead of three.
    for (int i = 0; i < image.size.width * image.size.height * 4; i += 4) {
        if ((dataBitmap[i + 0] + dataBitmap[i + 1] + dataBitmap[i + 2] + dataBitmap[i + 3]) < (255 * 4 / 2)) {
            dataBitmap[i + 0] = 0;
            dataBitmap[i + 1] = 0;
            dataBitmap[i + 2] = 0;
            dataBitmap[i + 3] = 0;
        } else {
            dataBitmap[i + 0] = 255;
            dataBitmap[i + 1] = 255;
            dataBitmap[i + 2] = 255;
            dataBitmap[i + 3] = 255;
        }
    }

Hope this works.


Here's a Swift 3 version:

    class func pureBlackAndWhiteImage(_ inputImage: UIImage) -> UIImage? {
        guard let inputCGImage = inputImage.cgImage,
            let context = getImageContext(for: inputCGImage),
            let data = context.data else {
                return nil
        }

        let white = RGBA32(red: 255, green: 255, blue: 255, alpha: 255)
        let black = RGBA32(red: 0, green: 0, blue: 0, alpha: 255)

        let width = inputCGImage.width
        let height = inputCGImage.height
        let pixelBuffer = data.bindMemory(to: RGBA32.self, capacity: width * height)

        for row in 0 ..< height {
            for column in 0 ..< width {
                let offset = row * width + column
                if pixelBuffer[offset].red > 0 || pixelBuffer[offset].green > 0 || pixelBuffer[offset].blue > 0 {
                    pixelBuffer[offset] = black
                } else {
                    pixelBuffer[offset] = white
                }
            }
        }

        guard let outputCGImage = context.makeImage() else {
            return nil
        }
        return UIImage(cgImage: outputCGImage, scale: inputImage.scale, orientation: inputImage.imageOrientation)
    }

    class func getImageContext(for inputCGImage: CGImage) -> CGContext? {
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let width = inputCGImage.width
        let height = inputCGImage.height
        let bytesPerPixel = 4
        let bitsPerComponent = 8
        let bytesPerRow = bytesPerPixel * width
        let bitmapInfo = RGBA32.bitmapInfo

        guard let context = CGContext(data: nil, width: width, height: height,
                                      bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow,
                                      space: colorSpace, bitmapInfo: bitmapInfo) else {
            print("unable to create context")
            return nil
        }

        context.setBlendMode(.copy)
        context.draw(inputCGImage, in: CGRect(x: 0, y: 0, width: CGFloat(width), height: CGFloat(height)))

        return context
    }

    struct RGBA32: Equatable {
        var color: UInt32

        var red: UInt8   { return UInt8((color >> 24) & 255) }
        var green: UInt8 { return UInt8((color >> 16) & 255) }
        var blue: UInt8  { return UInt8((color >> 8) & 255) }
        var alpha: UInt8 { return UInt8((color >> 0) & 255) }

        init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
            color = (UInt32(red) << 24) | (UInt32(green) << 16) | (UInt32(blue) << 8) | (UInt32(alpha) << 0)
        }

        static let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Little.rawValue

        static func ==(lhs: RGBA32, rhs: RGBA32) -> Bool {
            return lhs.color == rhs.color
        }
    }

With Swift 3, I was able to accomplish this effect using CIFilters, first applying CIPhotoEffectNoir (to make it grayscale) and then applying the CIColorControls filter with the input parameter kCIInputContrastKey set to a high value (e.g. 50). Setting kCIInputBrightnessKey also lets you adjust the intensity of the black-and-white contrast: negative for a darker image, positive for a brighter one. For instance:

    extension UIImage {
        func toBlackAndWhite() -> UIImage? {
            guard let ciImage = CIImage(image: self) else {
                return nil
            }
            guard let grayImage = CIFilter(name: "CIPhotoEffectNoir", withInputParameters: [kCIInputImageKey: ciImage])?.outputImage else {
                return nil
            }
            let bAndWParams: [String: Any] = [kCIInputImageKey: grayImage,
                                              kCIInputContrastKey: 50.0,
                                              kCIInputBrightnessKey: 10.0]
            guard let bAndWImage = CIFilter(name: "CIColorControls", withInputParameters: bAndWParams)?.outputImage else {
                return nil
            }
            guard let cgImage = CIContext(options: nil).createCGImage(bAndWImage, from: bAndWImage.extent) else {
                return nil
            }
            return UIImage(cgImage: cgImage)
        }
    }
