Reducing color depth with OpenCV and LUT

I want to perform color reduction by scaling the color depth.

Like this example:

[image: the same photo reduced to three retro palettes]

The first image is CGA, the second is EGA, and the third is HAM. I would like to do this with cv::LUT, because I think it is the better approach. I can do it in grayscale with this code:

    Mat img = imread("test1.jpg", 0);
    Mat reduced;
    Mat lookUpTable(1, 256, CV_8U);
    uchar* p = lookUpTable.data;
    for (int i = 0; i < 256; ++i)
        p[i] = 16 * (i / 16);  // snap each intensity down to the nearest multiple of 16
    LUT(img, lookUpTable, reduced);

original: [image]

color reduced: [image]

But if I try to do the same with color, I get a strange result:

[image: the strange result]

This is the code I used:

    Mat imgColor = imread("test1.jpg");
    Mat reducedColor;
    Mat lut(1, 256, CV_8UC3);
    int n = 16;
    for (int i = 0; i < 256; i++)
    {
        uchar value = floor(i / n) * n;
        cout << (int)value << endl;
        lut.at<Vec3b>(i)[2] = (value >> 16) & 0xff;
        lut.at<Vec3b>(i)[1] = (value >> 8) & 0xff;
        lut.at<Vec3b>(i)[0] = value & 0xff;
    }
    LUT(imgColor, lut, reducedColor);
2 answers

You have probably moved on by now, but the root of the problem is that you are doing a 16-bit shift on uchar value, which is only 8 bits wide. Even an 8-bit shift here is too much, since it would erase every bit in the uchar. Then there is the fact that the cv::LUT documentation explicitly states that src should be an "input array of 8-bit elements", which is clearly not the case in your code. The end result is that only the first channel of the color image (the blue channel) is transformed by cv::LUT.
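To illustrate, here is a minimal sketch of what the original loop was presumably meant to do (an assumption about its intent, not the approach taken below): since value already fits in 8 bits, write the same quantized value into all three channels of the table instead of shifting, and cv::LUT will apply the table channel by channel. The file name and level count are carried over from the question:

    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat imgColor = cv::imread("test1.jpg");
        if (imgColor.empty()) return 1;

        // One entry per intensity; every channel gets the same quantized value.
        cv::Mat lut(1, 256, CV_8UC3);
        int n = 16;
        for (int i = 0; i < 256; i++) {
            uchar value = (uchar)((i / n) * n);  // integer division already floors
            lut.at<cv::Vec3b>(0, i) = cv::Vec3b(value, value, value);  // no shifts
        }

        cv::Mat reducedColor;
        cv::LUT(imgColor, lut, reducedColor);  // the table is applied per channel
        cv::imwrite("reduced.jpg", reducedColor);
        return 0;
    }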

The best way to get around these limitations is to split the color image into its channels, convert each channel separately, and then merge the converted channels into a new color image. See the code below:

    /*
    Calculates a table of 256 assignments with the given number of distinct
    values. Values are taken at equal intervals from the ranges [0, 128) and
    [128, 256), such that both 0 and 255 are always included in the range.
    */
    cv::Mat lookupTable(int levels) {
        int factor = 256 / levels;
        cv::Mat table(1, 256, CV_8U);
        uchar *p = table.data;

        for (int i = 0; i < 128; ++i) {
            p[i] = factor * (i / factor);
        }
        for (int i = 128; i < 256; ++i) {
            p[i] = factor * (1 + (i / factor)) - 1;
        }

        return table;
    }

    /*
    Truncates channel levels in the given image to the given number of
    equally-spaced values.

    Arguments:

    image
        Input multi-channel image. The specific color space is not important,
        as long as all channels are encoded from 0 to 255.

    levels
        The number of distinct values for the channels of the output image.
        Output values are drawn from the range [0, 255] from the extremes
        inwards, resulting in a nearly equally-spaced scale where the smallest
        and largest values are always 0 and 255.

    Returns:

    Multi-channel image with values truncated to the specified number of
    distinct levels.
    */
    cv::Mat colorReduce(const cv::Mat &image, int levels) {
        cv::Mat table = lookupTable(levels);

        std::vector<cv::Mat> c;
        cv::split(image, c);
        for (std::vector<cv::Mat>::iterator i = c.begin(), n = c.end(); i != n; ++i) {
            cv::Mat &channel = *i;
            cv::LUT(channel.clone(), table, channel);
        }

        cv::Mat reduced;
        cv::merge(c, reduced);
        return reduced;
    }
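For reference, a small usage sketch (the output file name is hypothetical), assuming both functions above are defined in the same translation unit:

    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat img = cv::imread("test1.jpg");
        if (img.empty()) return 1;

        cv::Mat reduced = colorReduce(img, 16);  // 16 levels per channel
        cv::imwrite("reduced16.jpg", reduced);
        return 0;
    }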

Both i and n are integers, so i/n is an integer. Perhaps you want it converted to double ((double)i/n) before taking the floor and multiplying by n?
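A quick sketch of the distinction, with hypothetical values (note that for non-negative i the two forms give the same result):

    #include <cmath>
    #include <iostream>

    int main() {
        int i = 200, n = 16;
        std::cout << (i / n) * n << "\n";                    // 192: i/n truncates to 12 before any floor
        std::cout << std::floor((double)i / n) * n << "\n";  // 192: divide as double, then floor explicitly
        return 0;
    }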

