I have a script that uses the Google Maps API to download a sequence of equally-sized square satellite images and generate a PDF from them. The images need to be rotated beforehand, which I am already doing with PIL.
I noticed that, due to different lighting conditions and terrain, some images are too bright and others too dark, so the resulting PDF ends up a bit ugly, with less-than-ideal reading conditions "in the field" (a backcountry mountain bike ride for which I want printed sketches of specific intersections).
(EDIT) The goal is to make all images equally pronounced and contrasty: images that are too bright should be darkened, and dark ones should be lightened. (By the way, I once used ImageMagick's autocontrast, or auto-gamma, or equalize, or autolevel, or something like that, with interesting results on medical images, but I don't know how to do any of these in PIL.)
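For reference, PIL's ImageOps module appears to offer counterparts for at least two of those operations; a minimal sketch (the filename is just a placeholder):

    from PIL import Image, ImageOps

    im = Image.open('tile.png')

    # Clip the darkest/brightest 2% of pixels, then stretch the
    # remaining range to 0-255 (similar in spirit to ImageMagick's
    # contrast-stretch / auto-level behavior)
    stretched = ImageOps.autocontrast(im, cutoff=2)

    # Flatten the histogram (PIL's counterpart of -equalize)
    equalized = ImageOps.equalize(im)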
I have already tried some image adjustments after converting to grayscale (I print on a grayscale printer), but the results were not good either. Here is my grayscale code:
    #!/usr/bin/python
    from PIL import ImageEnhance

    def myEqualize(im):
        im = im.convert('L')         # convert to grayscale
        contr = ImageEnhance.Contrast(im)
        im = contr.enhance(0.3)      # factor < 1 reduces contrast
        bright = ImageEnhance.Brightness(im)
        im = bright.enhance(2)       # factor > 1 brightens
        #im.show()
        return im
This code processes each image independently. I wonder whether it would be better to analyze all the images first and then "normalize" their visual properties (contrast, brightness, gamma, etc.) as a set.
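A sketch of that two-pass idea, assuming mean brightness scales roughly linearly with the enhancement factor (function names and the target choice are illustrative, and clipping at 255 means the match is only approximate):

    from PIL import Image, ImageEnhance, ImageStat

    def mean_brightness(im):
        # Average luminance of the image, 0-255
        return ImageStat.Stat(im.convert('L')).mean[0]

    def normalize_brightness(paths, target=None):
        # First pass: measure every image; second pass: scale each
        # one toward a common mean (the set average unless a target
        # is given)
        images = [Image.open(p) for p in paths]
        means = [mean_brightness(im) for im in images]
        if target is None:
            target = sum(means) / len(means)
        return [ImageEnhance.Brightness(im).enhance(target / max(m, 1.0))
                for im, m in zip(images, means)]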
In addition, I think it would be necessary to perform some analysis on each image (a histogram?) in order to apply a custom correction per image, rather than the same correction for all of them (although any "gain" function implicitly depends on the initial conditions).
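One common histogram-based, per-image correction is percentile contrast stretching (essentially what ImageOps.autocontrast(im, cutoff=...) does internally); a hand-rolled sketch for grayscale, with lo/hi as tunable percentile cut points:

    from PIL import Image

    def stretch_percentiles(im, lo=2, hi=98):
        # Linearly map the lo-th..hi-th luminance percentiles to 0..255
        gray = im.convert('L')
        hist = gray.histogram()          # 256 bins for mode 'L'
        total = float(sum(hist))
        cum = 0
        lo_val = hi_val = None
        for i, count in enumerate(hist):
            cum += count
            if lo_val is None and cum >= total * lo / 100:
                lo_val = i
            if cum >= total * hi / 100:
                hi_val = i
                break
        scale = 255.0 / max(hi_val - lo_val, 1)
        return gray.point(
            lambda p: int(min(max((p - lo_val) * scale, 0), 255)))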
Has anyone dealt with a similar problem, and/or does anyone know a good alternative that works on color images (not just grayscale)?
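For the color case, one possible approach (a sketch, not tested on satellite tiles) is to apply the correction to the luminance channel only, which avoids the color shifts that per-channel equalization can cause:

    from PIL import Image, ImageOps

    def autocontrast_color(im, cutoff=2):
        # Split into luminance + chroma, correct only the luminance,
        # then recombine, so hue and saturation are preserved
        ycbcr = im.convert('YCbCr')
        y, cb, cr = ycbcr.split()
        y = ImageOps.autocontrast(y, cutoff=cutoff)
        return Image.merge('YCbCr', (y, cb, cr)).convert('RGB')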
Any help would be appreciated, thanks for reading!
python image-processing python-imaging-library contrast brightness
heltonbiker