Filter fluctuating lighting with OpenCV

I need to make fairly sensitive color (brightness) measurements in a webcam frame using OpenCV. The problem is that the ambient light fluctuates, which makes it hard to get accurate results. I am looking for a way to continuously adjust sequential frames of a video to smooth out global lighting differences. The light changes I'm trying to filter out are global, i.e. they affect the whole image at once. I tried calculating the frame-to-frame difference and subtracting it, but with little luck. Does anyone have any tips on how to approach this issue?

EDIT: The two images below are from the same video, with the color changes slightly exaggerated. If you flip between them, you will see small changes in lighting, probably due to clouds moving outside. The problem is that these changes drown out any other color changes that I want to detect.

So I would like to filter out these specific changes. Since I only need part of each frame, I figure I can estimate the lighting changes from the rest of the frame, outside my region of interest, and compensate for them as they occur.

I tried to capture the dominant frequencies of the changes with a DFT so I could ignore just the lighting changes, but I'm not familiar enough with that function yet. I have only been using OpenCV for a week, so I'm still learning.

(two example frames from the video)

+6
3 answers

Short answer: apply a temporal low-pass filter to the overall illumination.

Think of the illumination as a temporal sequence of values representing something like the luminous flux incident on the photographed scene. The ideal situation is that this function is constant; the second best is that it changes as slowly as possible. A low-pass filter turns a function that can change quickly into one that changes more slowly. So the main stages are: (1) compute the overall illumination function; (2) compute a new illumination function by low-pass filtering it; (3) normalize the original sequence of images to the new illumination values.

(1) The easiest way to compute the illumination function is to sum the brightness values of every pixel in each image. In simple cases this may even work; you can guess from my tone that there are a number of caveats.
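A minimal sketch of this step in Python/OpenCV (the file name and variable names are my own placeholders):

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("video.avi")      # hypothetical input file
    gray_frames = []
    illumination = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray_frames.append(gray)
        illumination.append(gray.sum())      # total brightness of this frame
    cap.release()
    illumination = np.array(illumination, dtype=np.float64)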

An important issue is that you would rather sum illumination values in some physical measure of illumination than in a color space (such as HSV). Getting from a color space back to the actual light in the room requires data the image does not contain, such as the spectral reflectance of each surface in it, so that is unlikely to work. As a proxy, you can use only a part of the image that has a consistent reflectance. In your sample images, the table surface at the top of the frame would do. Pick a geometric region and compute the total illumination from it.
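For example, restricting the sum to a hand-picked region of consistent reflectance (the coordinates here are made up; choose your own table-surface region):

    x, y, w, h = 0, 0, 640, 80               # hypothetical table-surface region
    illumination = np.array(
        [g[y:y + h, x:x + w].sum() for g in gray_frames], dtype=np.float64
    )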

Related to this: if there are areas of the image where the camera is saturated, you have lost a lot of information there, and the total illumination value will not correspond well to the physical lighting. Simply cut out any such areas (but do it consistently across all frames).
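One way to do that consistently, sketched here, is to build a single mask of pixels that are never near saturation and reuse it for every frame; the threshold of 250 is an arbitrary choice:

    never_saturated = np.ones(gray_frames[0].shape, dtype=bool)
    for g in gray_frames:
        never_saturated &= (g < 250)         # drop pixels that ever saturate
    illumination = np.array(
        [g[never_saturated].sum() for g in gray_frames], dtype=np.float64
    )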

(2) Low-pass filter the illumination function. Such filters are a fundamental part of every signal-processing package. I don't know OpenCV well enough to say whether it has a suitable function, so you may need another library. There are many different low-pass filters, but they should all give you similar results here.
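For instance, with SciPy (one library among many that would work); the Butterworth order and the cutoff of 0.05 (as a fraction of the Nyquist frequency) are guesses you would tune to how fast the clouds change the light:

    from scipy.signal import butter, filtfilt

    b, a = butter(N=2, Wn=0.05)              # 2nd-order Butterworth low-pass
    smoothed = filtfilt(b, a, illumination)  # zero-phase filtered series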

(3) Once you have the low-frequency time series, use it as a normalization function for the overall illumination. Compute the mean of the low-frequency series and divide the series by it, giving a time series with mean 1. Then adjust each image by scaling its illumination by the corresponding normalization factor. All the caveats about this working perfectly only in a physical illumination space, not a color space, apply.
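A sketch of one reasonable reading of this step, assuming you also kept the original color frames in a list `frames`: scale each frame so that its measured illumination follows the smoothed series rather than the raw one.

    for t, frame in enumerate(frames):       # frames: the original BGR frames
        factor = smoothed[t] / illumination[t]   # > 1 when the frame came out dark
        corrected = cv2.convertScaleAbs(frame, alpha=factor)  # clips to [0, 255]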

+5

If the change in signal is global, you could compute the mean m(i, t) of each row i in each image at time t of your video. If there is no light fluctuation, m(i, t) / m(i, t+1) should equal 1 for all times. If there is a global change, then m(i, t) / m(i, t+1) should be constant for every i; in practice it is better to use the mean of m(i, t) / m(i, t+1) over all i. This mean value can then be used to correct your image at time t.

You could also work with a ratio of the form m(i, 0) / m(i, t), using time 0 as the reference. And instead of a row, you could use a column, a rectangle, or a disk... (see the sketch below).
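A sketch of the row-mean version (NumPy assumed; grayscale frames, and the function names are mine):

    import numpy as np

    def row_means(gray):
        return gray.mean(axis=1).astype(np.float64)      # m(i, t) for every row i

    def correction_factor(gray_t, gray_t1):
        ratios = row_means(gray_t) / row_means(gray_t1)  # m(i, t) / m(i, t+1)
        return ratios.mean()      # roughly constant when the change is global

    # e.g. rescale frame t+1 back to the illumination of frame t:
    # corrected = cv2.convertScaleAbs(frame_t1, alpha=correction_factor(g_t, g_t1))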

+2

I think you can apply homomorphic filtering to each frame to estimate its reflectance component, and then track changes in reflectance at your points of interest.

According to the illumination-reflectance model of image formation, the pixel value at each position is the product of illumination and reflectance: f(x,y) = i(x,y) · r(x,y). The illumination i tends to change slowly across the image (or, in your case, across frames), while the reflectance r tends to change quickly.

Homomorphic filtering lets you separate out the illumination component. Taking the logarithm of the equation above makes the illumination and reflectance components additive: ln(f(x,y)) = ln(i(x,y)) + ln(r(x,y)). You then apply a high-pass filter to keep the reflectance component (so the slowly varying illumination component is filtered out). Look here and here for a detailed explanation of the process with examples.
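A minimal homomorphic-filtering sketch with NumPy's FFT; the Gaussian high-pass and its sigma are my choices, not something the process prescribes:

    import numpy as np

    def reflectance_estimate(gray, sigma=10.0):
        log_f = np.log1p(gray.astype(np.float64))   # ln f = ln i + ln r
        F = np.fft.fftshift(np.fft.fft2(log_f))
        rows, cols = gray.shape
        u = np.arange(rows) - rows / 2
        v = np.arange(cols) - cols / 2
        d2 = u[:, None] ** 2 + v[None, :] ** 2      # squared distance from center
        highpass = 1.0 - np.exp(-d2 / (2.0 * sigma ** 2))  # Gaussian high-pass
        filtered = np.fft.ifft2(np.fft.ifftshift(F * highpass)).real
        return np.expm1(filtered)                   # back from the log domain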

After applying the filter, you get estimated reflectance frames r^(x,y,t).

+1
