Local license plate enhancement in a video sequence

My goal is to produce an improved image with a more readable license plate number from a sequence of images of moving cars in which the plates are indistinguishable, such as the sequence below.

[image: sample frames from the sequence, plate unreadable]

As you can see, the plate number is, for the most part, indistinguishable. I am exploring approaches to improve it, such as multi-frame super-resolution (as described in this article: http://users.soe.ucsc.edu/~milanfar/publications/journal/SRfinal.pdf ). I have some experience with OpenCV and am looking for guidance on which direction to take, and on whether super-resolution is indeed a viable option for this kind of problem.

1 answer

Note first that a shift between images of more than one pixel does not prevent sub-pixel accuracy; the image can shift, for example, 3.3 pixels to the right.

First you need a sub-pixel-accurate estimate of the offset between frames, something along the lines of:

    // Refine the detected corner positions to sub-pixel accuracy in both frames
    cornerSubPix( imgA, cornersA, Size( win_size, win_size ), Size( -1, -1 ),
                  TermCriteria( CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.03 ) );
    cornerSubPix( imgB, cornersB, Size( win_size, win_size ), Size( -1, -1 ),
                  TermCriteria( CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.03 ) );

[...]

    // Track the refined corners from frame A to frame B with pyramidal Lucas-Kanade
    calcOpticalFlowPyrLK( imgA, imgB, cornersA, cornersB, features_found, feature_errors,
                          Size( win_size, win_size ), 5,
                          cvTermCriteria( CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.1 ), 0 );

You are lucky because your scene does not have big changes in lighting (so PyrLK will be accurate enough) and its structure does not change much (because it is a short sequence). This means you can get a frame-to-frame motion vector for the central part of the scene (where the car is) by removing outliers and averaging the remaining flow vectors; a sketch of this step follows below. Note that this approach will not work if the car is coming toward you...
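A minimal sketch of that outlier-rejection-and-averaging step, assuming the tracked point lists cornersA/cornersB and the features_found flags come from the calls above; the central ROI, the one-pixel rejection radius around the median, and the helper name estimateShift are illustrative choices, not part of the answer:

    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <vector>

    // Estimate one global sub-pixel shift for the car region: collect the flow of
    // features found inside a central ROI, reject outliers relative to the median
    // flow, and average the rest. The ROI and the 1-pixel rejection radius are
    // illustrative assumptions.
    cv::Point2f estimateShift( const std::vector<cv::Point2f>& cornersA,
                               const std::vector<cv::Point2f>& cornersB,
                               const std::vector<uchar>& features_found,
                               const cv::Rect2f& centralROI )
    {
        std::vector<cv::Point2f> flows;
        for( size_t i = 0; i < cornersA.size(); ++i )
            if( features_found[i] && centralROI.contains( cornersA[i] ) )
                flows.push_back( cornersB[i] - cornersA[i] );
        if( flows.empty() )
            return cv::Point2f( 0.f, 0.f );

        // Median flow components as a robust reference
        std::vector<float> xs, ys;
        for( const cv::Point2f& f : flows ) { xs.push_back( f.x ); ys.push_back( f.y ); }
        std::nth_element( xs.begin(), xs.begin() + xs.size() / 2, xs.end() );
        std::nth_element( ys.begin(), ys.begin() + ys.size() / 2, ys.end() );
        const cv::Point2f median( xs[xs.size() / 2], ys[ys.size() / 2] );

        // Average only the flows within one pixel of the median
        cv::Point2f sum( 0.f, 0.f );
        int kept = 0;
        for( const cv::Point2f& f : flows )
        {
            const float dx = f.x - median.x, dy = f.y - median.y;
            if( dx * dx + dy * dy < 1.f ) { sum += f; ++kept; }
        }
        return kept > 0 ? sum * ( 1.f / kept ) : median;
    }

The returned cv::Point2f is the per-frame sub-pixel shift that the super-resolution step below would consume.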

The simplest super-resolution algorithm then consists of mapping each frame, using its individual offset, onto a higher-resolution grid (for example, 2x the width and 2x the height) and averaging the results; a sketch of this also follows below. This will deal with the noise and give you a very good idea of how well your assumptions hold. You will have to test this against your sequence database (since you have a database of sequences to check against, right?). If the approach is satisfactory, you can then pick a sub-algorithm from the literature to remove the point spread function, which is essentially a deblurring/sharpening filtering step.
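A rough sketch of that shift-and-add averaging, assuming 8-bit grayscale frames and per-frame shifts measured relative to the first frame; the helper name shiftAndAdd, the 2x factor, and the use of resize/warpAffine to apply the fractional shift are my own choices, and the sign of the applied shift depends on how the motion was measured:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Naive shift-and-add super-resolution: upsample every frame onto a 2x grid,
    // compensate its estimated sub-pixel shift relative to the reference frame,
    // and average the stack. All names here are illustrative.
    cv::Mat shiftAndAdd( const std::vector<cv::Mat>& frames,      // 8-bit grayscale frames
                         const std::vector<cv::Point2f>& shifts ) // shift of frame i w.r.t. frames[0]
    {
        const int scale = 2;
        const cv::Size hiSize( frames[0].cols * scale, frames[0].rows * scale );
        cv::Mat acc = cv::Mat::zeros( hiSize, CV_32F );

        for( size_t i = 0; i < frames.size(); ++i )
        {
            cv::Mat up, upF;
            cv::resize( frames[i], up, hiSize, 0, 0, cv::INTER_CUBIC ); // map onto the fine grid
            up.convertTo( upF, CV_32F );

            // Undo the frame's motion; the shift scales with the grid. The sign
            // depends on the convention used when estimating the shift.
            cv::Mat M = ( cv::Mat_<double>( 2, 3 ) << 1, 0, -scale * shifts[i].x,
                                                      0, 1, -scale * shifts[i].y );
            cv::Mat warped;
            cv::warpAffine( upF, warped, M, hiSize, cv::INTER_LINEAR );
            acc += warped;
        }

        acc /= static_cast<double>( frames.size() ); // plain averaging suppresses noise
        cv::Mat result;
        acc.convertTo( result, CV_8U );
        return result;
    }

If the averaged result looks noticeably cleaner than a single upsampled frame, the registration is probably accurate enough to justify the deblurring step mentioned above.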
