Incidentally, a shift between images that exceeds one pixel does not hurt subpixel accuracy: the image can shift, say, 3.3 pixels to the right, and the fractional part of the offset can still be recovered.
First you need a subpixel-accurate estimate of the offset between frames, something along the lines of:
    cornerSubPix( imgA, cornersA, Size( win_size, win_size ), Size( -1, -1 ),
                  TermCriteria( CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.03 ) );
    cornerSubPix( imgB, cornersB, Size( win_size, win_size ), Size( -1, -1 ),
                  TermCriteria( CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.03 ) );
[...]
    calcOpticalFlowPyrLK( imgA, imgB, cornersA, cornersB, features_found, feature_errors,
                          Size( win_size, win_size ), 5,
                          TermCriteria( CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.1 ), 0 );
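For completeness, here is a minimal sketch of the surrounding setup. The file names, the win_size value, and the use of goodFeaturesToTrack are my assumptions; only the cornerSubPix / calcOpticalFlowPyrLK calls come from the snippets above (TermCriteria::COUNT | TermCriteria::EPS is the C++ spelling of CV_TERMCRIT_ITER | CV_TERMCRIT_EPS):

    #include <opencv2/opencv.hpp>
    #include <vector>
    using namespace cv;

    int main()
    {
        // Assumed file names; any two consecutive frames of the sequence will do.
        Mat imgA = imread("frame0.png", IMREAD_GRAYSCALE);
        Mat imgB = imread("frame1.png", IMREAD_GRAYSCALE);

        const int win_size = 10;  // assumed window size
        std::vector<Point2f> cornersA, cornersB;

        // Pick strong corners to track in the first frame.
        goodFeaturesToTrack( imgA, cornersA, 500, 0.01, 5.0 );

        // Refine them to subpixel accuracy, as above.
        cornerSubPix( imgA, cornersA, Size( win_size, win_size ), Size( -1, -1 ),
                      TermCriteria( TermCriteria::COUNT | TermCriteria::EPS, 20, 0.03 ) );

        // Track the refined corners into the second frame.
        std::vector<uchar> features_found;
        std::vector<float> feature_errors;
        calcOpticalFlowPyrLK( imgA, imgB, cornersA, cornersB,
                              features_found, feature_errors,
                              Size( win_size, win_size ), 5,
                              TermCriteria( TermCriteria::COUNT | TermCriteria::EPS, 20, 0.1 ) );
        return 0;
    }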
You are lucky because your scene does not have big changes in lighting (so pyramidal Lucas-Kanade will be accurate enough) and its structure will not change much (because it is a short sequence). This means you can extract a single dominant motion vector between frames from the central part of the scene (where the car is) by removing outliers and averaging the remaining flow vectors. Note that this approach will not work if the car is approaching you ...
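A minimal sketch of that outlier-rejection-and-averaging step (the median-based outlier test and the 1-pixel tolerance are my choices, not something prescribed above):

    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Estimate one global motion vector from the features tracked above.
    cv::Point2f estimateShift( const std::vector<cv::Point2f>& cornersA,
                               const std::vector<cv::Point2f>& cornersB,
                               const std::vector<uchar>& features_found )
    {
        // Collect the displacement of every successfully tracked feature.
        std::vector<cv::Point2f> d;
        for ( size_t i = 0; i < cornersA.size(); ++i )
            if ( features_found[i] )
                d.push_back( cornersB[i] - cornersA[i] );
        if ( d.empty() )
            return cv::Point2f( 0.f, 0.f );

        // Median displacement as a robust reference.
        std::vector<float> xs, ys;
        for ( const auto& v : d ) { xs.push_back( v.x ); ys.push_back( v.y ); }
        std::nth_element( xs.begin(), xs.begin() + xs.size() / 2, xs.end() );
        std::nth_element( ys.begin(), ys.begin() + ys.size() / 2, ys.end() );
        cv::Point2f med( xs[xs.size() / 2], ys[ys.size() / 2] );

        // Discard outliers (further than 1 px from the median), average the rest.
        cv::Point2f sum( 0.f, 0.f );
        int n = 0;
        for ( const auto& v : d )
            if ( std::hypot( v.x - med.x, v.y - med.y ) < 1.0f ) { sum += v; ++n; }
        return n > 0 ? sum * ( 1.0f / n ) : med;
    }

In practice you would call this only on features inside a region of interest around the car, so the static background does not dilute the estimate.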
Thus, the simplest super-resolution algorithm maps each frame, using its individual offset, onto a finer grid (for example, 2x the width and 2x the height) and averages the results. The averaging suppresses the noise and gives you a very good impression of how well your assumptions hold. You should test this against your sequence database (you do have a database of sequences to check against, right?). If the result is satisfactory, you can then pick a deblurring algorithm from the literature to remove the point spread function, which essentially comes down to a deconvolution (filtering) step.
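As a sketch of that mapping-and-averaging step (the function name, the per-frame shifts relative to the first frame, the grayscale frames, and the bilinear warp are all assumptions of mine):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Warp every frame onto a 2x grid using its estimated subpixel shift,
    // then average. shifts[i] is frame i's offset relative to frame 0,
    // e.g. the output of estimateShift above; frames are assumed grayscale.
    cv::Mat superResolve( const std::vector<cv::Mat>& frames,
                          const std::vector<cv::Point2f>& shifts )
    {
        const int W = frames[0].cols, H = frames[0].rows;
        cv::Mat acc = cv::Mat::zeros( 2 * H, 2 * W, CV_32F );

        for ( size_t i = 0; i < frames.size(); ++i )
        {
            // Affine map: scale by 2 and undo the (doubled) inter-frame shift.
            cv::Mat M = ( cv::Mat_<double>( 2, 3 ) <<
                          2, 0, -2.0 * shifts[i].x,
                          0, 2, -2.0 * shifts[i].y );
            cv::Mat warped, warped32;
            cv::warpAffine( frames[i], warped, M, acc.size(),
                            cv::INTER_LINEAR, cv::BORDER_REPLICATE );
            warped.convertTo( warped32, CV_32F );
            acc += warped32;  // accumulate; averaging below suppresses noise
        }

        // Average and convert back to the input depth.
        cv::Mat result;
        acc.convertTo( result, frames[0].type(), 1.0 / frames.size() );
        return result;
    }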