I am implementing an algorithm that requires two video frames: one from time t and the other from time t + 1. After seeing a few examples, this seemed very simple. I thought the following would work just fine:
VideoCapture cap(0);
Mat img1, img2;
while (true) {
    cap >> img1;                  // instant t
    cap >> img2;                  // instant t+1
    imshow("diff", img1 == img2); // imshow needs a window name
    waitKey(30);
}
But that is not what happened: the two images were identical. The displayed image (img1 == img2) was completely white, meaning every pixel of the comparison mask was 255.
I thought that perhaps I had not given the camera enough time to capture the second frame, so it was returning the same frame from the buffer. What I did was simple:
VideoCapture cap(0);
Mat img1, img2;
while (true) {
    cap >> img1;   // instant t
    waitKey(2500); // 2.5 seconds between the acquisition of each frame
    cap >> img2;   // instant t+1
    waitKey(2500);
    imshow("diff", img1 == img2);
}
It still didn't work. To be sure, I added the following lines of code:
VideoCapture cap(0);
Mat img1, img2;
while (true) {
    cap >> img1;   // instant t
    imshow("img1", img1);
    waitKey(2500); // 2.5 seconds between the acquisition of each frame
    cap >> img2;   // instant t+1
    // Display both images after img2 is captured
    imshow("img2", img2);
    imshow("img1", img1);
    waitKey(2500);
    imshow("diff", img1 == img2);
}
When I displayed both windows again after capturing img2, both images changed! I also tried using a separate VideoCapture object for each image, but that had no effect ...
Can anyone advise me what I'm doing wrong?
Thanks,
Renan