I am working on a project where I need to implement collision avoidance using OpenCV on iOS (iOS 5 and above will do).
Project goal: the idea is to mount an iPad on the car dashboard and launch the application. The application must capture frames from the camera and process them to determine whether the car is about to collide with an obstacle.
I am new to image processing of any kind, so I am focusing on the conceptual level of this project.
What I have done so far:
- I looked at OpenCV and read about it online. It seems collision avoidance is implemented using the pyramidal Lucas-Kanade method. Is that correct?
- I used this project as a starting point: http://aptogo.co.uk/2011/09/opencv-framework-for-ios/ It runs successfully on my iPad and the capture function works, which also means the camera capture is well integrated. I changed the processFrame implementation to try optical flow instead of Canny edge detection. Here is the function (still incomplete):
    -(void)processFrame {
        int currSliderVal = self.lowSlider.value;
        if (_prevSliderVal == currSliderVal) return;

        cv::Mat grayFramePrev, grayFrameLast, prevCorners, lastCorners, status, err;

        // Convert captured frame to grayscale for _prevFrame
        cv::cvtColor(_prevFrame, grayFramePrev, cv::COLOR_RGB2GRAY);
        cv::goodFeaturesToTrack(grayFramePrev, prevCorners, 500, 0.01, 10);

        // Convert captured frame to grayscale for _lastFrame
        cv::cvtColor(_lastFrame, grayFrameLast, cv::COLOR_RGB2GRAY);
        cv::goodFeaturesToTrack(grayFrameLast, lastCorners, 500, 0.01, 10);

        cv::calcOpticalFlowPyrLK(_prevFrame, _lastFrame, prevCorners, lastCorners, status, err);

        self.imageView.image = [UIImage imageWithCVMat:lastCorners];

        _prevSliderVal = self.lowSlider.value;
    }
- I read about optical flow and how it is used (conceptually) to detect an impending collision. In summary: if an object grows in size but moves toward an edge of the frame, it is not on a collision path; if it grows in size but does not move toward any edge, it is on a collision path. Is that correct?
- This project (http://se.cs.ait.ac.th/cvwiki/opencv:tutorial:optical_flow) seems to do exactly what I want to achieve, but I could not understand how it works from reading the code, and I cannot run it because I do not have a Linux box. From the explanation on that page, it looks like it computes a homography matrix. How is that result used in collision avoidance?
In addition to the above four points, I have read a lot more on this topic, but I still cannot put all the pieces together.
Here are my questions (please remember that I am new to this):
1. HOW is optical flow used to detect an impending collision? That is, assuming I can get correct results from cv::calcOpticalFlowPyrLK(), how do I go from there to detecting an impending collision with an object in the frame? Is it possible to measure the distance to the object we are most likely to collide with?
2. Is there a working sample project implementing this, or similar, functionality that I can look at? I looked at the project on eosgarden.com, but the functionality did not seem to be implemented there.
3. In the code above, I convert lastCorners to a UIImage and display it on screen. This shows only colored horizontal lines, nothing like the original test image. Is that the correct output of this function?
4. I find it difficult to understand the data types used in this project. The OpenCV APIs accept InputArray, OutputArray, etc., yet in the original processFrame a cv::Mat was passed to the Canny edge-detection function. Do I also pass cv::Mat to calcOpticalFlowPyrLK() for prevImage and nextImage?
Thank you in advance:)
Update: I found this sample project (http://www.hatzlaha.co.il/150842/Lucas-Kanade-Detection-for-the-iPhone). It does not compile on my Mac, but I think it will give me working optical flow code. Still, I cannot understand how tracking those points lets me detect an impending collision. If any of you could answer even question no. 1, it would be very helpful.
Update: It seems optical flow is used to compute the FoE (Focus of Expansion); there may be several FoE candidates. From the FoE, the TTC (Time To Collision) is then derived. I do not fully understand that second part, but am I on the right track so far? Does OpenCV have built-in support for FoE and/or TTC?