I have been working with the GrabCut algorithm (as implemented in OpenCV) on the iPhone, and the performance is terrible. It takes about 10-15 seconds to run even in the simulator on an image of roughly 800x800 pixels. On my actual phone (an iPhone 4) it runs for several minutes before exhausting memory and crashing. I'm sure there is some optimization I could do if I wrote my own version of the algorithm in C, but I get the feeling no amount of optimization will get it anywhere close to usable. I've dug up performance measurements in some academic papers, and even they report runtimes of around 30 seconds on 1.8 GHz multi-core desktop processors.
So my only hope is the GPU, which I know nothing about. I have done some basic research on OpenGL ES so far, but it is a fairly deep topic, and I don't want to spend hours or days studying the fundamentals just to find out whether I'm even headed in the right direction.
So my question is twofold:
1) Can something like GrabCut be run on the GPU? If so, I'd like a starting point other than "learn OpenGL ES." Ideally, I'd like to know which concepts deserve particular attention. Keep in mind that I have no experience with OpenGL and very little experience with image processing.
2) Even if this type of algorithm can run on the GPU, what performance improvement can I expect? Given that the current runtime is about 30 seconds AT BEST on the CPU, it seems unlikely that the GPU would make a large enough dent in the runtime to make the algorithm useful.
EDIT: for the algorithm to be "useful," I think it would need to run in 10 seconds or less.
Thanks in advance.