GPU programming for image processing

I am working on a project to control a humanoid robot. Unfortunately, we have a very limited set of hardware resources (an RB110 board and its mini-PCI card). I plan to offload image processing tasks from the CPU to the GPU, but I have never done this before. I was recommended OpenCV, but that seems impossible because our graphics processor (Volari Z9s) is not supported by the framework. Then I found an interesting post in the Linux Journal in which the author used OpenGL to process frames captured from a v4l device.

I am a bit confused about the relationship between the hardware API and OpenGL / OpenCV. To use a graphics processor, does it need to support one of these graphics software platforms (OpenGL / OpenCV)? Where can I find such an API?

I have dug up a lot of information about my hardware; unfortunately, the vendor (XGI Technology) seems to have gone out of business.

2 answers

To use a graphics processor, does it need to support one of these graphics software platforms (OpenGL / OpenCV)? Where can I find such an API?

OpenCL and OpenGL are both translated into hardware instructions by the GPU driver, so you need a driver for your operating system that supports these frameworks. Most GPU drivers support some version of OpenGL, so that should work.

The OpenGL standard is maintained by the Khronos Group, and you can find tutorials such as the NeHe guides.

How OpenGL Works

OpenGL accepts triangles as input and draws them according to the state that is active when the draw call is issued. Most OpenGL functions exist to modify this state. Image processing can be performed by uploading the input image as a texture and drawing a few vertices with that texture active, which produces a new image (or, more generally, a new 2D data grid).

From version 2 onward (or with the right ARB extensions), the operations performed on the image can be controlled with GLSL programs called vertex and fragment shaders (there are more shader types, but these are the oldest). The vertex shader is called once per vertex; its results are interpolated and passed on to the fragment shader. The fragment shader is called once for every result (pixel) written to the output.
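To make that pipeline concrete, here is a minimal CPU-side sketch of the model in Python. All names here (`vertex_shader`, `fragment_shader`, `render`, `lerp2`) are invented for illustration and are not the OpenGL API; on real hardware the rasterizer performs the interpolation and invokes the shaders in parallel.

```python
def lerp2(a, b, t):
    # Linear interpolation between two 2D points.
    return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

def vertex_shader(position):
    # Called once per vertex: pass the position through and derive a
    # texture coordinate in [0, 1] from the clip-space position in [-1, 1].
    x, y = position
    return {"pos": (x, y), "texcoord": ((x + 1) / 2, (y + 1) / 2)}

def fragment_shader(texcoord, texture):
    # Called once per output pixel: sample the input texture and invert
    # the value (a trivial image-processing operation).
    h, w = len(texture), len(texture[0])
    tx = min(int(texcoord[0] * w), w - 1)
    ty = min(int(texcoord[1] * h), h - 1)
    return 255 - texture[ty][tx]

def render(texture, width, height):
    # Run the vertex shader on the four corners of a full-screen quad,
    # then let the "rasterizer" (the loops below) interpolate their
    # texcoords and call the fragment shader once per output pixel.
    quad = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
    t00, t10, t01, t11 = [vertex_shader(p)["texcoord"] for p in quad]
    out = []
    for y in range(height):
        fy = y / (height - 1)
        row = []
        for x in range(width):
            fx = x / (width - 1)
            texcoord = lerp2(lerp2(t00, t10, fx), lerp2(t01, t11, fx), fy)
            row.append(fragment_shader(texcoord, texture))
        out.append(row)
    return out

image = [[0, 128], [255, 64]]
print(render(image, 2, 2))  # → [[255, 127], [0, 191]] (each value inverted)
```

On the GPU, every iteration of the inner loop would run as an independent fragment-shader invocation, which is where the parallel speedup comes from.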

So far this is only about reading and writing images; how can it be used to detect an object? Use the vertices to cover the entire viewport with the input texture. Instead of computing RGB colors and storing them as the result, you can write a fragment shader that computes grayscale or gradient images, and then check those textures per pixel: whether the pixel lies at the center of a circle of a specific size, is part of a line, or simply has a high gradient compared to its surroundings (a good feature), or really anything for which you can find a good parallel algorithm. (I have not done this myself.)
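As an illustration of what such shaders could compute, here is a CPU sketch in Python of a grayscale pass followed by a per-pixel gradient check. Real code would be GLSL fragment shaders running over textures; the function names and the threshold value here are made up for the example.

```python
def luminance(rgb):
    # Standard Rec. 601 grayscale weights.
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def gradient_magnitude(gray, x, y):
    # Central differences, clamped at the image border.
    h, w = len(gray), len(gray[0])
    gx = gray[y][min(x + 1, w - 1)] - gray[y][max(x - 1, 0)]
    gy = gray[min(y + 1, h - 1)][x] - gray[max(y - 1, 0)][x]
    return (gx * gx + gy * gy) ** 0.5

def detect_features(image, threshold):
    # Pass 1: grayscale. Pass 2: mark pixels whose gradient is high
    # compared to their surroundings (a crude edge/feature detector).
    gray = [[luminance(px) for px in row] for row in image]
    h, w = len(gray), len(gray[0])
    return [[gradient_magnitude(gray, x, y) > threshold
             for x in range(w)] for y in range(h)]

# A tiny image: black left half, white right half -> edge down the middle.
img = [[(0, 0, 0), (0, 0, 0), (255, 255, 255), (255, 255, 255)]] * 3
print(detect_features(img, 100))  # → three rows of [False, True, True, False]
```

On the GPU each pass would be one draw call writing to a texture, with the second pass reading the first pass's output.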

The final result has to be read back to the CPU (sometimes you can use shaders to reduce the data before that). OpenCL makes the whole process feel less graphics-oriented and gives you much more freedom, but it has less driver support.


First of all, you need shader support (GLSL or assembly-level shader programs).

The usual way is to render a full-screen quad textured with your image while a fragment shader is active; this is called post-processing. It is limited by the instruction set and other restrictions of your hardware. At a basic level it lets you apply a simple function to a large data array in parallel, producing another data array. But branching (where supported at all) is the first performance killer, since a GPU consists of a number of SIMD blocks.
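To illustrate the "one function over a large array" model and the branch-avoidance point, here is a Python sketch in which `step` and `mix` imitate the GLSL built-ins of the same name, so the per-pixel function needs no explicit `if`. All other names are illustrative, not a real API.

```python
def step(edge, x):
    # GLSL-style step(): 0.0 below the edge, 1.0 at or above it.
    return 1.0 if x >= edge else 0.0

def mix(a, b, t):
    # GLSL-style mix(): linear blend between a and b.
    return a * (1.0 - t) + b * t

def threshold_pixel(value):
    # Branchless threshold: dark pixels -> 0.0, bright pixels -> 255.0.
    # In GLSL this select would compile to cheap non-divergent math.
    return mix(0.0, 255.0, step(128.0, value))

def post_process(image, kernel):
    # The "full-screen quad" pass: apply one function to every pixel
    # independently; on the GPU these calls run in parallel.
    return [[kernel(v) for v in row] for row in image]

print(post_process([[10.0, 200.0], [127.0, 128.0]], threshold_pixel))
# → [[0.0, 255.0], [0.0, 255.0]]
```

Written this way, every SIMD lane executes the same instructions, which is what keeps the GPU's parallel units fully busy.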

