Invalid OpenGL texture format for CL / GL-Interop?

I am trying to use OpenCL-OpenGL interop for textures on my GeForce 330M using the CUDA Toolkit 4.0.

I want to capture a frame and use that data as an input image (Image2D) to an OpenCL kernel. The kernel should manipulate the data and write it to an Image2DGL, which is an image object with an OpenGL texture attached. It basically looks like this:

 _______________      RGB        _______________
|               |    uint8*     |               |   CL_RGBA / CL_UNORM_INT8
|   Grabber     | ------------> |   Image2D     | -------------------------.
|   avcodec     |               |   [input]     |                          |
|_______________|               |_______________|                          |
                                                                           |    
                                                                           V
 _______________                 _______________                       _______________
|               |               |               |                     |               |
|   Texture     | ------------> |   Image2DGL   | <-----------------> |    Kernel     |
|_______________|               |   [output]    |                     |_______________|
                                |_______________|
Internal Format: GL_RGBA
Format:          GL_RGBA
Type:            ?

I initialize this texture:

GLuint tex = 0;

void initTexture( int width, int height )
{
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_RECTANGLE, tex);
    // this is where I need assistance: which type parameter to use for the texture (GL_FLOAT?)
    glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_FLOAT, NULL );
} 


EDIT: The type could also be GL_UNSIGNED_INT.

Then I create the shared image (Image2DGL):

texMems.push_back(Image2DGL(clw->context, CL_MEM_READ_WRITE, GL_TEXTURE_RECTANGLE, 0, tex, &err));
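
One way to check how OpenCL actually interprets the attached texture is to query the image format it derived, right after creating the Image2DGL. A small sketch (the hex values in the comment are the constants from cl.h, for comparing against the printed output):

// Sketch: check which channel order / data type the CL runtime derived from
// the attached GL texture, and compare it with what the kernel expects
// (CL_RGBA is 0x10B5, CL_UNORM_INT8 is 0x10D2, CL_FLOAT is 0x10DE).
cl_image_format fmt;
clGetImageInfo(texMems[0](), CL_IMAGE_FORMAT, sizeof(fmt), &fmt, NULL);
printf("channel order: 0x%x, channel data type: 0x%x\n",
       fmt.image_channel_order, fmt.image_channel_data_type);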

Then I create the source image (the input image):

ImageFormat format;
format.image_channel_data_type = CL_UNORM_INT8;
format.image_channel_order = CL_RGBA;
srcImgBuffer = Image2D(clw->context, CL_MEM_READ_WRITE, format, width, height, 0, NULL, &err);

In each rendering cycle, I write data to srcImgBuffer:

// write the frame to the image buffer
clw->queue.enqueueWriteImage(srcImgBuffer, CL_TRUE, origin, region, 0, 0, (void*)data, NULL, NULL);
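
The origin and region arguments are not shown above; for a full-frame write they are typically set up like this (a sketch assuming the cl.hpp wrapper used in the other snippets):

// Sketch: full-frame origin/region for enqueueWriteImage (cl.hpp wrapper).
// A row pitch of 0 means "tightly packed", i.e. width * 4 bytes per row for a
// CL_RGBA / CL_UNORM_INT8 image -- so 'data' must really hold 4 bytes per pixel.
cl::size_t<3> origin;
origin[0] = 0; origin[1] = 0; origin[2] = 0;

cl::size_t<3> region;
region[0] = width; region[1] = height; region[2] = 1;   // depth must be 1 for a 2D image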

I also set the arguments for the kernel:

tex_kernel.setArg(0, texMems[0]);
tex_kernel.setArg(1, srcImgBuffer);
tex_kernel.setArg(2, width);
tex_kernel.setArg(3, height);

Before and after the kernel call I acquire and release the GL objects (see the sketch after the kernel below). The test kernel looks like this:

__kernel void init_texture_kernel(__write_only image2d_t out, __read_only image2d_t in, int w, int h)
{
    const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP | CLK_FILTER_NEAREST;

    int2 coords = { get_global_id(0), get_global_id(1) };
    float4 pixel = read_imagef(in, smp, coords);
    float4 test = { (float)coords.x/(float)w , 0, 0, 1};
    write_imagef( out, coords, pixel );
}
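
For completeness, the host-side acquire/run/release sequence around this kernel looks roughly like the following. This is only a sketch using the names from above; it assumes texMems is declared as a std::vector<cl::Memory>, which is what the GL-object calls of the cl.hpp wrapper expect:

// Sketch of the per-frame host sequence. glFinish()/finish() are the simple
// way to synchronize the two APIs before handing the texture back and forth.
glFinish();                                        // GL must be done with the texture
clw->queue.enqueueAcquireGLObjects(&texMems, NULL, NULL);

clw->queue.enqueueNDRangeKernel(tex_kernel, cl::NullRange,
                                cl::NDRange(width, height), cl::NullRange);

clw->queue.enqueueReleaseGLObjects(&texMems, NULL, NULL);
clw->queue.finish();                               // CL must be done before GL renders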

The image_channel_data_type CL_UNORM_INT8 can be read as a float in the kernel and is interpreted as a normalized value. But the output image does not look right: I get a sliced picture (line-wise), obviously due to a misinterpretation of the data. As mentioned, I assume the error lies in the type I use to initialize the texture. I tried GL_FLOAT (since I write floats to the image in the kernel).

Results: image as PPM dump from the decoder (left), scattered output texture (right)

The left one is the PPM dumped straight from the decoder; the right one is what I get back on the output texture.

If anyone has read this far: do you have any suggestions regarding the texture type to resolve this issue?


EDIT: If I upload the grabbed frames to the texture directly, the video plays fine. So it must be related to the CL/GL interop somehow.
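
For reference, the direct upload used for that comparison is just the plain GL path, roughly as follows. This is a sketch; the GL_RGB / GL_UNSIGNED_BYTE combination is an assumption about how the grabbed buffer is packed:

// Sketch: uploading the grabbed frame straight into the texture, bypassing CL.
glBindTexture(GL_TEXTURE_RECTANGLE, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);             // grabbed rows are tightly packed
glTexSubImage2D(GL_TEXTURE_RECTANGLE, 0, 0, 0, width, height,
                GL_RGB, GL_UNSIGNED_BYTE, data);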

I eventually came to the solution, after completely overlooking the fact that I was grabbing RGB frames. The data I uploaded was packed RGB, but I treated it as RGBA. Reading it as float4 (i.e. four channels per pixel) in the kernel therefore shifted every pixel, which explains the sliced output. Argh.

A good hint was the ratio of the widths: 384 to 512 is exactly .75, i.e. 3 bytes versus 4 bytes per pixel.

I now let the grabber (via libavcodec) convert straight to RGBA, and everything works as expected.
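
The conversion itself is not shown; with libavcodec it is usually done through libswscale, roughly like this. This is only a sketch under the assumption that the decoder delivers PIX_FMT_RGB24 frames in an AVFrame named srcFrame; rgbaData and the other names are illustrative:

// Sketch: convert a decoded RGB24 frame to RGBA with libswscale before it is
// written into the CL_RGBA / CL_UNORM_INT8 image.
extern "C" {
#include <libswscale/swscale.h>
}
#include <stdint.h>
#include <vector>

std::vector<uint8_t> rgbaData(width * height * 4);       // 4 bytes per pixel

SwsContext *sws = sws_getContext(width, height, PIX_FMT_RGB24,
                                 width, height, PIX_FMT_RGBA,
                                 SWS_POINT, NULL, NULL, NULL);

uint8_t *dstPlanes[4]  = { &rgbaData[0], NULL, NULL, NULL };
int      dstStrides[4] = { width * 4, 0, 0, 0 };

sws_scale(sws, srcFrame->data, srcFrame->linesize,
          0, height, dstPlanes, dstStrides);
// &rgbaData[0] is now a tightly packed RGBA buffer for enqueueWriteImage.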


Since you pass NULL as the data pointer to glTexImage2D, the format and type parameters only describe the host data you would be uploading, so the OpenGL driver is free to pick whatever internal representation it likes for an unsized GL_RGBA. For CL/GL interop you should pin it down with a sized internal_format:

glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
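
Per the cl_khr_gl_sharing mapping, GL_RGBA32F corresponds to CL_RGBA / CL_FLOAT on the OpenCL side, which still matches the write_imagef call in the kernel. If the data is meant to stay 8-bit normalized instead, a sized 8-bit format is the analogous option; a sketch, not taken from the answer above:

// Alternative sketch: a sized 8-bit normalized internal format, which maps to
// CL_RGBA / CL_UNORM_INT8 on the OpenCL side and is also written with write_imagef.
glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);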
