OpenGL texture data and glTexImage2D

I am trying to read floating-point numbers from a CSV file that contains a pre-computed texture, store them in a one-dimensional array, and then upload this data into a two-dimensional texture. I need to make sure the following code does this correctly, because I have problems accessing the data and I cannot figure out where the error is:

    // Allocate memory
    float *image = new float[width * height * 3];
    for (int i = 0; i < height; i++)
    {
        for (int j = 0; j < width - 1; j++)
        {
            fscanf(fDataFile, "%f,", &fData);
            image[4 * i * j + 0] = fData;
            image[4 * i * j + 1] = fData;
            image[4 * i * j + 2] = fData;
        }
        fscanf(fDataFile, "%f", &fData);
        image[4 * i * width - 1 + 0] = fData;
        image[4 * i * width - 1 + 1] = fData;
        image[4 * i * width - 1 + 2] = fData;
    }

There shouldn't be a problem here, but the following worries me:

    // create the texture
    glGenTextures(1, &texHandle);
    glBindTexture(GL_TEXTURE_2D, texHandle);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB, GL_FLOAT, &image[0]);

Is it possible to simply point glTexImage2D at my one-dimensional array?
The array size is width * height * 3, and the texture should be width * height with 3 channels, so the size should be fine, I think. However, my program does not work as expected, and this is one of the potential sources of error.
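
For reference, here is my understanding of what that call reads from client memory, sketched with the same width and height:

    // glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
    //              GL_RGB, GL_FLOAT, ptr) reads the array row by row,
    // three floats per texel; the first row in memory becomes the
    // t = 0 row of the texture.
    const size_t floatsNeeded = (size_t)width * height * 3; // 3 channels per texel
    const size_t bytesPerRow  = (size_t)width * 3 * sizeof(float);
    // bytesPerRow is always a multiple of 4 for float data, so the
    // default GL_UNPACK_ALIGNMENT of 4 is already satisfied and no
    // glPixelStorei call is needed.
    printf("uploading %zu floats, %zu bytes per row\n", floatsNeeded, bytesPerRow);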

2 answers

Hi guys, I figured out that it was the reading that was corrupting my textures ... I don't know what got into me, but my array indexing was pure nonsense. Here is the corrected code; I found the mistake when I tried to write out a test texture:

    // Allocate memory
    float *image = new float[width * height * 3];
    for (int i = 0; i < height; i++)
    {
        for (int j = 0; j < width - 1; j++)
        {
            fscanf(fDataFile, "%f,", &fData);
            image[3 * (i * width + j) + 0] = fData;
            image[3 * (i * width + j) + 1] = fData;
            image[3 * (i * width + j) + 2] = fData;
        }
        fscanf(fDataFile, "%f", &fData);
        image[3 * (i * width + width - 1) + 0] = fData;
        image[3 * (i * width + width - 1) + 1] = fData;
        image[3 * (i * width + width - 1) + 2] = fData;
    }
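
As an aside, the special case for the last column is not actually needed: if the trailing comma is missing, the literal ',' in the format simply fails to match, but the %f has already been assigned and fscanf still returns 1. So the read can be sketched as a single loop:

    // Same indexing, one loop; fscanf returns 1 as long as a float was
    // parsed, whether or not a comma follows it.
    for (int i = 0; i < height; i++)
        for (int j = 0; j < width; j++)
        {
            if (fscanf(fDataFile, "%f,", &fData) != 1)
                break; // malformed or truncated file
            float *texel = &image[3 * (i * width + j)];
            texel[0] = texel[1] = texel[2] = fData;
        }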

In addition, it now works regardless of the internal format: GL_RGB, GL_RGBA, GL_RGB32F, and GL_RGBA32F all work fine without changing the way I read my texture.
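
A quick way to verify an upload like this, assuming the texture is still bound, is to read it straight back with glGetTexImage and compare:

    // Read the texture back and diff it against the uploaded array. With
    // GL_RGB32F the values come back bit-exact; with plain GL_RGB expect
    // clamping to [0, 1] and 8-bit quantization instead.
    float *check = new float[width * height * 3];
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_FLOAT, check);
    for (int k = 0; k < width * height * 3; k++)
        if (fabsf(check[k] - image[k]) > 1e-6f)
            fprintf(stderr, "component %d: got %f, expected %f\n",
                    k, check[k], image[k]);
    delete[] check;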

Thanks everyone!

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB, GL_FLOAT, &image[0]);

If you want the texture to actually store floating-point values, you must use a floating-point internal format such as GL_RGB32F. The internal format is the third parameter; with a plain GL_RGB internal format the driver converts your floats to a normalized fixed-point format.
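
For example, keeping everything else from the call above (note that GL_RGB32F requires OpenGL 3.0 or the ARB_texture_float extension):

    // Store 32-bit floats per channel instead of letting the driver
    // convert the data to a normalized 8-bit format:
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, width, height, 0,
                 GL_RGB, GL_FLOAT, &image[0]);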

