Why are there GLint and GLfloat?

I understand that OpenGL needs to work with numbers, but why not just use regular int and float, or pre-existing wrapper types (whatever the wider OpenGL ecosystem requires in order to interoperate well)? Is there a real difference beyond the name and the fact that one is used exclusively in OpenGL, or are they essentially the same type under a different name?

1 answer

Since int is (vastly simplifying here) 32 bits on a 32-bit system and 64 bits on a 64-bit system, even plain "int" is not a universal concept. Keep in mind that your graphics hardware is different from your CPU, hence the need for well-defined types. By using its own typedefs, OpenGL can guarantee that exactly the right number of bits, laid out correctly, is sent to your graphics card.
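For a concrete picture, here is roughly what those typedefs boil down to in a typical desktop gl.h. The exact underlying types vary per platform and per header; this is an illustrative sketch, not the canonical definitions:

    /* Illustrative sketch of typical desktop gl.h typedefs; the actual
       underlying C types are chosen per platform so the widths come
       out right. */
    typedef int            GLint;     /* guaranteed 32-bit signed   */
    typedef unsigned int   GLuint;    /* guaranteed 32-bit unsigned */
    typedef float          GLfloat;   /* 32-bit IEEE 754 float      */
    typedef short          GLshort;   /* guaranteed 16-bit signed   */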

You could handle this with conversion functions that abstract away the mess of "different ints", but that carries a performance penalty, which is usually unacceptable when you're talking about every single number that travels to and from the video card.
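As a sketch of what that rejected alternative would look like (the helper name here is invented for illustration, not a real GL function), every value crossing the CPU/GPU boundary would have to pass through something like:

    #include <stdint.h>

    /* Hypothetical conversion helper -- every value sent to the GPU
       would pay for this call, plus a width check on platforms where
       int is not 32 bits. */
    static inline int32_t to_gpu_int(int value) {
        return (int32_t)value;  /* may truncate if int is wider than 32 bits */
    }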

tl;dr: when you use "int" you are writing for your CPU. When you use "GLint" you are writing for your video card's hardware.
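This matters in practice because the real GL entry points are declared in terms of these typedefs, so using them keeps your code matched to what the driver expects. A small example (assumes a current GL context; the header path varies by platform):

    #include <GL/gl.h>

    /* glGetIntegerv is declared to take GLint*, so the driver always
       fills exactly 32-bit integers here, whatever plain int happens
       to be on this machine. */
    GLint viewport[4];
    glGetIntegerv(GL_VIEWPORT, viewport);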

EDIT: as pointed out in the comments, on a 64-bit processor int can be (and probably will be) 32 bits, for compatibility reasons. Historically, across 8-, 16-, and 32-bit hardware, int tracked the processor's native word size, but technically its size is whatever the compiler chooses when generating machine code. Thanks to @Nicol Bolas and @Mark Dickinson.
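You can see why the distinction looks redundant on a typical 64-bit desktop, where both lines below usually print 4; the point is that only GLint carries that guarantee everywhere (sketch assumes a Linux/macOS-style header path):

    #include <stdio.h>
    #include <GL/gl.h>

    int main(void) {
        /* On common 64-bit desktop ABIs both print 4, which is exactly
           why GLint looks redundant there; the difference is that only
           GLint is guaranteed to be 32 bits on every platform. */
        printf("sizeof(int)   = %zu\n", sizeof(int));
        printf("sizeof(GLint) = %zu\n", sizeof(GLint));
        return 0;
    }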

