Why do C++ libraries often define their own primitive types?

I started using the OpenCL library recently, and I noticed that they use their own integer types like cl_int and cl_uint instead of int and unsigned int.

Why? Why don't they use the language's built-in types? Is this just considered good practice, or is there a practical reason for it (e.g., more readable code)?

+5
3 answers

The reason this has been done in the past is portability. C and C++ do not guarantee exact sizes for int, long, and short, while library developers often need fixed-width types.

A common solution is to define your own aliases for the data types and vary the definitions per platform, making sure each alias always refers to a type of the appropriate size.
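A minimal sketch of that workaround might look like the following. The alias name my_int32 and the macro checks are illustrative, not taken from any particular library:

```cpp
// Pre-<cstdint> style: pick a per-platform alias so that my_int32
// is always exactly 32 bits. (Names and conditions are illustrative.)
#if defined(_MSC_VER)
typedef __int32 my_int32;   // MSVC provides sized built-in types
#else
typedef int my_int32;       // int is 32 bits on common ILP32/LP64 targets
#endif

// A compile-time check catches any platform where the guess is wrong,
// failing the build instead of silently overflowing at run time.
static_assert(sizeof(my_int32) == 4, "my_int32 must be 32 bits");
```

Every new platform port then only has to touch this one header, not every file that uses the type.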

This problem arose in C and was addressed by adding the stdint.h header (available as <cstdint> in C++). Including this header lets you use types such as int32_t, int16_t, etc. However, libraries developed before stdint.h existed, and libraries that must compile on platforms that lack this header, use the old workaround.
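With <cstdint> the fixed-width aliases are standard, so no per-platform #ifdef ladder is needed:

```cpp
#include <cstdint>

// These widths are guaranteed by the standard on any platform
// that provides the exact-width types.
std::int32_t a = 100000;   // exactly 32 bits
std::uint16_t b = 65535;   // exactly 16 bits, maximum value

// Widening before multiplying avoids 32-bit overflow:
std::int64_t c = static_cast<std::int64_t>(a) * a;
```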

+8

Another common reason is that a single configuration choice can be made in one place and propagated throughout the code.

For example, I know that the Bullet physics library defines btScalar, which (simplifying a bit) is basically:

    #if defined(BT_USE_DOUBLE_PRECISION)
    typedef double btScalar;
    #else
    typedef float btScalar;
    #endif

Thus, throughout the code you can use btScalar instead of float or double, and every instance can be switched at once by defining or not defining that macro.
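A sketch of how code written against such an alias looks. The macro name USE_DOUBLE_PRECISION and the scalar alias are modeled on Bullet's scheme but are my own illustrative names:

```cpp
// Compile with -DUSE_DOUBLE_PRECISION to switch every scalar at once.
#if defined(USE_DOUBLE_PRECISION)
typedef double scalar;
#else
typedef float scalar;
#endif

// All math is written against the alias, never float/double directly,
// so the precision of the whole library changes with one build flag.
scalar lerp(scalar a, scalar b, scalar t) {
    return a + (b - a) * t;
}
```

The design point is that callers never spell out the concrete type, so there is nothing to grep-and-replace when the precision requirement changes.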

There are other analogous aliases that handle the widths of integer types, different character sets, and other platform-specific scenarios.

+3

By defining their own types, library authors can rely on those types always having the same size.

Built-in types can vary from platform to platform and from compiler to compiler. Although the standard library provides <cstdint>, some developers prefer their own definitions because they do not want to depend on it.

On most modern platforms you can assume int is 32 bits, but the standard only guarantees a minimum range, and that coincidence can break; this is exactly why some developers prefer to define their own reliably sized types.
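For developers who avoid the standard headers entirely, the size assumption can still be verified without <cstdint>, using only a pre-C++11 trick. The alias names s32 and u32 are illustrative:

```cpp
// Rolling your own sized types with no library headers at all.
typedef int s32;
typedef unsigned int u32;

// Pre-C++11 compile-time check: if int is not 4 bytes, the array
// gets a negative size and compilation fails on that platform.
typedef char assert_s32_is_4_bytes[sizeof(s32) == 4 ? 1 : -1];
```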

+3
