Is `int` assumed to always be 32 bits in OpenCV?

It seems that in OpenCV the data type `int` is always assumed to be 32 bits. This is reflected in the documentation (for example, in the introduction), as well as in the source code (for example, in the comments in `modules/core/include/opencv2/core/cvdef.h`, and in the fact that it defines `uint` as a 32-bit unsigned integer but does not define the corresponding signed type).

How does this not break OpenCV on systems where `int` is not 32 bits? After all, the standard only guarantees that `int` is at least 16 bits.

I would expect OpenCV either to define data types for all the sizes it uses (as it does for `int64`), or to use `uint8_t` and friends, along the lines of the sketch below.
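
For illustration, a minimal sketch (not taken from OpenCV; the `my_*` names are made up) of how such a width assumption could be made explicit and checked at compile time with standard fixed-width types:

```cpp
// Hypothetical sketch: making the "int is 32 bits" assumption explicit.
// The my_* typedef names are illustrative, not OpenCV's.
#include <cstdint>
#include <climits>

static_assert(sizeof(int) * CHAR_BIT == 32,
              "This code assumes a 32-bit int; use fixed-width types otherwise");

typedef std::int32_t  my_int32;   // exactly 32 bits, signed
typedef std::uint32_t my_uint32;  // exactly 32 bits, unsigned
typedef std::int64_t  my_int64;   // analogous to OpenCV's int64
```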

1 answer

> How does this not break OpenCV on systems where `int` is not 32 bits?

It probably does break. You would have to try building it on such a system to be sure. Then again, good luck finding such a system that also has enough memory and CPU power to do meaningful computer vision; a 16-bit `int` is usually found only on very small embedded systems these days.

A clean way to get a fast type that is at least 32 bits wide would be to use `int_fast32_t` from `<stdint.h>`, but that requires C99 support, which the Microsoft C compiler notoriously lacked for a long time.
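
As a sketch of that idea, using `int_fast32_t` instead of plain `int` when an accumulator needs at least 32 bits might look like this (the function and its parameters are made up for illustration; `<cstdint>` is the C++ counterpart of `<stdint.h>`):

```cpp
#include <cstdint>   // std::int_fast32_t
#include <cstddef>   // std::size_t

// Sum pixel values into an accumulator guaranteed to be at least 32 bits wide,
// regardless of the platform's native int width. Illustrative only.
std::int_fast32_t sum_pixels(const unsigned char* data, std::size_t n)
{
    std::int_fast32_t sum = 0;
    for (std::size_t i = 0; i < n; ++i)
        sum += data[i];
    return sum;
}
```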
