C++: Is there a reason to use uint64_t instead of size_t?

My understanding of size_t is that it will be large enough to hold any (integer) value you might expect it to hold. (Perhaps this is a bad explanation?)

For example, if you used a for loop to iterate over all the elements of a vector, size_t will usually be 64 bits long (at least on my system) so that it can hold all possible values returned from vector.size().

Or at least I think that's right?

Therefore, is there a reason to use A rather than B:

A: for (uint64_t i = 0; i < v.size(); ++i)

B: for (size_t i = 0; i < v.size(); ++i)

If I am mistaken in my explanation or you have a better explanation, please feel free to edit.

Edit: I should add that I understand size_t behaves like a normal unsigned integer. Maybe this is wrong?

+7
c++ integer vector size-t
5 answers

size_t is the return type of sizeof.

The standard says that it is a typedef for some unsigned integer type that is large enough to hold the size of any possible object.
But it does not specify whether it is smaller than, larger than, or the same size as uint64_t (a typedef for a fixed-width 64-bit unsigned integer), nor, in the latter case, whether it is the same type.

So use size_t where it is semantically correct, such as for the size() of a std::vector<T> (std::vector takes its size_type from the std::allocator<T> it uses, which defines it as size_t).

+8

uint64_t is guaranteed to be 64 bits. If you need 64 bits, you should use it.

size_t is not guaranteed to be 64 bits; it might be 128 bits on some future machine. That is what the fixed-width typedef uint64_t is reserved for. :)

+5

The pedantically correct form would be for (std::vector<T>::size_type i ...

For the purpose of iterating through a vector or the like, you will be hard-pressed to find a case where size_t is not large enough but uint64_t is.

Of course, on a 32-bit machine size_t will usually be 32 bits, but you might need to deal with numbers over 4 billion, which require more than 32 bits, and that is certainly a case for uint64_t. In other words, uint64_t is guaranteed to be 64 bits, while size_t is not 64 bits on all machines/architectures.

+3

std::size_t is defined as an unsigned integer type. Its width depends on the platform. v.size() will always return a value of type std::size_t, so option B is always valid.

+2

No, size_t has absolutely nothing to do with "any integer value that you expect to need to store." Where did you get that idea?

size_t must be large enough to hold the byte size of any contiguous object in the given implementation. Conceptually, that is much less than "any integer value." The language does not even guarantee that you can create objects occupying the entire address space, which means that size_t is conceptually insufficient even to store the number of addressable bytes of memory.

If you want to associate "any integer value" with the size of memory, then the corresponding type would be uintptr_t, which is conceptually larger than size_t. But I see no reason to associate "any integer value" with characteristics of memory in general. E.g. even if uintptr_t is larger than size_t, it is not guaranteed to be large enough to hold the size of the largest file on your platform's file system.

The reason you can use size_t to iterate over std::vector elements is that the vector is internally based on an array. Arrays are contiguous objects, so their sizes are covered by size_t. But once you consider a disjoint container like std::list, size_t is no longer guaranteed to be sufficient to measure or index such containers.

uint64_t may be larger than size_t. But it is also possible that you will have to work with integer values that do not fit into uint64_t.

+2
