MSVC++: weirdness with unsigned ints and overflow

I have the following code:

 #include <iostream>
 #include <string>
 using namespace std;

 int main(int argc, char *argv[])
 {
     string a = "a";
     for (unsigned int i = a.length() - 1; i + 1 >= 1; --i) {
         if (i >= a.length()) {
             cerr << (signed int)i << "?" << endl;
             return 0;
         }
     }
 }

If I compile in MSVC with full optimization, the output I get is "-1?". If I compile in debug mode (without optimization), I get no output (as expected).

I thought the standard guarantees that unsigned integers wrap around in a predictable way, so when i = (unsigned int)(-1), i + 1 == 0, and the loop condition i + 1 >= 1 fails. Instead, the test somehow passes. Is this a compiler bug, or am I doing something undefined somewhere?

+7
c ++ standards
4 answers

I remember this problem from back in 2001. I am surprised it is still there. Yes, this is a compiler bug.

The optimizer sees

 i + 1 >= 1; 

and, in theory, optimizes it by moving all the constants to one side:

 i >= (1-1); 

Since i is unsigned, it is always greater than or equal to zero, so the condition folds to "always true".

See a newsgroup discussion of this issue here.

+8

ISO 14882:2003, clause 5, paragraph 5:

If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for its type, the behavior is undefined, unless such an expression is a constant expression (5.19), in which case the program is ill-formed.

(Emphasis mine.) So yes, the behavior is undefined. The standard makes no guarantees about behavior on integer over/underflow.

Edit: The standard seems somewhat contradictory on this issue elsewhere.

Section 3.9.1/4 states:

Unsigned integers, declared unsigned, shall obey the laws of arithmetic modulo 2^n, where n is the number of bits in the value representation of that particular size of integer.

But sections 4.7/2 and 4.7/3 say:

2) If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n, where n is the number of bits used to represent the unsigned type). [Note: In a two's complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). ]

3) If the destination type is signed, the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined.

(Emphasis mine.)

+4

I am not sure, but I think you have probably hit a compiler bug.

I suspect the problem is in how the compiler handles the for loop's condition. I could imagine the optimizer doing:

 for (unsigned int i = a.length()-1; i+1 >= 1; --i)  // As written
 for (unsigned int i = a.length()-1; i >= 0;   --i)  // Noting 1 appears twice
 for (unsigned int i = a.length()-1; ;         --i)  // Because i >= 0 at all times

Whether this is what actually happens is another matter, but it may be enough to confuse the optimizer.

You should probably use the more conventional down-counting loop form (starting at a.length(), since the post-decrement in the test means the body sees indices a.length()-1 down to 0):

 for (unsigned int i = a.length(); i-- > 0; )
+1

Yup, I just tested this in Visual Studio 2005, and it definitely behaves differently in Debug and Release. I wonder if 2008 fixes it.

Interestingly, it complained about the implicit conversion from size_t (the result of .length()) to unsigned int, yet has no problem generating bad code.

0
