C# bit shift: is this behavior in the specification, an error, or random?

I was working with bit-shift operators (see my Bit Array Equality question), and an SO user pointed out an error in the calculation of my shift operand: I had computed the range [1..32] instead of [0..31] for int. (Hooray for the SO community!)

When fixing the problem, I was surprised to find the following behavior:

-1 << 32 == -1

Actually, it seems that n << s is compiled by the CLR (or interpreted; I did not check the IL) as n << (s % bs(n)), where bs(n) is the size of n in bits.
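
To make this concrete, here is a minimal C# sketch of the observed behavior (the values in the comments follow from the masking described above):

Console.WriteLine(-1 << 32);   // -1: bs(int) == 32, and 32 % 32 == 0, so no shift occurs
Console.WriteLine(-1 << 33);   // -2: 33 % 32 == 1, equivalent to -1 << 1
Console.WriteLine(-1L << 64);  // -1: bs(long) == 64, and 64 % 64 == 0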

I would expect:

-1 << 32 == 0

It is as if the compiler understood that you were shifting beyond the width of the target type and corrected your mistake.

This is a purely academic question, but does anyone know whether this is defined in the specification (I could not find anything in 7.8 Shift Operators), whether it is just an accident of undefined behavior, or whether there is a case where this may cause an error?

1 answer

I believe that the relevant part of the specification is here:

For the predefined operators, the number of bits to shift is computed as follows:

  • When the type of x is int or uint, the shift count is given by the low-order five bits of count. In other words, the shift count is computed from count & 0x1F.

  • When the type of x is long or ulong, the shift count is given by the low-order six bits of count. In other words, the shift count is computed from count & 0x3F.

So the behavior is by design, not an accident.

32 is 0x20, and 0x20 & 0x1F is 0. The effective shift count is therefore zero, so no shift is performed; -1 << 32 (indeed, x << 32 for any int x) leaves the value unchanged.
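
A quick way to verify this in C# (a minimal sketch; x is an arbitrary int):

int x = 12345;
Console.WriteLine(x << 32 == x);       // True: 32 & 0x1F == 0, so the shift is a no-op
Console.WriteLine(x << 33 == x << 1);  // True: 33 & 0x1F == 1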
