What is the meaning of -2 in this IL instruction?

I have a simple piece of code:

 long x = 0;
 for (long i = 0; i < int.MaxValue * 2L; i++)
 {
     x = i;
 }
 Console.WriteLine(x);

I compiled this code in Release mode, and this is the IL that was generated:

 .method private hidebysig static void Main(string[] args) cil managed
 {
     .entrypoint
     // Code size 28 (0x1c)
     .maxstack 2
     .locals init ([0] int64 x,
                   [1] int64 i)
     IL_0000: ldc.i4.0
     IL_0001: conv.i8
     IL_0002: stloc.0
     IL_0003: ldc.i4.0
     IL_0004: conv.i8
     IL_0005: stloc.1
     IL_0006: br.s IL_000f
     IL_0008: ldloc.1
     IL_0009: stloc.0
     IL_000a: ldloc.1
     IL_000b: ldc.i4.1
     IL_000c: conv.i8
     IL_000d: add
     IL_000e: stloc.1
     IL_000f: ldloc.1
     IL_0010: ldc.i4.s -2
     IL_0012: conv.u8
     IL_0013: blt.s IL_0008
     IL_0015: ldloc.0
     IL_0016: call void [mscorlib]System.Console::WriteLine(int64)
     IL_001b: ret
 } // end of method Program::Main

I understand almost all of the instructions, except for this one:

  IL_0010: ldc.i4.s -2 

This instruction should push int.MaxValue * 2L onto the stack, and then blt.s compares i against it: if i is less than that value, execution jumps back to IL_0008. But I can't figure out why it loads -2. If I change the loop as shown below:

 for (long i = 0; i < int.MaxValue * 3L; i++)
 {
     x = i;
 }

It loads the expected value:

 IL_0010: ldc.i8 0x17ffffffd 

What is the meaning of -2 in this code?


int.MaxValue * 2L is a 64-bit number whose value still fits in 32 bits ( 4,294,967,294 , or 0xFFFFFFFE ). So the compiler loads 0xFFFFFFFE (which is -2 when the bit pattern is interpreted as an Int32 ), and then zero-extends it to an unsigned 64-bit value.

The reason it used the signed form is that the value, when interpreted as the signed value -2 , fits into a single signed byte ( -128 to 127 ), which means the compiler was able to emit the short-form ldc.i4.s opcode, which loads a 32-bit value from a single operand byte. That takes only 2 bytes, plus 1 more byte for the conv.u8 that widens it to a 64-bit value. This is much better than using the 64-bit load instruction ldc.i8 followed by a full 8-byte operand.
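A minimal sketch (mine, not from the answer) that checks this reinterpretation in C#:

```csharp
using System;

class Demo
{
    static void Main()
    {
        long limit = int.MaxValue * 2L;          // 4294967294
        Console.WriteLine($"0x{limit:X}");       // 0xFFFFFFFE

        // Reinterpreting the low 32 bits as a signed int gives -2,
        // which is exactly what ldc.i4.s -2 pushes.
        int reinterpreted = unchecked((int)limit);
        Console.WriteLine(reinterpreted);        // -2

        // conv.u8 then zero-extends the unsigned 32-bit pattern
        // back to 64 bits, recovering the original limit.
        long widened = (long)(uint)reinterpreted;
        Console.WriteLine(widened == limit);     // True
    }
}
```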


It seems the compiler uses bitwise math to its advantage. It so happens that the two's-complement representation of -2 has the same bit pattern as the unsigned value of int.MaxValue * 2L .

In binary:

 - 1111 1111 1111 1111 1111 1111 1111 1110 (int)
 - 1111 1111 1111 1111 1111 1111 1111 1110 (uint)
 - 0000 0000 0000 0000 0000 0000 0000 0000 1111 1111 1111 1111 1111 1111 1111 1110 (long)
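A small sketch (mine, not from the answer) that prints these bit patterns in C#:

```csharp
using System;

class Bits
{
    static void Main()
    {
        long limit = int.MaxValue * 2L;               // 4294967294

        // 32-bit two's-complement pattern of -2
        Console.WriteLine(Convert.ToString(-2, 2));
        // 11111111111111111111111111111110

        // The same 32 bits zero-extended into a 64-bit long,
        // padded so the leading zeros are visible
        Console.WriteLine(Convert.ToString(limit, 2).PadLeft(64, '0'));
        // 0000000000000000000000000000000011111111111111111111111111111110
    }
}
```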
