Int32 has an Equals(Int32) overload, and an Int16 can be implicitly converted to an equivalent Int32. With that overload the call compares two 32-bit integers, checks the values for equality, and naturally returns true.
Int16 has its own Equals(Int16) method, but there is no implicit conversion from Int32 to Int16 (because an Int32 can hold values that are out of range for a 16-bit integer). So the type system ignores that overload and falls back to the Equals(Object) overload, whose documentation states:
true if obj is an instance of Int16 and equals the value of this instance; otherwise, false.
But while the value we pass is indeed "equal to the value of this instance" (1 == 1), it is not an instance of Int16; it is an Int32.
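As a side note, here is a minimal sketch (not part of the original answer) of why the argument cannot be silently narrowed: an Int16 widens to Int32 implicitly, but going the other way requires an explicit cast, so the compiler never considers the Equals(Int16) overload for an Int32 argument.

    Int16 a = 1;
    Int32 b = 1;

    Int32 widened = a;          // fine: implicit widening conversion Int16 -> Int32
    //Int16 narrowed = b;       // compile error: cannot implicitly convert type 'int' to 'short'
    Int16 narrowed = (Int16)b;  // an explicit cast is required, and it may truncate the value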
The equivalent code for b.Equals(a) is as follows:
    Int16 a = 1;
    Int32 b = 1;
    Int32 a_As_32Bit = a; //implicit conversion from 16-bit to 32-bit
    var test1 = b.Equals(a_As_32Bit); //calls Int32.Equals(Int32)
Now it's clear that we are comparing both numbers as 32-bit integers.
The equivalent code for a.Equals(b) would look like this:
    Int16 a = 1;
    Int32 b = 1;
    object b_As_Object = b; //the Int32 is boxed so it can be passed as an Object
    var test2 = a.Equals(b_As_Object); //calls Int16.Equals(Object)
Now it's clear that we are calling a different equality method. Internally, that method does more or less the following:
    Int16 a = 1;
    Int32 b = 1;
    object b_As_Object = b;
    bool test2;
    if (b_As_Object is Int16) //but it's not, it's an Int32
    {
        test2 = ((Int16)b_As_Object) == a;
    }
    else
    {
        test2 = false; //and this is where your confusing result comes from
    }
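Putting it together, here is a short self-contained sketch (variable names follow the examples above) that shows both directions of the comparison and the results you should expect:

    using System;

    Int16 a = 1;
    Int32 b = 1;

    Console.WriteLine(b.Equals(a)); //True:  a is widened to Int32, so Int32.Equals(Int32) compares values
    Console.WriteLine(a.Equals(b)); //False: b is boxed, Int16.Equals(Object) sees an Int32 and returns false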