Null coalescing operator consequences?

A while back I compiled two versions of the code: one using ((Nullable<T>)x).GetValueOrDefault(y) and one using (Nullable<T>)x ?? y.

After decompiling to IL, I noticed that the null coalescing operator is converted into a GetValueOrDefault call.

Since this is a method call, and an argument expression is evaluated before the method it is passed to executes, y seems to always be evaluated.

For example:

    using System;

    public static class TestClass
    {
        private class SomeDisposable : IDisposable
        {
            public SomeDisposable()
            {
                // Allocate some native resources
            }

            private void finalize()
            {
                // Free those resources
            }

            ~SomeDisposable()
            {
                finalize();
            }

            public void Dispose()
            {
                finalize();
                GC.SuppressFinalize(this);
            }
        }

        private struct TestStruct
        {
            public readonly SomeDisposable _someDisposable;
            private readonly int _weirdNumber;

            public TestStruct(int weirdNumber)
            {
                _weirdNumber = weirdNumber;
                _someDisposable = new SomeDisposable();
            }
        }

        public static void Main()
        {
            TestStruct? local = new TestStruct(0);
            TestStruct local2 = local ?? new TestStruct(1);
            local2._someDisposable.Dispose();
        }
    }

It seems like this leads to an object that is never disposed, and probably to performance consequences as well.
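The method-argument half of that reasoning is easy to isolate (a minimal sketch, separate from the example above; the MakeDefault helper is made up purely to make the evaluation visible):

    using System;

    public static class ArgumentEvaluationDemo
    {
        private static int MakeDefault()
        {
            Console.WriteLine("argument evaluated");
            return 1;
        }

        public static void Main()
        {
            int? x = 0;

            // The argument expression runs before GetValueOrDefault does, so
            // "argument evaluated" is printed even though x already has a value.
            int value = x.GetValueOrDefault(MakeDefault());
            Console.WriteLine(value); // prints 0
        }
    }

The real question is whether ?? ends up behaving the same way once compiled.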

First of all, is it true that the right-hand side is always evaluated? Or does the JIT or something similar change the machine code that actually gets executed?

And secondly, can anyone explain why it behaves this way?

NOTE: This is just an example; it is not based on real code, so please refrain from comments such as "this is bad code."

IL DASM:
Well, when I compiled this with .NET Framework 2.0, the null coalescing and GetValueOrDefault versions did produce identical code. With .NET Framework 4.0, it generates these two listings:

GetValueOrDefault:

    .method private hidebysig static void Main() cil managed
    {
        .entrypoint
        // Code size       19 (0x13)
        .maxstack  2
        .locals init ([0] valuetype [mscorlib]System.Nullable`1<int32> nullableInt,
                      [1] int32 nonNullableInt)
        IL_0000:  nop
        IL_0001:  ldloca.s   nullableInt
        IL_0003:  initobj    valuetype [mscorlib]System.Nullable`1<int32>
        IL_0009:  ldloca.s   nullableInt
        IL_000b:  ldc.i4.1
        IL_000c:  call       instance !0 valuetype [mscorlib]System.Nullable`1<int32>::GetValueOrDefault(!0)
        IL_0011:  stloc.1
        IL_0012:  ret
    } // end of method Program::Main

Null Coalesce:

    .method private hidebysig static void Main() cil managed
    {
        .entrypoint
        // Code size       32 (0x20)
        .maxstack  2
        .locals init (valuetype [mscorlib]System.Nullable`1<int32> V_0,
                      int32 V_1,
                      valuetype [mscorlib]System.Nullable`1<int32> V_2)
        IL_0000:  nop
        IL_0001:  ldloca.s   V_0
        IL_0003:  initobj    valuetype [mscorlib]System.Nullable`1<int32>
        IL_0009:  ldloc.0
        IL_000a:  stloc.2
        IL_000b:  ldloca.s   V_2
        IL_000d:  call       instance bool valuetype [mscorlib]System.Nullable`1<int32>::get_HasValue()
        IL_0012:  brtrue.s   IL_0017
        IL_0014:  ldc.i4.1
        IL_0015:  br.s       IL_001e
        IL_0017:  ldloca.s   V_2
        IL_0019:  call       instance !0 valuetype [mscorlib]System.Nullable`1<int32>::GetValueOrDefault()
        IL_001e:  stloc.1
        IL_001f:  ret
    } // end of method Program::Main

As it turns out, this is no longer the case: the null coalescing version skips the GetValueOrDefault call entirely when HasValue returns false.
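Read back as C#, the null coalescing listing corresponds roughly to the following (my reading of the IL above, not decompiler output):

    public static class CoalesceIlReading
    {
        public static void Main()
        {
            int? nullableInt = null;     // initobj: a Nullable<int> with no value
            int? temp = nullableInt;     // ldloc.0 / stloc.2

            // brtrue on get_HasValue() means only one branch ever runs: the constant
            // is skipped when HasValue is true, and GetValueOrDefault() is skipped
            // when HasValue is false.
            int nonNullableInt = temp.HasValue
                ? temp.GetValueOrDefault()   // IL_0017..IL_0019
                : 1;                         // IL_0014: ldc.i4.1
        }
    }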

Tags: c# nullable
1 answer

After decompiling to IL, I noticed that the null coalescing operator is converted into a GetValueOrDefault call.

x ?? y converts to x.HasValue ? x.GetValueOrDefault() : y. It does not convert to x.GetValueOrDefault(y); it would be a compiler bug if it did. You are right that y should not be evaluated when x is not null, and indeed it is not.
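That is easy to verify at runtime (a minimal sketch; RightHandSide is a made-up helper whose only job is to make evaluation visible):

    using System;

    public static class CoalesceEvaluationTest
    {
        private static int RightHandSide()
        {
            Console.WriteLine("right-hand side evaluated");
            return 1;
        }

        public static void Main()
        {
            int? hasValue = 0;
            int a = hasValue ?? RightHandSide();   // prints nothing: the left operand has a value

            int? noValue = null;
            int b = noValue ?? RightHandSide();    // prints once: the right operand is needed

            Console.WriteLine(a + " " + b);        // "0 1"
        }
    }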

Edit: if y can be proven to have no side effects (where "side effect" includes "throwing an exception"), then converting to x.GetValueOrDefault(y) would not necessarily be wrong, but it is still an optimization I don't think the compiler performs: there are not many situations where it would be useful.
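For instance, when the right-hand operand is a compile-time constant the rewrite would be observably equivalent, which is about the only case such an optimization would cover (hypothetical transformation, not something the compiler is documented to do):

    public static class HypotheticalRewrite
    {
        public static void Main()
        {
            int? nullableInt = null;

            // The right-hand operand is a constant: it has no side effects and cannot throw.
            int a = nullableInt ?? 1;

            // A rewrite the compiler could make in that narrow case; evaluating the
            // constant eagerly is unobservable, so both a and b end up as 1 here.
            int b = nullableInt.GetValueOrDefault(1);
        }
    }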

