Two compilers are discussed here: the C# compiler, which turns C# into IL, and the JIT compiler, which turns IL into machine code; the latter is called the jitter because it runs Just In Time.
The Microsoft C# compiler certainly does not do such an optimization. A method call is generated as a method call, end of story.
The jitter is allowed to perform the optimization you describe, provided that the optimization cannot be detected. For example, suppose you had:
y = M() != 0 ? M() : N()
and
static int M() { return 1; }
then the jitter is allowed to turn this program into:
y = 1 != 0 ? 1 : N()
or for that matter
y = 1;
Whether the jitter actually does this is an implementation detail; if you are interested, you will have to ask a jitter expert whether it really performs this optimization.
Similarly, if you have
static int m; static int M() { return m; }
then the jitter could optimize this into:
y = m != 0 ? m : N()
or even into:
int q = m; y = q != 0 ? q : N();
because the jitter is allowed to collapse two reads of a field in a row, with no intervening write, into a single read of the field, provided that the field is not volatile. Again, whether it actually does so is an implementation detail; ask a jitter developer.
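To make the volatile caveat concrete, here is a hypothetical sketch (the names are mine, not from the question): marking the field volatile forbids the jitter from folding the two reads into one.

```csharp
static class Example
{
    // volatile: every read in the source must actually read the field,
    // so the jitter may NOT rewrite the expression below as
    //   int q = m; y = q != 0 ? q : N();
    static volatile int m;

    // Hypothetical fallback method, standing in for N() from the answer.
    static int N() { return -1; }

    static int Compute()
    {
        // With a plain (non-volatile) static field, the jitter could
        // legally collapse these two reads of m into one read.
        return m != 0 ? m : N();
    }
}
```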
However, in that last example the jitter cannot elide the call to N() entirely, because N() may have a side effect that must remain observable.
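As a sketch of why a side effect blocks the optimization (again, hypothetical names): removing the call would produce a detectably different program state, which the jitter is not allowed to do.

```csharp
static class SideEffects
{
    // Observable program state: counts how many times N() has run.
    static int calls;

    static int N()
    {
        calls++;        // visible side effect
        return 0;
    }

    static int Compute(int m)
    {
        // The jitter must not skip the call to N() when m == 0:
        // doing so would leave 'calls' unchanged, and that difference
        // is detectable, so the optimization is forbidden.
        return m != 0 ? m : N();
    }
}
```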
I ran something very similar to this first code listing and set a breakpoint in the GetOtherThing instance method. The breakpoint was hit only once.
That is very unlikely. Almost all optimizations are turned off when you are debugging, precisely so that programs are easier to debug. As Sherlock Holmes never said: when you have eliminated the impossible, the most likely remaining explanation is that the original poster was simply mistaken.