In addition to the subtle differences already covered in the other answers, there is a very important difference between an exception filter and an “if” inside the catch block: filters run before inner finally blocks.
Consider the following:
    void M1() { try { N(); } catch (MyException) { if (F()) C(); } }
    void M2() { try { N(); } catch (MyException) when (F()) { C(); } }
    void N() { try { MakeAMess(); DoSomethingDangerous(); } finally { CleanItUp(); } }
The order of calls differs between M1 and M2.
Suppose M1 is called. It calls N(), which calls MakeAMess(). The mess is made. Then DoSomethingDangerous() throws a MyException. The runtime checks whether there is any catch block that can handle it, and there is. The finally block runs CleanItUp(); the mess is cleaned up. Control branches to the catch block, which calls F(), and then possibly C().
What about M2? It calls N(), which calls MakeAMess(). The mess is made. Then DoSomethingDangerous() throws a MyException. The runtime checks whether there is any catch block that might handle it, and there might be: the runtime calls F() to see whether the catch block can handle it, and it can. Only then does the finally block run CleanItUp(); control branches to the catch block, and C() is called.
Did you notice the difference? In M1, F() is called after the mess is cleaned up; in M2, it is called before the mess is cleaned up. If F() depends for its correctness on there being no mess, then you have a big problem if you refactor code that looks like M1 into code that looks like M2!
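The difference in ordering can be observed directly by logging each call. Here is a minimal, self-contained sketch; the method names follow the snippet above, but the logging scaffolding and the stub bodies are my own additions for demonstration:

```csharp
using System;
using System.Collections.Generic;

class FilterOrderDemo
{
    static readonly List<string> Log = new List<string>();

    class MyException : Exception { }

    static void MakeAMess() => Log.Add("MakeAMess");
    static void DoSomethingDangerous() { Log.Add("DoSomethingDangerous"); throw new MyException(); }
    static void CleanItUp() => Log.Add("CleanItUp");
    static bool F() { Log.Add("F"); return true; }
    static void C() => Log.Add("C");

    static void N() { try { MakeAMess(); DoSomethingDangerous(); } finally { CleanItUp(); } }

    static void M1() { try { N(); } catch (MyException) { if (F()) C(); } }
    static void M2() { try { N(); } catch (MyException) when (F()) { C(); } }

    static void Main()
    {
        M1();
        Console.WriteLine("M1: " + string.Join(" -> ", Log));
        // M1: MakeAMess -> DoSomethingDangerous -> CleanItUp -> F -> C

        Log.Clear();
        M2();
        Console.WriteLine("M2: " + string.Join(" -> ", Log));
        // M2: MakeAMess -> DoSomethingDangerous -> F -> CleanItUp -> C
    }
}
```

In M2's output, F appears before CleanItUp: the filter executed while the mess was still in place.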
This is more than a correctness problem; there are security implications as well. Suppose the “mess” being made is impersonating an administrator, the dangerous operation requires administrator impersonation, and the cleanup reverts the impersonation. In M2, the call to F() has administrator privileges; in M1 it does not. Suppose the user has granted only limited privileges to the assembly containing M2, but N is in a fully trusted assembly; potentially hostile code in M2's assembly could use N as a lure to gain administrator access.
As an exercise: how would you write N so that it defends itself against this attack?
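One possible defense, sketched here under my own assumptions (this is just one approach, not necessarily the intended answer): have N itself catch, clean up, and rethrow. An unfiltered catch is selected during the runtime's first pass, so no caller's filter can run while the mess is still in place; by the time the rethrown exception reaches a caller's filter, the cleanup has already happened.

```csharp
using System;
using System.Collections.Generic;

class DefensiveN
{
    static readonly List<string> Log = new List<string>();

    class MyException : Exception { }

    static void MakeAMess() => Log.Add("MakeAMess");
    static void DoSomethingDangerous() { Log.Add("DoSomethingDangerous"); throw new MyException(); }
    static void CleanItUp() => Log.Add("CleanItUp");
    static bool F() { Log.Add("F"); return true; }
    static void C() => Log.Add("C");

    // Defensive version of N: the unfiltered catch matches in the
    // runtime's first pass, so we clean up before rethrowing; only the
    // rethrown exception is visible to any filter in a caller.
    static void N()
    {
        try
        {
            MakeAMess();
            DoSomethingDangerous();
            CleanItUp(); // non-exceptional path
        }
        catch
        {
            CleanItUp(); // exceptional path: clean up before any caller's filter runs
            throw;
        }
    }

    static void M2() { try { N(); } catch (MyException) when (F()) { C(); } }

    static void Main()
    {
        M2();
        Console.WriteLine(string.Join(" -> ", Log));
        // CleanItUp now precedes F even though the caller uses a filter:
        // MakeAMess -> DoSomethingDangerous -> CleanItUp -> F -> C
    }
}
```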
(Of course, the runtime is smart enough to know whether there are stack annotations that grant or deny privileges between M2 and N, and to revert them before calling F(). That is the runtime's own mess; it made it, and it knows how to deal with it correctly. But the runtime knows nothing about any other mess you might have made.)
The key takeaway here is that any time you are handling an exception, by definition something went horribly wrong and the world is not as you believed it to be. Exception filters must not depend for their correctness on invariants that were violated by the exceptional condition.
UPDATE:
Ian Ringrose asks how we got into this mess.
This part of the answer will be somewhat conjectural, since some of the design decisions described here were made after I left Microsoft in 2012. However, I have talked with the language designers about these issues many times, and I think I can give a fair summary of the situation.
The decision to run filters before finally blocks was made in the earliest days of the CLR; the person to ask for the fine details of that design decision is Chris Brumme. (UPDATE: Sadly, Chris is no longer available for questions.) He had a blog with a detailed exegesis of the exception handling model, but I don't know whether it is still on the Internet.
It was a sensible decision. For debuggability, we want to know before the finally blocks run whether this exception is going to be handled, or whether we are in the “undefined behavior” scenario of a completely unhandled exception that destroys the process. Part of that undefined behavior is, if the program is running in a debugger, breaking at the point of the unhandled exception before the finally blocks run.
The fact that these semantics introduce security and correctness issues was well understood by the CLR team; indeed, I discussed it in my first book many years ago, and twelve years ago on my blog:
https://blogs.msdn.microsoft.com/ericlippert/2004/09/01/finally-does-not-mean-immediately/
And even if the CLR team wanted to, it would be a terrible breaking change to “fix” the semantics now.
This feature has always existed in CIL and VB.NET, and an attacker gets to choose which language their filter-using code is written in, so adding the feature to C# introduces no new attack surface.
And the fact that this feature, which enables the security issue, has been “in the wild” for a couple of decades and, as far as I know, has never been the cause of a serious security problem suggests that it is not a very fruitful avenue for attackers.
So why was the feature in the first version of VB.NET, yet it took more than a decade to make it into C#? “Why not” questions like this are hard to answer, but in this case I can do so easily enough: (1) we had many other things on our minds, and (2) Anders found the feature unattractive. (And I was never enthusiastic about it either.) That kept it low on the priority list for many years.
So how did it make it high enough up the priority list to be implemented in C# 6? Many people requested the feature, which always counts in its favor. VB already had it, and the C# and VB teams like to achieve parity where possible at reasonable cost, so that counted too. But the big tipping point was that there was a scenario in the Roslyn project itself where exception filters would have been genuinely useful. (I don't remember what it was; go dig through the source code if you want to find it and report back!)
As a language designer, and as a compiler developer, you have to be careful not to prioritize features that merely make life easier for compiler writers; most C# users are not writing compilers, and they are the customers! But ultimately, having a set of real-world scenarios where the feature was genuinely useful, including some that had irritated the compiler team itself, tipped the balance.