There is a well-established convention for events in .NET: they use a delegate type that takes a plain object called the sender, and then the actual "payload" in a second parameter, which should be derived from EventArgs.
The rationale for the second parameter being derived from EventArgs seems pretty clear (see the .NET Framework Annotated Reference). It is intended to ensure binary compatibility between event sinks and sources as software evolves. For every event, even if it only has one argument, we derive a dedicated event arguments class with a single property holding that argument, so we retain the ability to add more properties to the payload in future versions without breaking existing client code. Very important in an ecosystem of independently developed components.
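To make that concrete, here is a minimal sketch of the convention as I understand it (the PriceChanged event and its names are my own invention):

public class PriceChangedEventArgs : EventArgs
{
    public PriceChangedEventArgs(decimal newPrice) { NewPrice = newPrice; }

    public decimal NewPrice { get; private set; }

    // A later version can add more properties here (e.g. OldPrice) without
    // changing the delegate type, so existing subscribers keep working.
}

public event EventHandler<PriceChangedEventArgs> PriceChanged;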
But then I find that the same reasoning is applied to zero arguments. That means if I have an event with no arguments in my first version, and I write:
public event EventHandler Click;
... then I'm doing it wrong. If I later change the delegate type to a new one with a custom class as its payload:
public class ClickEventArgs : EventArgs { ...
... I will have broken binary compatibility with my clients. The client ends up bound to a specific overload of the internal add_Click method that takes an EventHandler, and if I change the delegate type they can no longer find that overload, so they get a MissingMethodException.
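A sketch of the failure mode (the widget and handler names are hypothetical):

// Client compiled against version 1:
widget.Click += new EventHandler(OnWidgetClick);   // binds to add_Click(EventHandler)

// If version 2 redeclares the event as
//     public event EventHandler<ClickEventArgs> Click;
// the accessor becomes add_Click(EventHandler<ClickEventArgs>), and the
// already-compiled client fails at runtime with a MissingMethodException.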
Ok, so what if I use the convenient generic version?
public event EventHandler<EventArgs> Click;
No, that's still wrong, because an EventHandler<ClickEventArgs> is not an EventHandler<EventArgs>.
So to get the benefit of EventArgs, you have to derive from it rather than use it directly as-is. If you don't, you might as well not use it at all (it seems to me).
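A quick illustration of why the generic form doesn't help (my own sketch): the two constructed delegate types are unrelated, because EventHandler<TEventArgs> is invariant in its type parameter.

EventHandler<EventArgs> general = (sender, e) => { };
EventHandler<ClickEventArgs> specific = (sender, e) => { };

// Neither assignment compiles - there is no implicit conversion in either direction:
// general = specific;
// specific = general;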
Then there is the first argument, sender. It seems to me like a recipe for unholy coupling. Firing an event is essentially a function call. Should a function, generally speaking, be able to dig back through the stack, find out who its caller was, and adjust its behavior accordingly? Should we mandate that interfaces look like this?
public interface IFoo { void Bar(object caller, int actualArg1, ...); }
After all, the implementer of Bar might want to know who the caller was, so they could ask it for additional information! I hope you're cringing by now. Why should it be any different for events?
So even if I'm prepared to take the pain of making a separate EventArgs-derived class for every event I declare, just to make using EventArgs worthwhile at all, I would definitely prefer to ditch the object sender argument.
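For example, a sender-less declaration might look something like this (my own sketch, not a framework convention; the names are hypothetical):

public delegate void ClickHandler(ClickEventArgs e);

public class Widget
{
    public event ClickHandler Click;

    protected void OnClick(ClickEventArgs e)
    {
        ClickHandler handler = Click;
        if (handler != null)
            handler(e);    // subscribers get only the payload, no sender
    }
}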
Visual Studio's autocompletion doesn't seem to care what delegate you use for an event - you can type += [hit Space, Return] and it writes a handler method for you that matches whatever the delegate is.
So, what value would I lose by deviating from the standard template?
As a bonus question, will C#/CLR 4.0 change any of this, perhaps through contravariance in delegates? I tried to research this but hit another issue. I originally included that aspect in this question, but it caused confusion, and it seemed better to split things up into what amounts to three questions...
Update:
It turns out I was right to wonder about the effect of contravariance on this whole issue!
As noted elsewhere, the new compiler rules leave a hole in the type system that blows up at runtime. The hole has effectively been plugged by defining EventHandler<T> differently from Action<T>.
So for events, to avoid that hole, you should not use Action<T>. That doesn't mean you have to use EventHandler<TEventArgs>; it just means that if you use a generic delegate type, don't pick one that is enabled for contravariance.
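To illustrate the hole as I understand it (my own sketch): because Action<T> is contravariant in T, the compiler will let an Action<object> stand in for an Action<string>, and the mismatch only surfaces when the two delegates are combined.

Action<string> narrow = s => { };
Action<object> broad = o => { };

Action<string> alias = broad;             // allowed: Action<in T> is contravariant

// Throws ArgumentException at runtime: Delegate.Combine requires both
// delegates to have the same runtime type, and alias is really an Action<object>.
Action<string> combined = narrow + alias;

Because EventHandler<TEventArgs> is not declared with the in modifier, the equivalent mismatch on an EventHandler-based event is rejected at compile time instead.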