The link in the now-deleted answer, to my article explaining the exact rules for determining variance validity, does not answer your question. The link you are actually looking for is my article on why the C# compiler team rejected variance inference without any syntax, here:
http://blogs.msdn.com/b/ericlippert/archive/2007/10/29/covariance-and-contravariance-in-c-part-seven-why-do-we-need-a-syntax-at-all.aspx
In short, the reasons for rejecting such a feature were:
- The feature requires whole-program analysis. That is not only expensive; it also means that a small change to one type can cause surprising changes to the inferred variance of many distant types.
- Variance is something you design into a type; the annotation is a statement about how you expect the type to be used by its consumers, not just today but forever. That expectation should be encoded in the program text.
- There are many cases where it is very hard to deduce the user's intent, and then what do you do? You end up resolving those cases by requiring syntax, so why not simply require the syntax all the time? For instance:
interface I<V, W> { I<V, W> M(I<W, V> x); }
As an exercise, work out all the possible valid variance annotations on V and W. Now, how is the compiler supposed to do that same calculation? What algorithm would it use? And second, given that the answer is ambiguous, how should the compiler resolve the ambiguity?
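A brute-force check makes the exercise, and the ambiguity, concrete. The sketch below is my own simplified model of the C# variance-safety rules, not the compiler's actual algorithm: a method's return type must be output-safe, its parameter types must be input-safe, an "out" parameter is only output-safe, an "in" parameter is only input-safe, and an invariant parameter is both.

```python
from itertools import product

# Toy model of variance checking for:
#     interface I<V, W> { I<V, W> M(I<W, V> x); }
# Each type parameter is annotated "out" (covariant), "in" (contravariant),
# or "" (invariant).

def param_safe(annotation, direction):
    # Is a type parameter with this annotation safe in this direction?
    # direction is "out" (output-safe?) or "in" (input-safe?).
    if annotation == "":          # invariant: safe in both directions
        return True
    return annotation == direction

def instantiation_safe(annots, args, direction):
    # Is the instantiation I<args[0], args[1]> safe in the given direction,
    # where annots maps each declared parameter ("V", "W") to its candidate
    # annotation and args names the parameters used as type arguments?
    flip = {"out": "in", "in": "out"}
    ok = True
    for declared, arg in zip(["V", "W"], args):
        a = annots[declared]
        if a == "out":            # covariant position: argument must be safe same way
            ok &= param_safe(annots[arg], direction)
        elif a == "in":           # contravariant position: safety flips
            ok &= param_safe(annots[arg], flip[direction])
        else:                     # invariant position: argument must be safe both ways
            ok &= param_safe(annots[arg], "out") and param_safe(annots[arg], "in")
    return ok

valid = []
for v, w in product(["out", "in", ""], repeat=2):
    annots = {"V": v, "W": w}
    return_ok = instantiation_safe(annots, ["V", "W"], "out")  # M returns I<V, W>
    param_ok = instantiation_safe(annots, ["W", "V"], "in")    # M accepts I<W, V>
    if return_ok and param_ok:
        valid.append((v or "invariant", w or "invariant"))

print(valid)
```

Under this model exactly three annotations survive: `('out', 'in')`, `('in', 'out')`, and fully invariant. The two non-trivial answers are mirror images of each other, which is precisely the ambiguity: nothing in the declaration tells the compiler which one the user meant.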
I note that this still does not answer your question: you asked how it could be done, and all I have given you are reasons why we should not try. There are, of course, many ways one could try.
For example: take every generic type in the program and every type parameter of those generic types, and suppose there are a hundred of them in total. Each one can be invariant, "in", or "out", so there are only 3^100 possible combinations; try them all, see which ones are valid, and then devise a ranking function that picks a winner. The problem, of course, is that this takes longer to run than the age of the universe.
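A back-of-the-envelope calculation makes the "age of the universe" claim concrete; the hundred-parameter count and the checking rate below are my own illustrative assumptions:

```python
parameters = 100                # assumed number of type parameters in the program
combos = 3 ** parameters        # each is invariant, "in", or "out"
checks_per_second = 10 ** 9     # generously assume a billion validity checks per second
universe_age_seconds = 4.35e17  # roughly 13.8 billion years

universe_ages_needed = combos / checks_per_second / universe_age_seconds
print(f"{combos:.3e} combinations, about {universe_ages_needed:.1e} universe-ages to try them all")
```

Even at a billion checks per second, exhausting the roughly 5 x 10^47 combinations takes on the order of 10^21 times the current age of the universe.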
Now, we could apply a clever pruning algorithm that says "any choice where T is 'in' but is known to also be used in an output position is invalid", and skip checking all of those cases. But then we have a situation where hundreds of such predicates must be applied to determine the feasible set of variance assignments, and as the example above shows, it is quite hard to determine whether something really is in an input or an output position. So this is probably a non-starter as well.
Ah, but that implies that a predicate-based algebraic approach is potentially a good technique. We could build an engine that generates the predicates and then feed them to a sophisticated SMT solver. There would be pathological cases requiring gazillions of computations, but modern SMT solvers do quite well on typical cases.
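As a sketch of what that might look like (my own toy, not a real SMT encoding): represent each type parameter as a variable over {invariant, in, out}, represent the generated safety conditions as predicates over partial assignments, and let a backtracking search, standing in for the solver, enumerate the satisfying assignments.

```python
DOMAIN = ("invariant", "in", "out")

def solve(variables, predicates, assignment=None):
    # Backtracking search over variance assignments: a toy stand-in for
    # handing the generated predicates to a real SMT solver.
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        yield dict(assignment)
        return
    var = variables[len(assignment)]
    for value in DOMAIN:
        assignment[var] = value
        # Each predicate returns True when satisfied or not yet determined,
        # so partial assignments are pruned as soon as one fails.
        if all(p(assignment) for p in predicates):
            yield from solve(variables, predicates, assignment)
        del assignment[var]

# Hypothetical predicates a generator might emit for two parameters T and U,
# e.g. "T appears in an output position" and "U appears in an input position"
# (unbound variables default to invariant, which is always safe):
predicates = [
    lambda a: a.get("T", "invariant") != "in",   # T used as an output: cannot be "in"
    lambda a: a.get("U", "invariant") != "out",  # U used as an input: cannot be "out"
]

solutions = list(solve(["T", "U"], predicates))
print(solutions)
```

With these two toy predicates the search finds four satisfying assignments (T invariant or "out", crossed with U invariant or "in"); a ranking function would still be needed to pick one of them, which is the ambiguity problem all over again.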
But all of this is, again, far too much work for a feature of little practical value to the user.