Look at where f is called: it converts an A to a B, and the result is passed as an argument to r. Therefore r must expect a B. We use the function A -> B to pre-process the input of the function (C, B) -> C, which yields a function (C, A) -> C.
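A minimal sketch of this pre-processing step; the combinator name premap is chosen here for illustration, while f and r match the names in the text:

    -- Pre-process the second component of the step's input:
    -- given f :: a -> b and a step r :: (c, b) -> c,
    -- produce a step that accepts an 'a' where r expected a 'b'.
    premap :: (a -> b) -> ((c, b) -> c) -> ((c, a) -> c)
    premap f r = \(acc, x) -> r (acc, f x)

For example, premap length turns a step (Int, Int) -> Int into a step (Int, String) -> Int: each String is converted to its length before the original step sees it.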
In general, this reversal occurs whenever we transform a thing to change its "input". If we transform a thing to change its "output", there is no reversal.¹
    X --(X -> A)--> A --(A -> B)--> B --(B -> Y)--> Y
If I have a function A -> B and I want to make from it something that emits Ys, I need to compose a function B -> Y after it (post-composition). This is called covariance, because the thing I want to change (the B in A -> B) "varies in the same direction as" the function I map over it (B -> Y). We say that B is in a positive position in A -> B.
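A sketch of this post-composition; the name mapOutput is mine (in Haskell this is just fmap for the function functor (->) a):

    -- Covariance: change the output of a function by post-composing.
    mapOutput :: (b -> y) -> (a -> b) -> (a -> y)
    mapOutput g f = g . f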
If I have a function A -> B and I want to make from it something that accepts Xs instead, I need to compose a function X -> A before it (pre-composition). This is called contravariance, because the thing I want to change (the A in A -> B) "varies in the opposite direction to" the function I map over it (X -> A). We say that A is in a negative position in A -> B.
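A matching sketch of pre-composition; the name mapInput is mine (this is essentially contramap from Data.Functor.Contravariant, up to the Op newtype):

    -- Contravariance: change the input of a function by pre-composing.
    mapInput :: (x -> a) -> (a -> b) -> (x -> b)
    mapInput g f = f . g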
¹ Higher-order functions mean that we can also transform a thing to change an input of one of its inputs. Flipping the direction twice points it the same way as an output, so such a position contributes covariantly, like an output of our thing. The terms "negative position" and "positive position" suggest exactly this sign arithmetic: a negative of a negative is a positive, and so on.
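A sketch of that double flip, with the hypothetical name mapInner: in (a -> b) -> c, the type a is an input of an input, so it sits in positive position and can be changed covariantly.

    -- a is in negative position inside (a -> b), which is itself
    -- in negative position inside ((a -> b) -> c): two flips, so a
    -- can be mapped covariantly (a -> a' goes left to right).
    mapInner :: (a -> a') -> ((a -> b) -> c) -> ((a' -> b) -> c)
    mapInner f h = \k -> h (k . f)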