Usually in Haskell we define Monad in terms of return and >>=. Sometimes it's convenient to decompose >>= into fmap and join. The Monad laws for these two formulations are well known and quite intuitive once you get used to them.
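For concreteness, here is a small sketch of that decomposition and of the usual laws in the fmap/join presentation (join' and bind' are just local names chosen here to avoid clashing with the Prelude):

    -- (>>=) decomposed into fmap and join, with primed names to avoid
    -- clashing with the standard library versions.
    join' :: Monad m => m (m a) -> m a
    join' mm = mm >>= id

    bind' :: Monad m => m a -> (a -> m b) -> m b
    bind' m f = join' (fmap f m)

    -- The well-known laws in this presentation:
    --   join' . fmap join'   = join' . join'     (associativity)
    --   join' . return       = id                (identity)
    --   join' . fmap return  = id                (identity)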
There is another way to define monads in terms of the Applicative functor:
    class Applicative f => MyMonad f where
      myJoin :: f (f a) -> f a
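To make the class concrete, here are two illustrative instances (added purely as examples of what myJoin looks like; they are not part of the formulation itself):

    -- Example instances, assuming the MyMonad class above.
    instance MyMonad Maybe where
      myJoin (Just x) = x        -- x :: Maybe a, so one layer is collapsed
      myJoin Nothing  = Nothing

    instance MyMonad [] where
      myJoin = concat            -- flatten one level of nesting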
I am interested in laws for this formulation. Obviously, we could simply adapt the fmap + join laws, as follows (I'm not sure the names are standard, but they will do):
    myJoin . myJoin = myJoin . (pure myJoin <*>)          -- "associativity"
    myJoin . pure   = myJoin . (pure pure <*>) = id       -- "identity"
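As a sanity check, the laws can also be written as (hypothetical) QuickCheck properties, monomorphised here to the list instance sketched above so that test cases can be generated:

    import Test.QuickCheck

    -- Property-test sketch for the proposed laws, specialised to [] / Int.
    prop_assoc :: [[[Int]]] -> Bool
    prop_assoc x =
      myJoin (myJoin x) == myJoin (pure myJoin <*> x)

    prop_identity :: [Int] -> Bool
    prop_identity x =
         myJoin (pure x)          == x
      && myJoin (pure pure <*> x) == x

    -- ghci> quickCheck prop_assoc
    -- ghci> quickCheck prop_identity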
Obviously, these conditions are sufficient for pure, (<*>) and myJoin to form a monad (in the sense that they guarantee that m `myBind` f = myJoin (pure f <*> m) is a well-behaved >>=). But are they also necessary? It seems plausible that the additional structure Applicative provides over and above Functor would allow us to simplify these laws; in other words, some parts of the laws above might already follow from the fact that pure and (<*>) satisfy the Applicative laws.
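Spelled out as code, the derived bind looks like this, and for any type that also has a Monad instance it can be compared directly against (>>=) (a sketch, using only what is stated above):

    -- The derived bind: if the laws above hold, this should behave like (>>=).
    myBind :: MyMonad f => f a -> (a -> f b) -> f b
    myBind m f = myJoin (pure f <*> m)

    -- For the list instance, for example:
    -- ghci> myBind [1,2,3] (\x -> [x, x*10]) == ([1,2,3] >>= \x -> [x, x*10])
    -- True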
(In case you are wondering why one would even bother with this formulation, given the two standard ones: I'm not sure whether it is useful or insightful in a programming context, but it turns out to be natural if you use Monad to model the semantics of natural language.)
haskell monads applicative
Simon c