Suppose we have a case class (which could have more than twelve members):
case class Foo(a: Int, b: Char, c: Symbol, d: String)
We represent errors as strings, and for convenience we define a type alias:
type ErrorOr[A] = ValidationNel[String, A]
We also have some validation results:
val goodA: ErrorOr[Int] = 1.success
val goodB: ErrorOr[Char] = 'a'.success
val goodC: ErrorOr[Symbol] = 'a.success
val goodD: ErrorOr[String] = "a".success
val badA: ErrorOr[Int] = "x".failNel
val badC: ErrorOr[Symbol] = "y".failNel
Now we can write:
val foo = (Foo.apply _).curried
val good: ErrorOr[Foo] = goodD <*> (goodC <*> (goodB <*> (goodA map foo)))
val bad: ErrorOr[Foo] = goodD <*> (badC <*> (goodB <*> (badA map foo)))
This gives us what we want:
scala> println(good)
Success(Foo(1,a,'a,a))

scala> println(bad)
Failure(NonEmptyList(x, y))
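To see why the failures accumulate instead of short-circuiting at the first error, here is a minimal self-contained sketch in plain Scala (no Scalaz; the names Validated, Good, Bad, ap, and Pair are all hypothetical, not Scalaz's API) of an error-accumulating apply:

```scala
// Minimal sketch of error-accumulating validation in plain Scala.
// All names here are hypothetical, not Scalaz's API.
sealed trait Validated[+A]
case class Good[A](value: A) extends Validated[A]
case class Bad(errors: List[String]) extends Validated[Nothing]

// Apply a validated function to a validated value. If both sides
// failed, concatenate the error lists rather than short-circuiting.
def ap[A, B](ff: Validated[A => B], fa: Validated[A]): Validated[B] =
  (ff, fa) match {
    case (Good(f), Good(a)) => Good(f(a))
    case (Bad(e1), Bad(e2)) => Bad(e1 ++ e2)
    case (Bad(e), _)        => Bad(e)
    case (_, Bad(e))        => Bad(e)
  }

case class Pair(a: Int, b: String)
val mk = (Pair.apply _).curried // Int => String => Pair

val good = ap(ap(Good(mk), Good(1)), Good("x"))
val bad  = ap(ap(Good(mk), Bad(List("no a"))), Bad(List("no b")))
// good == Good(Pair(1, "x")); bad == Bad(List("no a", "no b"))
```

The key is the `(Bad, Bad)` case: both error lists are kept, which is exactly what `ValidationNel` does with its `NonEmptyList` of errors.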
In Haskell this is much prettier; you'd just write:
Foo <$> goodA <*> goodB <*> goodC <*> goodD
Scala's weaker type inference means that, unfortunately, we have to write the arguments in the wrong order.
Travis Brown