The `*` method:
This returns the default projection - it is how you describe "all the columns (or computed values) I am usually interested in".
Your table may have several fields; you only need a subset of them for your default projection. The type of the default projection must match the table's type parameter.
Let's take it one by one. First, without any `<>` business, just `*`:
```scala
// First take: only the table definition, no case class
object Bars extends Table[(Int, String)]("bar") {
  def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
  def name = column[String]("name")
  def * = id ~ name // Note: just a simple projection, not using .? etc.
}
// Note that the case class Bar is nowhere to be found. This is
// an example without it (with only the table definition).
```
Just a table definition like this will allow you to make queries like:
```scala
implicit val session: Session = // ... a db session obtained from somewhere

// A simple select-all:
val result = Query(Bars).list // result is a List[(Int, String)]
```
The default projection (Int, String) is what gives you a List[(Int, String)] for simple queries such as these.
```scala
// SELECT b.name, 1 FROM bars b WHERE b.id = 42;
val q = for (b <- Bars if b.id === 42)
  yield (b.name ~ 1)

// yield (b.name, 1) is also allowed:
// tuples are lifted to the equivalent projection.
```
What is the type of q? It is a Query with a (String, Int) projection. When invoked, it returns a List of (String, Int) rows, according to the projection.
```scala
val result: List[(String, Int)] = q.list
```
In this case, you define the desired projection in the yield clause of the for comprehension.
Now about `<>` and `Bar.unapply`.
These provide so-called Mapped Projections.
So far we have seen how Slick lets you express queries in Scala that return a projection of columns (or computed values); when executing these queries you have to think of a query result row as a Scala tuple. The type of the tuple will match the projection that is defined (by your for comprehension, as in the previous example, or by the default `*`). This is why `field1 ~ field2` returns a projection Projection2[A, B], where A is the type of field1 and B is the type of field2.
```scala
q.list.map {
  case (name, n) => // do something with name: String and n: Int
}
Query(Bars).list.map {
  case (id, name) => // do something with id: Int and name: String
}
```
We are dealing with tuples, which can get cumbersome if we have many columns. We would like to think of the results not as a TupleN, but as an object with named fields.
```scala
def * = id ~ name <> (Bar, Bar.unapply _)
```
How does it work? `<>` takes a projection Projection2[Int, String] and returns a mapped projection of type Bar. The two arguments, Bar and Bar.unapply _, tell Slick how this (Int, String) projection must be mapped to and from the case class.
This is a two-way mapping; Bar is the case class constructor, so that is the information needed to go from (id: Int, name: String) to a Bar. And unapply, if you guessed it, is for the opposite direction.
Where does unapply come from? It is a standard Scala method available for any ordinary case class - merely defining Bar gives you Bar.unapply, an extractor that can be used to get back the id and name that the Bar was created with:
```scala
val bar1 = Bar(1, "one")
```
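To make the round trip concrete, here is a plain-Scala sketch (no Slick involved; the fromRow and rebuilt names are just for illustration) of the two directions that `<>` relies on:

```scala
case class Bar(id: Int, name: String)

val bar1 = Bar(1, "one")

// The constructor side (Bar.apply): tuple -> Bar
val fromRow: ((Int, String)) => Bar = (Bar.apply _).tupled
val rebuilt = fromRow((1, "one")) // equal to bar1

// The extractor side (Bar.unapply): gives back the fields bar1 was built from
val Bar(id, name) = bar1 // id = 1, name = "one"
```

The same extractor is what powers pattern matching on case classes.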
So your default projection can be mapped to the case class you most expect to work with:
```scala
object Bars extends Table[Bar]("bar") {
  def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
  def name = column[String]("name")
  def * = id ~ name <> (Bar, Bar.unapply _)
}
```
Or you can even have a mapped projection for a single query:
```scala
case class Baz(name: String, num: Int)

// SELECT b.name, 1 FROM bars b WHERE b.id = 42;
val q1 = for (b <- Bars if b.id === 42)
  yield (b.name ~ 1 <> (Baz, Baz.unapply _))
```
Here the type of q1 is a Query with a projection mapped to Baz. When invoked, it returns a List of Baz objects:
```scala
val result: List[Baz] = q1.list
```
Finally, as an aside, `.?` offers Option lifting - the Scala way of handling values that may not exist.
```scala
(id ~ name)   // Projection2[Int, String]
// and, just for illustration:
(id.? ~ name) // Projection2[Option[Int], String]
```
Which, to wrap up, will work well with your original definition of Bar:
```scala
case class Bar(id: Option[Int] = None, name: String)
```
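As a plain-Scala illustration of why the Option[Int] id is convenient (the fresh and saved names and the Some(42) id are hypothetical, not Slick API): before an AutoInc insert the row has no id yet, and afterwards it does.

```scala
case class Bar(id: Option[Int] = None, name: String)

// Before insertion: the database has not assigned an id yet
val fresh = Bar(name = "new row")     // Bar(None, "new row")

// After an AutoInc insert: the id is known
val saved = fresh.copy(id = Some(42)) // Bar(Some(42), "new row")
```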
In response to a comment asking how Slick uses for comprehensions:
Somehow, monads can always appear and demand to be part of the explanation ...
For comprehensions are not specific to collections. They can be used with any kind of monad, and collections are just one of the many kinds of monads available in Scala.
But since collections are familiar, they make a good starting point for an explanation:
```scala
val ns = (1 to 100).toList // lists, for familiarity

val result = for {
  i <- ns
  if i * i % 2 == 0
} yield (i * i)
// result is a List[Int]: List(4, 16, 36, ...)
```
For comprehensions are syntactic sugar that Scala desugars into (possibly nested) method calls: the code above is (more or less) equivalent to:
```scala
ns.filter(i => i * i % 2 == 0).map(i => i * i)
```
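The equivalence is easy to check directly; a small sketch:

```scala
val ns = (1 to 100).toList

val viaFor = for {
  i <- ns
  if i * i % 2 == 0
} yield i * i

val viaMethods = ns.filter(i => i * i % 2 == 0).map(i => i * i)

// Both produce List(4, 16, 36, ...), element for element.
println(viaFor == viaMethods) // true
```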
Basically, anything with filter, map and flatMap methods (in other words, a monad) can be used in a for comprehension in place of ns. A good example is the Option monad. Here is the previous example where the same for statement works on both the List and the Option monad:
```scala
// (1)
val result = for {
  i <- ns            // ns is a List monad
  i2 <- Some(i * i)  // Some(i*i) is an Option
  if i2 % 2 == 0     // filter
} yield i2

// Slightly more contrived example:
def evenSqr(n: Int) = { // returns the square of a number
  val sqr = n * n       // only when the square is even
  if (sqr % 2 == 0) Some(sqr) else None
}

// (2)
val result2 = for {
  i <- ns
  i2 <- evenSqr(i) // i2 may or may not be defined for i!
} yield i2
```
For these examples, the translation would look like this:
```scala
// 1st example:
val result = ns.flatMap(i => Some(i * i)).filter(i2 => i2 % 2 == 0)

// or for the 2nd example:
val result2 = ns.flatMap(i => evenSqr(i))
```
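This, too, can be verified directly; a self-contained sketch of the second example and its flatMap translation (using a small range so the output is easy to see):

```scala
def evenSqr(n: Int): Option[Int] = {
  val sqr = n * n
  if (sqr % 2 == 0) Some(sqr) else None
}

val ns = (1 to 10).toList

val viaFor = for {
  i <- ns
  i2 <- evenSqr(i)
} yield i2

// Only the even squares survive: List(4, 16, 36, 64, 100)
println(viaFor == ns.flatMap(evenSqr)) // true
```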
In Slick, queries are monadic - they are just objects with map, flatMap and filter methods. So the for comprehension (shown in the explanation of the `*` method) just translates to:
```scala
val q = Query(Bars).filter(b => b.id === 42).map(b => b.name ~ 1)
```
As you can see, flatMap, map and filter are used to generate a Query, transforming Query(Bars) step by step with each call to filter and map. In the case of a collection these methods actually iterate over and filter the collection, but in Slick they are used to generate SQL. More details here: How does Scala Slick translate Scala code into JDBC?
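As a toy illustration only (this is not Slick's implementation; FakeQuery and its sql method are invented for this sketch), here is how map and filter calls can build up a description of a query instead of iterating over data:

```scala
// A toy "query" that records operations instead of executing them,
// the way Slick's Query accumulates information to generate SQL.
final case class FakeQuery(from: String, wheres: List[String], select: String) {
  def filter(pred: String): FakeQuery = copy(wheres = wheres :+ pred)
  def map(projection: String): FakeQuery = copy(select = projection)
  def sql: String =
    s"SELECT $select FROM $from" +
      (if (wheres.isEmpty) "" else wheres.mkString(" WHERE ", " AND ", ""))
}

val q = FakeQuery("bars b", Nil, "*")
  .filter("b.id = 42")
  .map("b.name, 1")

println(q.sql) // SELECT b.name, 1 FROM bars b WHERE b.id = 42
```

Calling filter and map never touches any rows; each call just returns a new description, which is only turned into SQL (here, the sql string) at the end.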