In this chapter, we explore the ideas of abstract data types further. In particular, we see that we can generalize from collections with higher-order methods to other kinds of data structures whose higher-order methods intuitively encapsulate computational effects.
Recall that an abstract data type is a data type whose representation is abstract and unavailable to the client.
It is a concept that we have seen multiple times. For example, the Map
and Set
types in the Scala standard library abstract the interface of a mathematical finite map or a finite set, respectively, while the actual representation is hidden from the client of the library. This enables the library to maintain a representation invariant on behalf of the client (e.g., representing a finite map as a balanced binary search tree to maintain logarithmic lookup, insertion, and deletion).
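As a small illustration (a sketch with made-up keys), the client of a Map interacts only with its interface:

```scala
// The client sees only the Map interface; the underlying representation
// (hash table, balanced tree, etc.) is hidden by the library.
val m = Map("one" -> 1, "two" -> 2)
val lookedUp = m.get("one")      // lookup through the interface: Some(1)
val updated = m + ("three" -> 3) // "insertion" returns a new Map
```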
We have seen that a library can provide higher-order methods to expose a view of the data type without exposing the internal representation (see Section 24.3). For example, using the map
method on a List
is a convenience
def inc(l: List[Int]): List[Int] = l.map(_ + 1)
inc(List(1, 2, 3))
defined function inc res0_1: List[Int] = List(2, 3, 4)
over direct recursion
def inc(l: List[Int]): List[Int] = l match {
case Nil => Nil
case h :: t => (h + 1) :: inc(t)
}
inc(List(1, 2, 3))
defined function inc res1_1: List[Int] = List(2, 3, 4)
The view of a Set
as a collection is only accessible via map
and related higher-order methods:
def inc(s: Set[Int]): Set[Int] = s.map(_ + 1)
inc(Set(1, 2, 3))
defined function inc res2_1: Set[Int] = Set(2, 3, 4)
Recall the distinction between effect-free and effect-ful computation. An effect-free (or pure or referentially transparent) computation is an evaluation of an expression that does nothing external to its final value, while an effect-ful (or side-effecting) computation does something that is not visible in its final value.
Perhaps the most basic effect is the possibility of error. For example, consider a function
toDoubleException: String => Double
that parses a string as a floating-point number and converts it into a double:
def toDoubleException(s: String): Double = s.toDouble
toDoubleException("1")
toDoubleException("4.2")
defined function toDoubleException res3_1: Double = 1.0 res3_2: Double = 4.2
Of course, not all strings correspond to floating-point numbers, so toDoubleException
throws an exception if it is unable to recognize the string as a floating-point number:
toDoubleException("hello")
java.lang.NumberFormatException: For input string: "hello" jdk.internal.math.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:2054) jdk.internal.math.FloatingDecimal.parseDouble(FloatingDecimal.java:110) java.lang.Double.parseDouble(Double.java:543) scala.collection.StringOps$.toDouble$extension(StringOps.scala:930) ammonite.$sess.cmd3$Helper.toDoubleException(cmd3.sc:1) ammonite.$sess.cmd4$Helper.<init>(cmd4.sc:1) ammonite.$sess.cmd4$.<clinit>(cmd4.sc:7)
This throwing of an exception is a side-effect because it is not captured in the return type Double
. That is, any Scala expression has the possibility of throwing an exception to bypass the expected type of the expression, so Scala is not a pure language.
With some discipline, we can attempt to program in a pure subset of Scala where we make explicit the possibility of error. For example, we can instead define
toDoubleOption: String => Option[Double]
to return an Option
:
def toDoubleOption(s: String): Option[Double] =
try { Some(s.toDouble) } catch { case _: NumberFormatException => None }
toDoubleOption("1")
toDoubleOption("4.2")
toDoubleOption("hello")
defined function toDoubleOption res5_1: Option[Double] = Some(value = 1.0) res5_2: Option[Double] = Some(value = 4.2) res5_3: Option[Double] = None
An Option[A]
type represents an optional value, and it is often used to represent possible error in computing a result of type A
. That is, either return Some(a)
of the result a
of type A
or None
to indicate error.
The trade-off is that working with an Option[Double]
is different from working with a Double
. For example, suppose we define a function toDoubleNoNaNOption
that excludes different versions of "NaN"
as a string before calling toDoubleOption
:
def toDoubleNoNaNOption(s: String): Option[Double] =
  // Do some work: trim the spaces from the ends of s
  s.trim match {
    // Check for an error condition: if now the string is empty
    case s if s.length == 0 => None
    // Continue with some work: normalize to upper case
    case s => s.toUpperCase match {
      // Check for an error condition: the trimmed and upper-cased string is "NAN"
      case s if s == "NAN" => None
      // Continue with some work: convert to an Option[Double]
      case s => toDoubleOption(s)
    }
  }
toDoubleNoNaNOption("nan")
toDoubleNoNaNOption(" nan ")
toDoubleNoNaNOption("NaN")
toDoubleNoNaNOption(" NaN ")
defined function toDoubleNoNaNOption res6_1: Option[Double] = None res6_2: Option[Double] = None res6_3: Option[Double] = None res6_4: Option[Double] = None
Or as another example, suppose we define a function addToDoubleOption
to convert two floating-point strings excluding "NaN"
and then add them:
def addToDoubleOption(s1: String, s2: String): Option[Double] =
toDoubleNoNaNOption(s1) match {
// If we get None, then we return None indicating error.
case None => None
// If we get Some, then we can continue to do work.
case Some(d1) => toDoubleNoNaNOption(s2) match {
// If we get None, then we return None indicating error.
case None => None
// If we get Some, then we can continue to do work.
case Some(d2) => Some(d1 + d2)
}
}
addToDoubleOption("1", "4.2")
addToDoubleOption("1", "hello")
addToDoubleOption("1", " nan")
defined function addToDoubleOption res7_1: Option[Double] = Some(value = 5.2) res7_2: Option[Double] = None res7_3: Option[Double] = None
This works well in carefully avoiding errors, with the Scala type checker making sure we check for None
. At the same time, there is a lot of None
handling scaffolding mixed in with the “work”.
Now, let’s think of Option[A]
as a zero-or-one element list. That is, None
is the zero element list and Some(a)
is the one element list with an a: A
. Now, we rewrite these two functions using the higher-order methods that we are used to using on lists:
def toDoubleNoNaNOption(s: String): Option[Double] =
Some(s)
// Do some work: trim the spaces from the end of s
.map(_.trim)
// Check for an error condition: if now the string is empty
.filter(_.length != 0)
// Continue with some work: normalize to upper case
.map(_.toUpperCase)
// Check for an error condition: the trimmed and upper-cased string is "NAN"
.filter(_ != "NAN")
// Continue with some work: convert to an Option[Double]
.flatMap(toDoubleOption(_))
defined function toDoubleNoNaNOption
def addToDoubleOption(s1: String, s2: String): Option[Double] =
toDoubleNoNaNOption(s1) flatMap { d1 =>
// If we get Some, then we can continue to do work.
toDoubleNoNaNOption(s2) map { d2 =>
// If we get Some, then we can continue to do work.
      d1 + d2
    }
  }
addToDoubleOption("1", "4.2")
addToDoubleOption("1", "hello")
addToDoubleOption("1", " nan")
defined function addToDoubleOption res9_1: Option[Double] = Some(value = 5.2) res9_2: Option[Double] = None res9_3: Option[Double] = None
Wow, the None
scaffolding is gone! The None
handling scaffolding is precisely what is factored into the map
, filter
, and flatMap
library methods. It is a good exercise to define these library methods:
Exercise 30.1 Implement map
for Option[A]
s:
def map[A, B](opt: Option[A])(f: A => B): Option[B] = ???
defined function map
Exercise 30.2 Implement filter
for Option[A]
s:
def filter[A](opt: Option[A])(f: A => Boolean): Option[A] = ???
defined function filter
Exercise 30.3 Implement flatMap
for Option[A]
s:
def flatMap[A, B](opt: Option[A])(f: A => Option[B]): Option[B] = ???
defined function flatMap
Note that the pattern of using a map
nested in a flatMap
is so common that a for
-yield
expression with multiple binders translates to exactly that. So for example, we can define addToDoubleOption
as follows:
def addToDoubleOption(s1: String, s2: String): Option[Double] =
for {
    d1 <- toDoubleNoNaNOption(s1)
    d2 <- toDoubleNoNaNOption(s2)
  } yield d1 + d2
addToDoubleOption("1", "4.2")
addToDoubleOption("1", "hello")
addToDoubleOption("1", " nan")
defined function addToDoubleOption res13_1: Option[Double] = Some(value = 5.2) res13_2: Option[Double] = None res13_3: Option[Double] = None
We can see the sequence of binders <-
corresponds to sequential composition where we first get the double d1
corresponding to s1
and second the double d2
corresponding to s2
to add them. If either step errors (i.e., results in a None
), then the whole for
-yield
expression evaluates to None
.
In either case of using flatMap
and map
or the for
-yield
expressions, we have mostly recovered the minimal scaffolding of effect-ful exceptions while remaining effect-free with explicit Option
s.
We have seen that Either[Err, A]
is another data type that is often used for representing error effects where the error case carries some data of type Err
. For example, we can use an Either
to save the exception in the error case:
def toDoubleEither(s: String): Either[NumberFormatException, Double] =
try { Right(s.toDouble) } catch { case e: NumberFormatException => Left(e) }
defined function toDoubleEither
And observe that the code that uses map
and flatMap
works with either type:
def addToDoubleEither(s1: String, s2: String): Either[NumberFormatException, Double] =
toDoubleEither(s1) flatMap { d1 =>
toDoubleEither(s2) map { d2 =>
      d1 + d2
    }
  }
addToDoubleEither("1", "4.2")
addToDoubleEither("1", "hello")
defined function addToDoubleEither res15_1: Either[NumberFormatException, Double] = Right(value = 5.2) res15_2: Either[NumberFormatException, Double] = Left( value = java.lang.NumberFormatException: For input string: "hello" )
def addToDoubleEither(s1: String, s2: String): Either[NumberFormatException, Double] =
for {
    d1 <- toDoubleEither(s1)
    d2 <- toDoubleEither(s2)
  } yield d1 + d2
addToDoubleEither("1", "4.2")
addToDoubleEither("1", "hello")
defined function addToDoubleEither res16_1: Either[NumberFormatException, Double] = Right(value = 5.2) res16_2: Either[NumberFormatException, Double] = Left( value = java.lang.NumberFormatException: For input string: "hello" )
The Scala standard library has a data type Try
specifically for representing exception effects:
import scala.util.Try
def toDoubleTry(s: String): Try[Double] =
Try(s.toDouble)
import scala.util.Try defined function toDoubleTry
def addToDoubleTry(s1: String, s2: String): Try[Double] =
toDoubleTry(s1) flatMap { d1 =>
// If we get Success, then we can continue to do work.
toDoubleTry(s2) map { d2 =>
// If we get Success, then we can continue to do work.
      d1 + d2
    }
  }
addToDoubleTry("1", "4.2")
addToDoubleTry("1", "hello")
defined function addToDoubleTry res18_1: Try[Double] = Success(value = 5.2) res18_2: Try[Double] = Failure( exception = java.lang.NumberFormatException: For input string: "hello" )
def addToDoubleTry(s1: String, s2: String): Try[Double] =
for {
    d1 <- toDoubleTry(s1)
    d2 <- toDoubleTry(s2)
  } yield d1 + d2
addToDoubleTry("1", "4.2")
addToDoubleTry("1", "hello")
defined function addToDoubleTry res19_1: Try[Double] = Success(value = 5.2) res19_2: Try[Double] = Failure( exception = java.lang.NumberFormatException: For input string: "hello" )
Another kind of effect is computation that is non-deterministic:
val rand = new scala.util.Random(0)
val r1 = rand.between(1,10)
val r2 = rand.between(1,10)
val r3 = rand.between(1,10)
val r4 = rand.between(1,10)
rand: scala.util.Random = scala.util.Random@3651e44b r1: Int = 7 r2: Int = 8 r3: Int = 5 r4: Int = 3
While we do not necessarily see computations that return List
as representing effects, we can represent non-determinism effects with a sequence:
val r = List(r1, r2, r3, r4)
r: List[Int] = List(7, 8, 5, 3)
And thus computations on top of non-deterministic computations correspond to applying List
methods. For example, let us show all pairs of results:
for {
  i <- r
  j <- r
} yield (i, j)
res22: List[(Int, Int)] = List( (7, 7), (7, 8), (7, 5), (7, 3), (8, 7), (8, 8), (8, 5), (8, 3), (5, 7), (5, 8), (5, 5), (5, 3), (3, 7), (3, 8), (3, 5), (3, 3) )
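This for-yield is shorthand for a map nested in a flatMap on List, as in the following sketch using the same result values:

```scala
val r = List(7, 8, 5, 3)
// The for-yield with two binders desugars to a flatMap nesting a map:
// the outer binder becomes flatMap, the innermost becomes map.
val pairs = r flatMap { i => r map { j => (i, j) } }
```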
The hallmark of imperative computation is mutation (sometimes called assignment or imperative update). For example, suppose we define a function freshVarImperative
that creates a globally unique variable name by keeping a counter:
var counter: Int = 0
def freshVarImperative: String = {
val x = s"x${counter}"
  counter += 1
  x
}
val x0 = freshVarImperative
val x1 = freshVarImperative
val x2 = freshVarImperative
counter: Int = 3
defined function freshVarImperative
x0: String = "x0"
x1: String = "x1"
x2: String = "x2"
To represent a mutation effect, we see that what freshVar
needs is the current counter that we view as input-output state of the freshVar
function:
def freshVar: Int => (Int, String) = { counter =>
val x = s"x${counter}"
(counter + 1, x)
}
val counter = 0
val (counter_, x0) = freshVar(counter)
val (counter__, x1) = freshVar(counter_)
val (counter___, x2) = freshVar(counter__)
defined function freshVar counter: Int = 0 counter_: Int = 1 x0: String = "x0" counter__: Int = 2 x1: String = "x1" counter___: Int = 3 x2: String = "x2"
We see the Int
as the input-output state where a call to freshVar
“updates” the state. Here, the “state” is the Int
representing the next available variable number. The contract of the freshVar
function is that counter
on input is the next available variable number, from which it forms the next variable name s"x${counter}"
. It also returns counter + 1
, the new next available variable number, conceptually “allocating” the one it used.
In the imperative version freshVarImperative
, we have to be careful about what code mutates counter
. However, in the functional version freshVar
, we have to be careful to thread the right version of counter
.
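To illustrate that hazard, here is a sketch of a threading bug where we accidentally pass a stale counter state:

```scala
// The state-passing version of freshVar from above.
def freshVar: Int => (Int, String) = { counter =>
  val x = s"x${counter}"
  (counter + 1, x)
}

val (counter_, x0) = freshVar(0)
// Bug: we should thread counter_ here, but we accidentally reuse 0,
// so the two "fresh" variables collide: x0 == x1 == "x0".
val (counter__, x1) = freshVar(0)
```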
We can improve this slightly with careful use of shadowing counter
:
val counter = 0
val (x0, x1, x2) = freshVar(counter) match {
case (counter, x0) => freshVar(counter) match {
case (counter, x1) => freshVar(counter) match {
case (counter, x2) => (x0, x1, x2)
}
}
}
counter: Int = 0 x0: String = "x0" x1: String = "x1" x2: String = "x2"
However, it is still arguably messy.
When there is repeated boilerplate that would be error-prone to get right each time, good engineers will implement a library so that they can type less boilerplate and more importantly never get it wrong.
Thus, a seemingly crazy idea is to ask, “Can we abstract a generic state-transforming function S => (S, A)
as a collection-like data type?” Let us call this data type a DoWith[S, A]
:
type DoWith[S, A] = S => (S, A)
defined type DoWith
which is a function that returns a result of type A
while computing “with” a state type S
.
Observe that freshVar
is a function of type DoWith[Int, String]
:
freshVar: DoWith[Int, String]
res27: Int => (Int, String) = ammonite.$sess.cmd24$Helper$$Lambda$2351/0x0000000800bd3840@270c20c5
Let us view a DoWith[S, A]
as a data type that stores a way to compute an A
using an input-output state of type S
. We see it is like a collection similar to Option[A]
, List[A]
, or Either[Err, A]
in that we can define map
:
def map[S, A, B](doer: DoWith[S, A])(f: A => B): DoWith[S, B] = { (s: S) =>
val (s_, a) = doer(s)
(s_, f(a))
}
defined function map
This map
function transforms a DoWith[S, A]
to a DoWith[S, B]
using a callback function f: A => B
. The implementation of this generic function creates a function that, when called with an s: S
, applies doer
to get an updated state s_
and a value a: A
, then calls f(a)
to get a value of type B
, which it returns with the updated state s_
. This function implements that careful threading of state that we saw above with counter
and freshVar
.
And we can similarly implement a flatMap
:
def flatMap[S, A, B](doer: DoWith[S, A])(f: A => DoWith[S, B]): DoWith[S, B] = { (s: S) =>
val (s_, a) = doer(s)
f(a)(s_)
}
defined function flatMap
that carefully threads the state values s
and s_
of type S
.
We can then use flatMap
and map
to create a function that threads the state of counter
through calls to freshVar
that is then called with the initial counter-state of 0
.
val counter = 0
val (counter___, (x0, x1, x2)) =
(flatMap(freshVar) { x0 =>
flatMap(freshVar) { x1 =>
map(freshVar) { x2 =>
(x0, x1, x2)
}
}
})(counter)
counter: Int = 0 counter___: Int = 3 x0: String = "x0" x1: String = "x1" x2: String = "x2"
Let us now implement a library class DoWith[S, A]
that encapsulates a function of type S => (S, A)
with map
and flatMap
methods following the above:
sealed class DoWith[S, A] private (doer: S => (S, A)) {
def map[B](f: A => B): DoWith[S, B] = new DoWith[S, B]({
(s: S) => {
val (s_, a) = doer(s)
(s_, f(a))
}
})
def flatMap[B](f: A => DoWith[S, B]): DoWith[S, B] = new DoWith[S, B]({
(s: S) => {
val (s_, a) = doer(s)
f(a)(s_)
}
})
def apply(s: S): (S, A) = doer(s)
}
object DoWith {
def doget[S]: DoWith[S, S] = new DoWith[S, S]({ s => (s, s) })
def doput[S](s: S): DoWith[S, Unit] = new DoWith[S, Unit]({ _ => (s, ()) })
def doreturn[S, A](a: A): DoWith[S, A] = new DoWith[S, A]({ s => (s, a) })
def domodify[S](f: S => S): DoWith[S, Unit] = new DoWith[S, Unit]({ s => (f(s), ()) })
}
import DoWith._
defined class DoWith defined object DoWith import DoWith._
In the above definition of the DoWith[S, A]
library class, we encapsulate the state-transforming function doer: S => (S, A)
. We go one step further in preventing implementation errors by requiring that the client create DoWith[S, A]
objects using only doget
, doput
, doreturn
, domodify
, map
, and flatMap
. That is, the client cannot directly construct DoWith[S, A]
objects with new
because the constructor is marked private
but instead has to use one of those six methods.
The doget[S]
method constructs a DoWith[S, S]
that makes the “current” state the result (i.e., s => (s, s)
). Intuitively, it “gets” the state.
The doput[S](s: S)
method constructs a DoWith[S, Unit]
that makes the given state s
the “current” state (i.e., _ => (s, ())
). Intuitively, it “puts” s
into the state.
The doreturn[S, A](a: A)
method constructs a DoWith[S, A]
that leaves the “current” state as-is and returns the given result a
(i.e., s => (s, a)
). It technically does not need to be given in the library, as it can be defined in terms of doget
and map
.
The domodify[S](f: S => S)
method constructs a DoWith[S, Unit]
that “modifies” the state using the given function f: S => S
(i.e., s => (f(s), ())
). It technically does not need to be given in the library, as it can be defined in terms of doget
, doput
, and flatMap
.
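For instance, those derived definitions can be sketched as follows (repeating a minimal DoWith with only doget and doput so the snippet stands alone; the names doreturnDerived and domodifyDerived are illustrative):

```scala
sealed class DoWith[S, A] private (doer: S => (S, A)) {
  def map[B](f: A => B): DoWith[S, B] =
    new DoWith[S, B]({ s => val (s_, a) = doer(s); (s_, f(a)) })
  def flatMap[B](f: A => DoWith[S, B]): DoWith[S, B] =
    new DoWith[S, B]({ s => val (s_, a) = doer(s); f(a)(s_) })
  def apply(s: S): (S, A) = doer(s)
}
object DoWith {
  def doget[S]: DoWith[S, S] = new DoWith[S, S]({ s => (s, s) })
  def doput[S](s: S): DoWith[S, Unit] = new DoWith[S, Unit]({ _ => (s, ()) })
}
import DoWith._

// doreturn in terms of doget and map: read the state, ignore it, return a.
def doreturnDerived[S, A](a: A): DoWith[S, A] = doget[S] map { _ => a }

// domodify in terms of doget, doput, and flatMap: read the state s, write f(s).
def domodifyDerived[S](f: S => S): DoWith[S, Unit] = doget[S] flatMap { s => doput(f(s)) }
```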
Let us now define freshVar
as a DoWith[Int, String]
:
def freshVar: DoWith[Int, String] = doget flatMap { counter =>
doput(counter + 1) map { _ => s"x${counter}" }
}
freshVar(0)
defined function freshVar res32_1: (Int, String) = (1, "x0")
Or we can use for
-yield
expressions:
def freshVar: DoWith[Int, String] =
for {
    counter <- doget
    _ <- doput(counter + 1)
  } yield s"x${counter}"
freshVar(0)
defined function freshVar res33_1: (Int, String) = (1, "x0")
And we can get our three fresh variables:
val counter = 0
val (counter___, (x0, x1, x2)) =
(freshVar flatMap { x0 =>
    freshVar flatMap { x1 =>
      freshVar map { x2 =>
        (x0, x1, x2)
      }
    }
})(counter)
counter: Int = 0 counter___: Int = 3 x0: String = "x0" x1: String = "x1" x2: String = "x2"
val counter = 0
val (counter___, (x0, x1, x2)) =
(for {
    x0 <- freshVar
    x1 <- freshVar
    x2 <- freshVar
  } yield (x0, x1, x2))(counter)
counter: Int = 0 counter___: Int = 3 x0: String = "x0" x1: String = "x1" x2: String = "x2"
Note that DoWith[S, A]
is often called State[S, A]
(e.g., in the Scala Cats library).
Data types Option[A]
, Either[Err,A]
, Try[A]
, List[A]
, and DoWith[S, A]
are similar in that they all have a flatMap
method. Having a flatMap
method corresponds to being able to sequentially compose them:
def getOption: Option[Int] = Some(1)
def getEither: Either[String, Int] = Right(2)
def getTry: Try[Int] = Try(3)
def getList: List[Int] = List(4, 5)
def getDoWith: DoWith[String, Int] = doreturn(6)
for { i1 <- getOption; i2 <- getOption } yield (i1, i2)
for { i1 <- getEither; i2 <- getEither } yield (i1, i2)
for { i1 <- getTry; i2 <- getTry } yield (i1, i2)
for { i1 <- getList; i2 <- getList } yield (i1, i2)
val doer = for { i1 <- getDoWith; i2 <- getDoWith } yield (i1, i2)
doer("")
defined function getOption defined function getEither defined function getTry defined function getList defined function getDoWith res36_5: Option[(Int, Int)] = Some(value = (1, 1)) res36_6: Either[String, (Int, Int)] = Right(value = (2, 2)) res36_7: Try[(Int, Int)] = Success(value = (3, 3)) res36_8: List[(Int, Int)] = List((4, 4), (4, 5), (5, 4), (5, 5)) doer: DoWith[String, (Int, Int)] = ammonite.$sess.cmd31$Helper$DoWith@2cbfa83f res36_10: (String, (Int, Int)) = ("", (6, 6))
A type constructor M
for a parametrized data type M[A]
that has a flatMap
method, as well as a method to construct an M[A]
from an A
, is called a monad. We see that Option[_]
, Either[Err,_]
, Try[_]
, List[_]
, and DoWith[S,_]
are monads where we write _
for the parametrized type.
Unfortunately, there are lots of confusing descriptions of monads out there. For our purposes, the essence is simply observing that it is a design pattern for data types. Defining a flatMap
method
class M[A] {
def flatMap[B](f: A => M[B]): M[B] = ???
}
defined class M
makes it possible to sequentially compose computations using that data type.
It is possible to take this one step further in defining an interface for a type constructor that has a monad interface:
trait Monad[M[_]] {
def flatMap[A, B](ma: M[A])(f: A => M[B]): M[B]
def pure[A](a: A): M[A]
}
defined trait Monad
The M[_]
parameter to Monad
says that M
is a type constructor with one parameter.
The pure[A]
method injects an A
into an M[A]
(e.g., Some
, Right
, Try
, List
, or doreturn
).
The following objects witness that Option
and List
satisfy the Monad
interface:
object optionMonad extends Monad[Option] {
def flatMap[A, B](opt: Option[A])(f: A => Option[B]): Option[B] = opt.flatMap(f)
def pure[A](a: A): Option[A] = Some(a)
}
object listMonad extends Monad[List] {
def flatMap[A, B](l: List[A])(f: A => List[B]): List[B] = l.flatMap(f)
def pure[A](a: A): List[A] = List(a)
}
defined object optionMonad defined object listMonad
We can now define functions that are generic over type constructors that satisfy the Monad
interface:
def cross[M[_], A, B](ma: M[A], mb: M[B])(m: Monad[M]): M[(A, B)] =
  m.flatMap(ma) { a => m.flatMap(mb) { b => m.pure(a, b) } }
cross[Option, Int, String](Some(1), Some("hello"))(optionMonad)
cross[List, Int, String](List(2, 3), List("hola", "bonjour"))(listMonad)
defined function cross res40_1: Option[(Int, String)] = Some(value = (1, "hello")) res40_2: List[(Int, String)] = List( (2, "hola"), (2, "bonjour"), (3, "hola"), (3, "bonjour") )
For a type constructor to be a proper monad, the flatMap
and pure
should satisfy some expected consistency conditions (cf. monad laws). In particular, we see that pure
is a kind of no-op, so we should be able to remove it in a flatMap
sequence without changing the result. And since flatMap
is a kind of sequencing operator, we should be able to change the grouping of the sequence without changing the result.
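As a sketch, we can spot-check these laws for Option with particular values (a check on one example, not a proof):

```scala
def f(x: Int): Option[Int] = Some(x + 1)
def g(x: Int): Option[Int] = if (x % 2 == 0) Some(x / 2) else None
def pure[A](a: A): Option[A] = Some(a)
val m: Option[Int] = Some(3)

// Left identity: injecting with pure and then binding f is just f.
assert(pure(3).flatMap(f) == f(3))
// Right identity: binding pure changes nothing.
assert(m.flatMap(pure) == m)
// Associativity: regrouping the flatMap sequence changes nothing.
assert(m.flatMap(f).flatMap(g) == m.flatMap { a => f(a).flatMap(g) })
```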
In other contexts, flatMap
is sometimes called “bind” or written as the >>=
operator, and pure
is sometimes called “return”.
It seems somewhat onerous to explicitly pass the optionMonad
or listMonad
instances to cross
when the type signature already says Option
or List
:
cross[Option, Int, String](Some(1), Some("hello"))(optionMonad)
cross[List, Int, String](List(2, 3), List("hola", "bonjour"))(listMonad)
res41_0: Option[(Int, String)] = Some(value = (1, "hello")) res41_1: List[(Int, String)] = List( (2, "hola"), (2, "bonjour"), (3, "hola"), (3, "bonjour") )
Scala does have an advanced feature to automatically pass particular given values of particular types (cf. contextual parameters). In particular, we expect optionMonad
and listMonad
to be the only instances of Monad[Option]
and Monad[List]
that would make sense, respectively. With contextual parameters, we can instruct the Scala compiler to pass either optionMonad
or listMonad
whenever an instance of type Monad[Option]
or Monad[List]
is needed, respectively:
trait Monad[M[_]] {
def flatMap[A, B](ma: M[A])(f: A => M[B]): M[B]
def pure[A](a: A): M[A]
}
implicit object optionMonad extends Monad[Option] {
def flatMap[A, B](opt: Option[A])(f: A => Option[B]): Option[B] = opt.flatMap(f)
def pure[A](a: A): Option[A] = Some(a)
}
implicit object listMonad extends Monad[List] {
def flatMap[A, B](l: List[A])(f: A => List[B]): List[B] = l.flatMap(f)
def pure[A](a: A): List[A] = List(a)
}
defined trait Monad defined object optionMonad defined object listMonad
The implicit
keyword states that optionMonad
and listMonad
are the canonical instances of type Monad[Option]
and Monad[List]
, respectively.
We then state that cross
takes in a parameter of type Monad[M]
implicitly:
def cross[M[_]: Monad, A, B](ma: M[A], mb: M[B]): M[(A, B)] = {
  val m = implicitly[Monad[M]]
  m.flatMap(ma) { a => m.flatMap(mb) { b => m.pure(a, b) } }
}
defined function cross
The M[_]: Monad
declaration says M
has a canonical Monad[M]
instance that should be passed as an argument to cross
. The method call to implicitly[Monad[M]]
gets that implicit parameter (though it is also possible to explicitly declare m
as an implicit parameter of cross
).
The result is that we can call cross
without explicitly passing the optionMonad
or listMonad
instances:
cross[Option, Int, String](Some(1), Some("hello"))
cross[List, Int, String](List(2, 3), List("hola", "bonjour"))
res44_0: Option[(Int, String)] = Some(value = (1, "hello")) res44_1: List[(Int, String)] = List( (2, "hola"), (2, "bonjour"), (3, "hola"), (3, "bonjour") )
Note that the implicit
keyword is, unfortunately, overloaded for many purposes in Scala 2. This language feature has been significantly revised and improved in Scala 3.
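As a hedged sketch of the Scala 3 analogue (assuming Scala 3 syntax), the canonical instance is declared with given and received with using:

```scala
trait Monad[M[_]]:
  def flatMap[A, B](ma: M[A])(f: A => M[B]): M[B]
  def pure[A](a: A): M[A]

// `given` declares the canonical Monad[Option] instance.
given optionMonad: Monad[Option] with
  def flatMap[A, B](opt: Option[A])(f: A => Option[B]): Option[B] = opt.flatMap(f)
  def pure[A](a: A): Option[A] = Some(a)

// `using` receives the instance implicitly at the call site.
def cross[M[_], A, B](ma: M[A], mb: M[B])(using m: Monad[M]): M[(A, B)] =
  m.flatMap(ma) { a => m.flatMap(mb) { b => m.pure((a, b)) } }

val crossed = cross(Option(1), Option("hello"))
```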