Daniel Westheide

on making software

Introducing Kontextfrei

For the past 15 months, I have been working on a new library on and off. So far, I have been mostly silent about it, because I didn’t feel like it was ready for a wider audience to use – even though we had been using it successfully in production for a while. However, I already broke my silence back in April, when I gave a talk about it at this year’s ScalarConf in Warsaw, so a blog post is overdue in which I explain what this library does and why I set out to write it in the first place.

Last year, I was involved in a project that required my team to implement a few Spark applications. For most of them, the business logic was rather complex, so we tried to implement it in a test-driven way, using property-based tests.

The pain of unit-testing Spark applications

At first glance, it looks like this is a great match. When it comes down to it, a Spark application consists of IO stages (reading from and writing to data sources) and transformations of data sets. The latter constitute our business logic and are relatively easy to separate from the IO parts. They are mostly built from pure functions. Functions like these are usually a perfect fit for test-driven development as well as for property-based testing.

However, all was not great. It may be old news to you if you have been working with Apache Spark for a while, but it turns out that writing real unit tests is not actually supported that well by Spark, and as a result, it can be quite painful. The thing is that in order to create an RDD, we always need a SparkContext, and the most lightweight mechanism for getting one is to create a local SparkContext. Creating a local SparkContext means that we start up a server, which takes a few seconds, and testing our properties with lots of different generated input data takes a really long time. We certainly lose the fast feedback loop we are used to from developing web applications, for example.
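
For illustration, here is roughly what the slow test fixture looks like – a minimal sketch, with an arbitrary master URL and application name:

import org.apache.spark.SparkContext

// Spinning up this local "cluster" takes a few seconds per test run –
// far too slow for a tight property-based testing feedback loop.
val sparkContext = new SparkContext("local[2]", "unit-tests")
val lines = sparkContext.parallelize(Seq("foo bar", "bar"))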

Abstracting over RDDs with kontextfrei

Now, we could confine ourselves to only unit-testing the functions that we pass to RDD operators, so that our unit tests do not have any dependency on Spark and can be verified as quickly as we are used to. However, this leaves quite a lot of business logic uncovered. Instead, at a Scala hackathon last May, I started to experiment with the idea of abstracting over Spark’s RDD, and kontextfrei was born.

The idea is the following: By abstracting over RDD, we can write business logic that has no dependency on the RDD type. This means that we can also write test properties that are Spark-agnostic. Any Spark-agnostic code like this can either be executed on an RDD (which you would do in your actual Spark application and in your integration tests), or on a local and fast Scala collection (which is really great for unit tests that you continuously run locally during development).

Obtaining the library

It’s probably easier to show how this works than to describe it with words alone, so let’s look at a really minimalistic example, the traditional word count. First, we need to add the necessary dependencies to our SBT build file. Kontextfrei consists of two different modules, kontextfrei-core and kontextfrei-scalatest. The former is what you need to abstract over RDD in your main code base, the latter gives you some additional support for writing your RDD-independent tests using ScalaTest with ScalaCheck. Let’s add them to our build.sbt file, together with the usual Spark dependency you would need anyway:

resolvers += "dwestheide" at "https://dl.bintray.com/dwestheide/maven"
libraryDependencies += "com.danielwestheide" %% "kontextfrei-core-spark-2.2.0" % "0.6.0"
libraryDependencies += "com.danielwestheide" %% "kontextfrei-scalatest-spark-2.2.0" % "0.6.0" % "test,it"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.2.0"

Please note that in this simple example, we create a Spark application that you can execute in a self-contained way. In the real world, you would add spark-core as a provided dependency and create an assembly JAR that you pass to spark-submit.
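
For reference, marking spark-core as provided is a one-line change in build.sbt (a sketch; the assembly setup itself depends on whatever plugin you prefer):

libraryDependencies += "org.apache.spark" %% "spark-core" % "2.2.0" % "provided"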

Implementing the business logic

Now, let’s see how we can implement the business logic of our word count application using kontextfrei. In our example, we define all of our business logic in a trait called WordCount:

package com.danielwestheide.kontextfrei.wordcount

import com.danielwestheide.kontextfrei.DCollectionOps
import com.danielwestheide.kontextfrei.syntax.SyntaxSupport

trait WordCount extends SyntaxSupport {

  def counts[F[_]: DCollectionOps](text: F[String]): F[(String, Long)] =
    text
      .flatMap(line => line.split(" "))
      .map(word => (word, 1L))
      .reduceByKey(_ + _)
      .sortBy(_._2, ascending = false)

  def formatted[F[_]: DCollectionOps](counts: F[(String, Long)]): F[String] =
    counts.map {
      case (word, count) => s"$word,$count"
    }
}

The first thing you’ll notice is that the implementations of counts and formatted look exactly the same as they would if you were programming against Spark’s RDD type. You could literally copy and paste RDD-based code into a program written with kontextfrei.

The second thing you notice is that the method signatures of counts and formatted contain a type constructor, declared as F[_], which is constrained by a context bound: For any concrete type constructor we pass in here, there must be an instance of kontextfrei’s DCollectionOps typeclass. In our business logic, we do not care what concrete type constructor is used for F, as long as the operations defined in DCollectionOps are supported for it. This way, we are liberating our business logic from any dependency on Spark, and specifically on the annoying SparkContext.
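
The context bound is merely syntactic sugar for an implicit parameter, so counts could equivalently be written like this – same body, just with the evidence spelled out:

// equivalent desugaring of the context bound into an implicit parameter list
def counts[F[_]](text: F[String])(implicit ops: DCollectionOps[F]): F[(String, Long)] =
  text
    .flatMap(line => line.split(" "))
    .map(word => (word, 1L))
    .reduceByKey(_ + _)
    .sortBy(_._2, ascending = false)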

In order to be able to use the familiar syntax we know from the RDD type, we mix in kontextfrei’s SyntaxSupport trait, but you could just as well use an import instead, if that’s more to your liking.

Plugging our business logic into the Spark application

At the end of the day, we want to be able to have a runnable Spark application. In order to achieve that, we must plug our Spark-agnostic business logic together with the Spark-dependent IO parts of our application. Here is what this looks like in our word count example:

package com.danielwestheide.kontextfrei.wordcount

import com.danielwestheide.kontextfrei.rdd.RDDOpsSupport
import org.apache.spark.SparkContext

object Main extends App with WordCount with RDDOpsSupport {

  implicit val sparkContext: SparkContext =
    new SparkContext("local[1]", "word-count")

  val inputFilePath = args(0)
  val outputFilePath = args(1)

  try {

    val textFile   = sparkContext.textFile(inputFilePath, minPartitions = 2)
    val wordCounts = counts(textFile)
    formatted(wordCounts).saveAsTextFile(outputFilePath)

  } finally {
    sparkContext.stop()
  }
}

Our Main object mixes in our WordCount trait as well as kontextfrei’s RDDOpsSupport, which proves to the compiler that we have an instance of the DCollectionOps typeclass for the RDD type constructor. In order to prove this, we also need an implicit SparkContext. Again, instead of mixing in this trait, we can also use an import.

Now, our Main object is all about doing some IO and integrating our business logic into it.

Writing Spark-agnostic tests

So far so good. We have liberated our business logic from any dependency on Spark, but what do we gain from this? Well, now we are able to write our unit tests in a Spark-agnostic way as well. First, we define a BaseSpec which inherits from kontextfrei’s KontextfreiSpec and mixes in a few other goodies from kontextfrei-scalatest and from ScalaTest itself:

package com.danielwestheide.kontextfrei.wordcount

import com.danielwestheide.kontextfrei.scalatest.KontextfreiSpec
import com.danielwestheide.kontextfrei.syntax.DistributionSyntaxSupport
import org.scalactic.anyvals.PosInt
import org.scalatest.prop.GeneratorDrivenPropertyChecks
import org.scalatest.{MustMatchers, PropSpecLike}

trait BaseSpec[F[_]]
    extends KontextfreiSpec[F]
    with DistributionSyntaxSupport
    with PropSpecLike
    with GeneratorDrivenPropertyChecks
    with MustMatchers {

  implicit val config: PropertyCheckConfiguration =
    PropertyCheckConfiguration(minSuccessful = PosInt(100))
}

BaseSpec, like our WordCount trait, takes a type constructor, which it simply passes along to the KontextfreiSpec trait. We will get back to that one in a minute.

Our actual test properties can now be implemented for any type constructor F[_] for which there is an instance of DCollectionOps. We define them in a trait WordCountProperties, which also has to be parameterized by a type constructor:

package com.danielwestheide.kontextfrei.wordcount

trait WordCountProperties[F[_]] extends BaseSpec[F] with WordCount {

  import collection.immutable._

  property("sums word counts across lines") {
    forAll { (wordA: String) =>
      whenever(wordA.nonEmpty) {
        val wordB = wordA.reverse + wordA
        val result =
          counts(Seq(s"$wordB $wordA $wordB", wordB).distributed)
            .collectAsMap()
        assert(result(wordB) === 3)
      }
    }
  }

  property("does not have duplicate keys") {
    forAll { (wordA: String) =>
      whenever(wordA.nonEmpty) {
        val wordB = wordA.reverse + wordA
        val result =
          counts(Seq(s"$wordA $wordB", s"$wordB $wordA").distributed)
        assert(
          result.keys.distinct().collect().toList === result.keys
            .collect()
            .toList)
      }
    }
  }

}

We want to be able to test our Spark-agnostic properties both against fast Scala collections as well as against RDDs in a local Spark cluster. To get there, we will need to define two test classes, one in the test sources directory, the other one in the it sources directory. Here is the unit test:

package com.danielwestheide.kontextfrei.wordcount

import com.danielwestheide.kontextfrei.scalatest.StreamSpec

class WordCountSpec extends BaseSpec[Stream]
  with StreamSpec
  with WordCountProperties[Stream]

We mix in BaseSpec and pass it the Stream type constructor. Stream has the same shape as RDD, but it is a Scala collection. The KontextfreiSpec trait extended by BaseSpec defines an abstract implicit DCollectionOps for its type constructor. By mixing in StreamSpec, we get an instance of DCollectionOps for Stream. When we implement our business logic, we can run the WordCountSpec test and get instantaneous feedback. We can use SBT’s triggered execution and have it run our unit tests upon every detected source change, using ~test, and it will be really fast.

In order to make sure that none of the typical bugs that you would only notice in a Spark cluster have sneaked in, we also define an integration test, which tests exactly the same properties:

package com.danielwestheide.kontextfrei.wordcount

import com.danielwestheide.kontextfrei.scalatest.RDDSpec
import org.apache.spark.rdd.RDD

class WordCountIntegrationSpec extends BaseSpec[RDD]
  with RDDSpec
  with WordCountProperties[RDD]

This time, we mix in RDDSpec because we parameterize BaseSpec with the RDD type constructor.

Design goals

It was an explicit design goal to stick to the existing Spark API as closely as possible. This allows people with existing Spark code bases to switch to kontextfrei smoothly, or even to migrate only parts of their application without too much hassle, with the benefit of now being able to cover their business logic with the tests it has been missing, without the usual pain.

An alternative to this, of course, would have been to build this library based on the ever popular interpreter pattern. To be honest, I wish Spark itself were using this pattern – other libraries like Apache Crunch have shown successfully that this can help tremendously with enabling developers to write tests for the business logic of their applications. If Spark were built on those very principles, there wouldn’t be any reason for kontextfrei to exist at all.

Limitations

kontextfrei is still a young library, and while we have been using it in production in one project, I do not know of any other adopters. One of its limitations is that it doesn’t yet support all operations defined on the RDD type – but we are getting closer. In addition, I have yet to find a clever way to support broadcast variables and accumulators. And of course, who is using RDDs anyway in 2017? While I do think that there is still room for RDD-based Spark applications, I am aware that many people have long moved on to Datasets and to Spark Streaming. It would be nice to create a similar typeclass-based abstraction for Datasets and for streaming applications, but I haven’t had the time to look deeper into what would be necessary to implement either of those.

Summary

kontextfrei is a Scala library that aims to provide developers with a faster feedback loop when developing Apache Spark applications. To achieve that, it enables you to write the business logic of your Spark application, as well as your test code, against an abstraction over Spark’s RDD.

I would love to hear your thoughts on this approach. Do you think it’s worth defining the biggest typeclass ever and reimplementing the RDD logic for Scala collections for testing purposes? Please, if this looks interesting, do try it out. I am always interested in feedback and in contributions of all kinds.

The Empathic Programmer

In 1999, Andrew Hunt and Dave Thomas, in their seminal book, demanded that programmers be pragmatic. Ten years later, Chad Fowler, in his excellent book on career development, asked programmers to be passionate. Even today, I still consider a lot of the advice in both of these books to be incredibly valuable, especially Fowler’s book that helped me a lot, personally.

Nevertheless, in recent years, I have witnessed again and again that one other quality in programmers is at least as important and that it hasn’t even seen a fraction of the attention it deserves. The programmer we should all strive to be is the empathic programmer. Of course, I am not the only one, let alone the first one, to realize that. For starters, in my bubble, Benjamin Reitzimmer wrote an excellent post about what he considers to be important qualities of a mature developer a while ago, and empathy is one of them. I consider a lack of empathy to be the root cause for some of the biggest problems in our industry and in the tech community. In this post, I want to share some observations on how a lack of empathy leads to problems. Consider it a call to strive for more empathy.

So what is empathy? Here is a definition from Merriam-Webster:

the action of understanding, being aware of, being sensitive to, and vicariously experiencing the feelings, thoughts, and experience of another of either the past or present without having the feelings, thoughts, and experience fully communicated in an objectively explicit manner; also: the capacity for this

Empathy at the workplace

It shouldn’t come as a surprise that the ability to show empathy can come in handy in any kind of job that involves working with other people, including the job as a programmer. This is true even if you work remotely – the other messages you see in your Slack channels are not all coming from bots. There are actual human beings behind them.

One of the situations where we often forget to think about that is code reviews. Just writing down what is wrong with a pull request, without thinking about tone, can easily lead to the creator of the pull request feeling personally offended. April Wensel has some good advice on code reviews. What’s crucial is to develop some sensitivity for how your words will be perceived by the receiver, which requires you to put yourself in their shoes, see through their eyes, and reflect on how they will feel. The better you know the person, the easier this is; otherwise you will have to make some assumptions, but I think that’s still far better than not reflecting at all on how the other person will feel.

Another workplace situation where I have often seen a lack of empathy is when members of two different teams need to collaborate to solve a problem or get a feature done. In some companies, I have seen an odd, competitive “us versus them” attitude between teams. This phenomenon has been explored by social and evolutionary psychologists, and while such a behaviour might still be in our nature, that doesn’t mean that we cannot try to overcome it. A variant of “us versus them” is “developers versus managers”. We developers have a hard time understanding why managers do what they do, but frankly, often, we don’t try very hard. I have often seen developers taking on a very defensive stance against managers, and of course, the relationship between managers and developers in these cases was rather chilly. Getting to know “the other side” would certainly help to empathize with managers. Understanding why they act in a specific way is absolutely necessary in order to get to a healthy relationship with them.

Empathy in the tech community

Empathy is not only important at your workplace, but also very much so when you are interacting with others in our community, be it on mailing lists, conferences, or when communicating with users of your open source library, or developers of an open source library you are using. In some of these situations, a lack of empathy can strengthen exclusion, ultimately leading to a closed community that is perceived as elitist and arrogant.

As a developer using an open source library, empathize with the developers of the library before you start complaining about a bug, or better yet, a missing feature. Sam Halliday wrote an interesting post called The Open Source Entitlement Complex. It’s hard to believe, but apparently, many users of open source libraries have this attitude that the developers of these libraries are some kind of service provider, happily working for free to do exactly what you want. This is not how it works. The same way that wording and tone are important in code reviews, try to empathize with the developers who spend their free time on this library you use. Serving you and helping you out because you didn’t read the documentation is probably not their highest priority in life, so don’t treat them as if it is.

On the other hand, when presenting your open source library to potential users, consider how these people will feel about that presentation. Does it make them feel respected? Does it make them feel welcome? I am sorry to disappoint you, but I think that a foo bar “for reasonable people” does not have that effect. Personally, I find this to be very condescending and think it will intimidate a lot of people and turn them away. It implies that any other way than yours is not reasonable, and that, hence, people who have not used your library yet, but some different approach, are unreasonable people. As library authors, let’s show some empathy as well towards our potential users. As always in tech, there is no silver bullet, and there are trade-offs. There are probably perfectly good reasons why someone has been using a different library so far, and maybe even after looking at your library, there will still be good reasons not to use yours. Even if you are convinced that your library is so much better, you aren’t exactly creating an open and welcoming atmosphere by basically telling people visiting your project page that they are unreasonable for using anything else.

If you are at a tech conference and you ask women whether they are developers at the very beginning of your conversation, but don’t do the same with men, you are probably not doing that out of malice, but because you don’t see many women at tech conferences who are actually developers. Nevertheless, to the receiver, this seemingly harmless and neutral question doesn’t come across like that at all. She has probably heard this question many times, and constantly hearing doubts about whether you are really a programmer doesn’t exactly make you feel welcome, or confident. Show some empathy when you talk to other people at tech conferences. Imagine what it would be like to constantly be doubted, for example. If you don’t see a need for being inclusive, that’s probably because you had no problem being included in the community. This likely means that you are a man, and probably white. Since most people around you are like you, chances are you don’t even know any women or other unprivileged people who are developers. The problem with being privileged is that you don’t notice it. Talk to women at conferences and let them tell you about their experiences. By showing empathy, you can create a more welcoming environment of inclusion and foster diversity.

Summary

These are my two cents about empathy, and the lack thereof, in the tech community, and how it relates to inclusion and diversity. Empathy is important not only at the workplace, when interacting with co-workers, but also when we are participating in the tech community, as conference visitors, open source developers, and users of open source libraries. Only by showing empathy can we create an inclusive and open community. Let’s try to be more aware of the effects we have on each other, and act accordingly. Thanks!

When Option Is Not Good Enough

Recently, I tweeted about an observation I have made a couple of times in various Scala projects:

Since 140 characters are not nearly enough to discuss this issue appropriately, and there were a few questions around this whole topic, I decided to put my thoughts down in a blog post, providing a few examples of what I am actually talking about.

Option is great, Option is good!

As Scala programmers, we can all give ourselves a pat on the back for being upright users of the Option type. Option is great! It allows us to express in types that a value may or may not be there. Option is good! It allows us to work with potentially missing values in a safe and elegant way, without having to check for presence all the time or risking those dreaded NullPointerExceptions. All hail Option!

The problem of overloaded semantics

Option is also bad! Or, more precisely, the way that this type is being used is somewhat problematic. The semantics of the Option type are pretty clear: it is about potential absence of a value. For example, if you try to get a value for a specific key from a Map, the result may or may not be there:

scala> Map("DE" -> "Germany", "FR" -> "France") get "DE"
res0: Option[String] = Some(Germany)

This usage of the Option type is consistent with its semantics. However, sometimes, we attach different or additional meaning to this type. An example of this has been summarised by @tksfz as a reply to my tweet:

Since this is such a good example, I decided to shamelessly make use of it to explain the problem. Imagine you are developing a system that allows users to search for various offers, both by retailers and private sellers. A very simplified version of the search function could look like this:

def searchOffers(
  productTitle: Option[String],
  retailer: Option[Retailer]
  ): Seq[Offer] = ???

Apparently, there are two search criteria you can provide to filter the results: the product, and the retailer offering the product.

But what does this really mean? It looks like we have attached some new semantics to the Option type. It seems that if productTitle is None, the user wants to search for all offers, regardless of the product title, which means that None has the meaning of a wildcard. However, if productTitle is a Some, is this supposed to be an exact match or does the provided search string just have to be contained in the product title?

For the retailer, the same semantics might apply if the Option is undefined. Or maybe, in this case, None actually means that the user wants to search for offers that are not provided by a professional retailer.

Who knows? As a team member seeing this code for the first time, or coming back to it after a few months, I would probably be confused.

The problem is that we are overloading the semantics of None and Some. Overloading None in particular is very similar to how null in Java is sometimes used with meanings that go beyond simple absence.

Towards meaningful types

Luckily, we can do better in Scala. Whenever someone in your team is confused about the exact meaning of an Option in your code, this is a good indicator that you should introduce your own algebraic data type that captures the implicit semantics of your domain. In the example above, something like this would probably be clearer for the reader:

sealed trait SearchCriteria
object SearchCriteria {
  final case object MatchAll extends SearchCriteria
  final case class Contains(s: String) extends SearchCriteria
}

sealed trait RetailerCriteria
object RetailerCriteria {
  final case object AnyRetailer extends RetailerCriteria
  final case class Only(retailer: Retailer) extends RetailerCriteria
}

def searchOffers(
  product: SearchCriteria,
  retailer: RetailerCriteria
  ): Seq[Offer] = ???

Introducing your own algebraic data types here allows you to close the gap between the language of your domain and the one used in your code, bringing you closer to a ubiquitous language, one of the core values in domain-driven design.

In this example, any confusion about whether the product title is an exact search in the Some case or whether None is a wildcard is now eliminated. In the searchOffers implementation, we can simply use pattern matching on the SearchCriteria and RetailerCriteria, and since they are sealed traits, we will get a warning if our pattern matching is not exhaustive, i.e. it does not cover all the cases. If you want to read more about algebraic data types and sealed traits in Scala, read this excellent blog post by Noel Welsh.
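
To make this concrete, here is a sketch of what such a pattern match inside searchOffers could look like – allOffers and the fields of Offer are made up for illustration:

def searchOffers(product: SearchCriteria, retailer: RetailerCriteria): Seq[Offer] =
  allOffers.filter { offer =>
    // the compiler warns us if we forget a case of either sealed trait
    val productMatches = product match {
      case SearchCriteria.MatchAll    => true
      case SearchCriteria.Contains(s) => offer.title.contains(s)
    }
    val retailerMatches = retailer match {
      case RetailerCriteria.AnyRetailer => true
      case RetailerCriteria.Only(r)     => offer.retailer == r
    }
    productMatches && retailerMatches
  }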

Now, you might say that SearchCriteria and RetailerCriteria have the same shape as Option, apart from the fact that they are not generic. Wouldn’t it be good enough to use some type aliases then in order to get the benefits of the ubiquitous language?
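
Such aliases would look like this – they read nicely at the call site, but the compiler still sees a plain Option, with all its ambiguity, underneath:

type SearchCriteria = Option[String]      // is None a wildcard? is Some an exact match?
type RetailerCriteria = Option[Retailer]  // does None mean any retailer, or no retailer?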

You could certainly do that, but having your own algebraic data types is more future-proof. You can easily extend your algebraic data types to enable additional functionality. If we want to allow users to do an exact search for the product title, and to specify that they are only interested in articles that are not offered by retailers, the following will do the trick:

sealed trait SearchCriteria
object SearchCriteria {
  final case object MatchAll extends SearchCriteria
  final case class Contains(s: String) extends SearchCriteria
  final case class Exactly(s: String) extends SearchCriteria
}

sealed trait RetailerCriteria
object RetailerCriteria {
  final case object AnyRetailer extends RetailerCriteria
  final case class Only(retailer: Retailer) extends RetailerCriteria
  final case object NoRetailer extends RetailerCriteria
}

Since we are already using our own algebraic data types, no big refactorings should be necessary.

Of course, it is possible to encode the retailer criteria as an Either[Unit, Option[Retailer]], but can you tell me immediately what each of the possible cases is supposed to mean?
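
To illustrate, here is one arbitrary mapping for that encoding – someRetailer stands in for any Retailer value, and the fact that you have to guess (or document) which case means what is exactly the problem:

val anyRetailer: Either[Unit, Option[Retailer]] = Right(None)
val noRetailer: Either[Unit, Option[Retailer]]  = Left(())
val onlyThisOne: Either[Unit, Option[Retailer]] = Right(Some(someRetailer)) // someRetailer: Retailer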

More examples

The previous example was mainly about Option being used in function arguments. It seems like this is rarely a good idea. Here is another example where Option is used as a field of a case class and as a return type of a function, with confusing semantics.

Imagine you are working at Nextflix, the next big platform for watching TV series online. Things being as they are, you need to block certain TV series from users located in specific countries. To do that, you could make use of a filter chain in your web application. One filter in this scenario needs to immediately return a response if the content is blocked in the user’s country, or forward to the next filter in the chain if the content is accessible from the user’s country. Here is what this could look like in Scala code:

def checkTerritory(request: Request): Option[Block] = {
  val territory = territoryFrom(request)
  if (accepted(territory)) None
  else Some(Block.requestsFrom(territory))
}

def contentFilter(request: Request, next: Request => Response):  Response = {
  val blockedResponse = for {
    restriction <- checkTerritory(request)
  } yield requestBlockedResponse(restriction)
  blockedResponse getOrElse next(request)
}

If the Option returned by checkTerritory is None, that’s the happy path and we can call the next filter in the chain. If it is defined, however, we need to short-circuit and immediately return a response informing the user that the content is blocked. Since Option is success-biased and short-circuits if it is None, this doesn’t look very intuitive to me.

What if we had our own algebraic data type for the verdict on the territory that a request is from? Such a solution could look like this:

sealed abstract class Verdict
final case object Pass extends Verdict
final case class Block(reason: String) extends Verdict

def checkTerritory(request: Request): Verdict = {
  val territory = territoryFrom(request)
  if (accepted(territory)) Pass
  else Block.requestsFrom(territory)
}

def contentFilter(request: Request, next: Request => Response):  Response = {
  checkTerritory(request) match {
    case Pass => next(request)
    case Block(reason) => requestBlockedResponse(reason)
  }
}

I find that to be a lot more readable, and again, it has the advantage that it is easier to extend this in the future with additional verdict types.

Summary

It is often possible to express a concept from your domain as an Option, or if that doesn’t work, as an Either. Nevertheless, it is sometimes better to not use these generic types. If your actual semantics are different from potential absence of values, don’t force it, as this causes unnecessary indirection when reasoning about the code and the domain. Unsurprisingly, this can easily lead to bugs. The domain logic is easier to understand if you use your own types from the ubiquitous language.

Do you have more examples from your projects where the lack of custom algebraic data types has led to bugs or code that is difficult to understand? Feel free to share those in the comments or via Twitter.

Put Your Writes Where Your Master Is: Compile-time Restriction of Slick Effect Types

When the access patterns of a service are such that there are a lot more reads than writes, a common practice for scaling it horizontally is to have the service talk to a master database only for operations that result in state changes, while all read-only operations are performed against one or more read slaves. Those read replicas are only eventually consistent with the master.

This practice is an easy solution if you don’t want to or cannot go the full CQRS way. All you need to do is maintain two separate data sources and point all your write operations to the master, and all your read operations to your slave.

However, how do you make sure that you’re actually putting your writes where your master is? It’s easy to accidentally get this wrong, and usually, you will only find out at runtime, when the read slave refuses to perform your service’s writes.

Wouldn’t it be nice if you could already verify at compile-time that your operations are hitting the correct database?

With Slick 3, the latest version of Typesafe’s functional-relational mapper, this is actually very easy to do, provided you know a bit about Scala’s type system and Slick’s notion of database actions and effect types.

In this blog post, I will explain all you need to know in order to restrict the evaluation of Slick database actions based on their effect types, resulting in the elimination of yet another source of bugs.

The full example application on which the code snippets in this blog post are based is available on GitHub.

Database actions

Whether you want to query a table, insert or update a row, or update your database schema, in Slick 3, each of those things is expressed as a DBIOAction[+R, +S <: NoStream, -E <: Effect]. R is the result type of the action, and S indicates the result type for streaming results, where the super-type NoStream refers to non-streaming database actions.

For this article, those two type parameters are not of importance, however. Let’s look at the third parameter instead.

The Effect type

E is some type of Effect and describes the database action’s effect type. Slick 3 defines the following sub-types of Effect:

trait Effect
object Effect {
  trait Read extends Effect
  trait Write extends Effect
  trait Schema extends Effect
  trait Transactional extends Effect
  trait All extends Read with Write with Schema with Transactional
}

The Effect type is a so-called phantom type. This means that we never create any instances of Effect at runtime. Rather, the sole purpose of this type is to give additional information to the compiler, so that it can prevent certain error conditions before running the application.

If you use Slick’s lifted embedding syntax for creating database actions, those actions will always have the appropriate subtype of Effect. For example, if you have a table statuses, you might implement the following StatusRepository:

class StatusRepository {

  def save(status: Status) = statuses.insertOrUpdate(status)
  def forId(statusId: StatusId) = statuses.filter(_.id === statusId).result.headOption

}

Here, the type of action returned by save is automatically inferred to be DBIOAction[Int, NoStream, Write]. The action returned by forId, on the other hand, is DBIOAction[Option[Status], NoStream, Read].
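
If you prefer to document these effects explicitly, you can also write out the inferred types in the repository – the same code as above, just annotated (Status, StatusId and the statuses table are assumed to come from the example project):

import slick.dbio.{DBIOAction, Effect, NoStream}

class StatusRepository {

  // inferred as a write action
  def save(status: Status): DBIOAction[Int, NoStream, Effect.Write] =
    statuses.insertOrUpdate(status)

  // inferred as a read action
  def forId(statusId: StatusId): DBIOAction[Option[Status], NoStream, Effect.Read] =
    statuses.filter(_.id === statusId).result.headOption

}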

Effect types of composed actions

If you compose several database actions using one of Slick’s provided combinators, the correct intersection type will automatically be inferred. For instance, a common approach for updating the state of an aggregate is to have an application service load the aggregate from a repository, perform some business logic on the aggregate, and then save the updated version back to the repository. The application service is providing the transactional boundary for changing the state of the aggregate. This may look similar to the following, hopefully with a more complex business logic than in this oversimplified example:

 def categorize(statusId: StatusId, newCategory: String) = {
  val actions = for {
    statusOpt <- statusRepository.forId(statusId)
    result <- statusOpt match {
      case Some(status) =>
        val newStatus = status.copy(category = newCategory)
        statusRepository.save(newStatus).map(_ => Right(newStatus))
      case None => DBIO.successful(Left("unknown status"))
    }
  } yield result
  actions.transactionally
}

Here, the inferred effect type of the composed action – actions.transactionally – will be:

DBIOAction[Either[String, Status], NoStream, Read with Write with Transactional]

Evaluating a database action

A DBIOAction merely describes what you want to do – in order to actually have it executed, you need to run it like this:

class StatusReadService(database: DatabaseDef) {

  def statusesByAuthor(author: String, offset: Int, limit: Int): Future[Seq[Status]] = {
    database.run {
      statuses
        .filter(_.author === author)
        .sortBy(_.createdAt.desc)
        .drop(offset)
        .take(limit)
        .result
    }
  }
}

If you have used Slick 3 before, you will probably be familiar with this. Let’s look at the signature of run as defined in DatabaseDef:

def run[R](a: DBIOAction[R, NoStream, Nothing]): Future[R]

Having an effect type of Nothing means that Slick accepts database actions with any effect type, not caring about whether it is a Write, a Read, or something else. This works because the effect type parameter is contravariant (-E <: Effect): any DBIOAction, whatever its concrete effect, is a subtype of DBIOAction[R, NoStream, Nothing].

This is perfectly fine for a generic library. However, when you are working with a master and a slave database, it means that your code will look like this:

class DatabaseModule {
  val masterDatabase: DatabaseDef = Database.forConfig("databases.master")
  val slaveDatabase: DatabaseDef = Database.forConfig("databases.slave")
}

class StatusReadService(database: DatabaseDef) {
  // ...
}

class StatusService(statusRepository: StatusRepository, database: DatabaseDef) {
  // ...
}

On the type level, you are not able to differentiate between your master and slave databases, so performing an undesired effect against one of your databases by accident is a very real issue.

Luckily, with its Effect type, Slick 3 gives you all the compile-time information you need to implement a restriction of effect types for your own application.

Restricting effect types

In order to restrict the effect types that are allowed for a certain database, we are going to attach a role (e.g. master or slave) to each database and associate certain privileges to each role that will then be checked when trying to run a database action – all at compile time.

Roles

The first thing we need to introduce is another phantom type called Role:

trait Role
trait Master extends Role
trait Slave extends Role

Like Slick’s Effect, our Role trait will never be instantiated in our application.

Privileges

Instead, Role will appear as a type parameter in yet another new phantom type:

@implicitNotFound("'${R}' database is not privileged to perform effect '${E}'.")
trait HasPrivilege[R <: Role, E <: Effect]

The HasPrivilege phantom type is meant to provide implicit evidence that a certain role R is allowed to perform the Slick effect type E.

The @implicitNotFound annotation allows us to provide a custom error message for the case that no implicit evidence of HasPrivilege can be found, where it is required.

We are going to spell out the privileges for our two roles to the compiler like this:

type ReadWriteTransaction = Read with Write with Transactional

implicit val slaveCanRead: Slave HasPrivilege Read = null
implicit val masterCanRead: Master HasPrivilege Read = null
implicit val masterCanWrite: Master HasPrivilege Write = null
implicit val masterCanPerformTransactions: Master HasPrivilege ReadWriteTransaction = null

As you can see, while we do provide evidence in the form of implicit vals, we don’t actually create any instances of HasPrivilege. Since the code that will make use of our evidence never touches these values at runtime, we can safely assign null here.

Also, please note that Slave HasPrivilege Read is just another notation for HasPrivilege[Slave, Read].

Unfortunately, we have to provide implicit evidence for every combination of effect types we want to allow.

In this example, we want to allow combining reads with writes and transactions, but it’s not allowed to combine reads and writes without also using a transaction.
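
If we did want to allow a non-transactional read-write combination as well, we would have to spell that out with one more piece of evidence, like this (hypothetical, not part of the setup above):

// hypothetical extra privilege: master may run Read with Write actions without a transaction
implicit val masterCanReadWrite: Master HasPrivilege (Read with Write) = null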

Check your privileges!

Now, in order to actually restrict the database actions that can be run according to the role of the database, we need to introduce a wrapper around Slick’s DatabaseDef – as we saw earlier, the latter does not care about the type of effect, and of course, it doesn’t know anything about our roles.

Hence, we are introducing a class DB:

class DB[R <: Role](databaseConfiguration: DatabaseConfiguration[R]) {

  private val underlyingDatabase = databaseConfiguration.createDatabase()

  def run[A, E <: Effect](a: DBIOAction[A, NoStream, E])
                         (implicit p: R HasPrivilege E): Future[A] = underlyingDatabase.run(a)
}

Our new wrapper type has a type parameter R that specifies its role, and it creates the underlying Slick DatabaseDef from an instance of DatabaseConfiguration with the same role, which we will look at in a moment.

For now, the important thing is the run method on our DB class, which looks very similar to Slick’s run method that we saw earlier.

The crucial difference is that our run method has a second type parameter E that specifies the type of effect of our action, and that our DBIOAction’s effect type parameter is that E instead of just Nothing.

Moreover, our run method has a second, implicit parameter list, with evidence of HasPrivilege[R, E], i.e. that our database role R is privileged to execute the effect E.

Instead of using DatabaseDef directly, we will now always make use of our role-annotated DB.

Providing role-annotated databases

To achieve that, we introduce a type DatabaseConfiguration which, like DB, is annotated with a role:

sealed trait DatabaseConfiguration[R <: Role] {
  def createDatabase(): DatabaseDef
}

object DatabaseConfiguration {
  object Master extends DatabaseConfiguration[Master] {
    def createDatabase() = Database.forConfig("databases.master")
  }
  object Slave extends DatabaseConfiguration[Slave] {
    def createDatabase() = Database.forConfig("databases.slave")
  }
}

The DatabaseConfiguration is the one place where we interact with the untyped outside world, reading our database connection information for the respective key, databases.master or databases.slave. Hence, the only place where we can still get things wrong is in our application.conf configuration file, if we accidentally provide the wrong database host, for example.

Our database module providing the master and slave databases to our application will now look like this:

class DatabaseModule {
  val masterDatabase: DB[Master] = new DB(DatabaseConfiguration.Master)
  val slaveDatabase: DB[Slave] = new DB(DatabaseConfiguration.Slave)
}

Unlike the previous version, the masterDatabase and slaveDatabase fields are now properly typed, and our service implementations that made use of DatabaseDef before must now explicitly pick a database with the correct Role for their purposes:

class StatusService(statusRepository: StatusRepository, database: DB[Master]) {
  // ...
}
class StatusReadService(database: DB[Slave]) {
  // ...
}

Oops, I used the slave for a write…

To verify that all of this has the desired effect, let’s make our StatusService use the slave database:

class StatusService(statusRepository: StatusRepository, database: DB[Slave]) {
  // ...
}

When we try to compile this, we will receive this nice error message from the compiler:

'com.danielwestheide.slickeffecttypes.db.Slave' database is not privileged to
perform effect 'slick.dbio.Effect.Write'.

Plain SQL queries

For plain SQL queries, of course, Slick cannot infer any effect types automatically. Hence, if you need to fall back from lifted embedding to plain SQL queries, you have to annotate the resulting database actions explicitly:

def statusesByCategory(category: String, offset: Int, limit: Int): Future[Seq[Status]] = {
  val action: SqlStreamingAction[Seq[Status], Status, Read] =
    sql"""select id, created_at, author, text, category from statuses
          where category = $category
          order by created_at desc limit $limit offset $offset""".as[Status]
  database.run(action)
}

If you don’t provide an explicit type here, the action’s effect type parameter will be inferred to be Effect, the supertype of all effect types.

Custom effect types

Since Effect is not a sealed trait, you may introduce your own effect types and prevent certain databases from performing those effects.

For example, you may want to disallow certain expensive queries to be performed against the master database. To do that, you could introduce an effect type ExpensiveRead, and only allow slave databases to run actions with that type:

trait ExpensiveRead extends Read

implicit val slaveCanDoExpensiveReads: Slave HasPrivilege ExpensiveRead = null

Just as with plain SQL queries, you can annotate your database action explicitly to be of type ExpensiveRead:

def statusesByCategory(category: String, offset: Int, limit: Int): Future[Seq[Status]] = {
  val action: SqlStreamingAction[Seq[Status], Status, ExpensiveRead] =
    sql"""select id, created_at, author, text, category from statuses
          where category = $category
          order by created_at desc limit $limit offset $offset""".as[Status]
  database.run(action)
}

If you accidentally use a DB[Master] in this read service, you will get a compile error.

Of course, once you have to start annotating your database action explicitly in order to benefit from these compile-time checks, you are prone to another source of errors. Failing to annotate your actions correctly may lead to the same kinds of runtime errors we wanted to prevent.

Conclusion

In this article I have shown how to use Slick’s Effect type, together with a few other phantom types, in order to have your compiler help you verify that you run your Slick database actions against the correct database. While it’s possible to use this technique with plain SQL queries and custom effect types, the greatest benefit will come in cases where you only use lifted embedding and the standard effect types.

Thanks a lot to @missingfaktor who collaborated with me on developing this technique on top of Slick 3 and gave a lot of valuable input.

The Neophyte’s Guide to Scala Part 16: Where to Go From Here

As I have already hinted at in the previous article, the Neophyte’s Guide to Scala is coming to an end. Over the last five months, we have delved into numerous language and library features of Scala, hopefully deepening your understanding of those features and their underlying ideas and concepts.

As such, I hope this guide has served you well as a supplement to whatever introductory resources you have been using to learn Scala, whether you attended the Scala course at Coursera or learned from a book. I have tried to cover all the quirks I stumbled over back when I learned the language – things that were only mentioned briefly or not covered at all in the books and tutorials available to me – and I hope that especially those explanations were of value to you.

As the series progressed, we ventured into more advanced territory, covering ideas like type classes and path-dependent types. While I could go on writing about more and more arcane features, I felt like this would go against the idea of this series, which is clearly targeted at aspiring neophytes.

Hence, I will conclude this series with some suggestions of where to go from here if you want more. Rest assured that I will continue blogging about Scala, just not within the context of this series.

How you want to continue your journey with Scala, of course, depends a lot on your individual preferences: Maybe you are now at a point where you would like to teach Scala to others, or maybe you are excited about Scala’s type system and would like to explore some of the language’s more arcane features by delving into type-level programming.

More often than not, a good way to really get comfortable with a new language and its whole ecosystem of libraries is to use it for creating something useful, i.e. a real application that is more than just toying around. Personally, I have also gained a lot from contributing to open source projects early on.

In the following, I will elaborate on those four paths, which, of course, are not mutually exclusive, and provide you with numerous links to highly recommendable additional resources.

Teaching Scala

Having followed this series, you should be familiar enough with Scala to be able to teach the basics. Maybe you are in a Java or Ruby shop and want to get your coworkers excited about Scala and functional programming.

Great, then why not organize a workshop? A nice way of introducing people to a new language is to not do a talk with lots of slides, but to teach by example, introducing a language in small steps by solving tiny problems together. Active participation is key!

If that’s something you’d like to do, the Scala community has you covered. Have a look at Scala Koans, a collection of small lessons, each of which provides a problem to be solved by fixing an initially failing test. The project is inspired by the Ruby Koans project and is a really good resource for teaching the language to others in small collaborative coding sessions.

Another amazing project ideally suited for workshops or other events is Scalatron, a game in which bots fight against each other in a virtual arena. Why not teach the language by developing such a bot together in a workshop, that will then fight against the computer? Once the participants are familiar enough with the language, organize a tournament, where each participant will develop their own bot.

Mastering arcane powers

We have only seen a tiny bit of what the Scala type system allows you to do. If this small hint at the high wizardry that’s possible got you excited and you want to master the arcane powers of type-level programming, a good starting resource is the blog series Type-Level Programming in Scala by Mark Harrah.

After that, I recommend having a look at Shapeless, a library in which Miles Sabin explores the limits of the Scala language in terms of generic and polytypic programming.

Creating something useful

Reading books, doing tutorials and toying around with a new language is all fine to get a certain understanding of it, but in order to become really comfortable with Scala and its paradigm and learn how to think the Scala way, I highly recommend that you start creating something useful with it – something that is clearly more than a toy application (this is true for learning any language, in my opinion).

By tackling a real-world problem and trying to create a useful and usable application, you will also get a good overview of the ecosystem of libraries and get a feeling for which of those can be of service to you in specific situations.

In order to find relevant libraries or get updates of ones you are interested in, you should subscribe to implicit.ly and regularly take a look at Scala projects on GitHub.

It’s all about the web

These days, most applications written in Scala will be some kind of server applications, often with a RESTful interface exposed via HTTP and a web frontend.

If the actor model for concurrency is a good fit for your use case and you hence choose to use the Akka toolkit, an excellent choice for exposing a REST API via HTTP is Spray Routing. This is a great tool if you don’t need a web frontend, or if you want to develop a single-page web application that will talk to your backend by means of a REST API.

If you need something less minimalistic, of course, Play, which is part of the Typesafe stack, is a good choice, especially if you seek something that is widely adopted and hence well supported.

Living in a concurrent world

If after our two parts on actors and Akka, you think that Akka is a good fit for your application, you will likely want to learn a lot more about it before getting serious with it.

While the Akka documentation is pretty exhaustive and thus serves well as a reference, I think that the best choice for actually learning Akka is Derek Wyatt’s book Akka Concurrency, a preliminary version of which is already available as a PDF.

Once you have got serious with Akka, you should definitely subscribe to Let It Crash, which provides you with news and advanced tips and tricks regarding all things Akka.

If actors are not your thing and you prefer a concurrency model allowing you to leverage the composability of Futures, your library of choice is probably Twitter’s Finagle. It allows you to modularize your application as a bunch of small remote services, with support for numerous popular protocols out of the box.

Contributing

Another really great way to learn a lot about Scala quickly is to start contributing to one or more open source projects – preferably to libraries you have been using while working on your own application.

Of course, this is nothing that is specific to Scala, but still I think it deserves to be mentioned. If you have only just learned Scala and are not using it at your day job already, it’s nearly the only choice you have to learn from other, more experienced Scala developers.

It forces you to read a lot of Scala code from other people, discovering how to do things differently, possibly more idiomatically, and you can have those experienced developers review your code in pull requests.

I have found the Scala community at large to be very friendly and helpful, so don’t shy away from contributing, even if you think you’re too much of a rookie when it comes to Scala.

While some projects might have their own way of doing things, it’s certainly a good idea to study the Scala Style Guide to get familiar with common coding conventions.

Connecting

By contributing to open source projects, you have already started connecting with the Scala community. However, you may not have the time to do that, or you may prefer other ways of connecting to like-minded people.

Try finding a local Scala user group or meetup. Scala Tribes provides an overview of Scala communities across the globe, and the Scala topic at Lanyrd keeps you up-to-date on any kind of Scala-related event, from conferences to meetups.

If you don’t like connecting in meatspace, the scala-user mailing list/Google group and the Scala IRC channel on Freenode may be good alternatives.

Other resources

Regardless of which of the paths outlined above you follow, there are a few resources I would like to recommend:

Conclusion

I hope you have enjoyed this series and that I could get you excited about Scala. While this series is coming to an end, I seriously hope that it’s just the beginning of your journey through Scala land. Let me know in the comments how your journey went so far and where you think it will go from here.

The Neophyte’s Guide to Scala Part 15: Dealing With Failure in Actor Systems

In the previous part of this series, I introduced you to the second cornerstone of Scala concurrency: The actor model, which complements the model based on composable futures backed by promises. You learnt how to define and create actors, how to send messages to them and how an actor processes these messages, possibly modifying its internal state as a result or asynchronously sending a response message to the sender.

While that was hopefully enough to get you interested in the actor model for concurrency, I left out some crucial concepts you will want to know about before starting to develop actor-based applications that consist of more than a simple echo actor.

The actor model is meant to help you achieve a high level of fault tolerance. In this article, we are going to have a look at how to deal with failure in an actor-based application, which is fundamentally different from error handling in a traditional layered server architecture.

The way you deal with failure is closely linked to some core Akka concepts and to some of the elements an actor system in Akka consists of. Hence, this article will also serve as a guide to those ideas and components.

Actor hierarchies

Before going into what happens when an error occurs in one of your actors, it’s essential to introduce one crucial idea underlying the actor approach to concurrency – an idea that is the very foundation for allowing you to build fault-tolerant concurrent applications: Actors are organized in a hierarchy.

So what does this mean? First of all, it means that every single one of your actors has got a parent actor, and that each actor can create child actors. Basically, you can think of an actor system as a pyramid of actors. Parent actors watch over their children, just as in real life, taking care that they get back on their feet if they stumble. You will see shortly how exactly this is done.

The guardian actor

In the previous article, we only had two different actors, a Barista actor and a Customer actor. I will not repeat their rather trivial implementations, but focus on how we created instances of these actor types:

import akka.actor.ActorSystem
val system = ActorSystem("Coffeehouse")
val barista = system.actorOf(Props[Barista], "Barista")
val customer = system.actorOf(Props(classOf[Customer], barista), "Customer")

As you can see, we create these two actors by calling the actorOf method defined on the ActorSystem type.

So what is the parent of these two actors? Is it the actor system? Not quite, but close. The actor system is not an actor itself, but it has got a so-called guardian actor that serves as the parent of all root-level user actors, i.e. actors we create by calling actorOf on our actor system.

There shouldn’t be a whole lot of actors in your system that are children of the guardian actor. It really makes more sense to have only a few top-level actors, each of them delegating most of the work to their children.

Actor paths

The hierarchical structure of an actor system becomes apparent when looking at the actor paths of the actors you create. These are basically URLs by which actors can be addressed. You can get an actor’s path by calling path on its ActorRef:

barista.path // => akka.actor.ActorPath = akka://Coffeehouse/user/Barista
customer.path // => akka.actor.ActorPath = akka://Coffeehouse/user/Customer

The akka protocol is followed by the name of our actor system, the name of the user guardian actor and, finally, the name we gave our actor when calling actorOf on the system. In the case of remote actors, running on different machines, you would additionally see a host and a port.
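Such a remote actor path might look something like this (purely illustrative, assuming Akka remoting with the default TCP transport):

akka.tcp://Coffeehouse@10.0.0.1:2552/user/Barista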

Actor paths can be used to look up another actor. For example, instead of requiring the barista reference in its constructor, the Customer actor could call the actorSelection method on its ActorContext, passing in a relative path to retrieve a reference to the barista:

context.actorSelection("../Barista")

However, while being able to look up an actor by its path can sometimes come in handy, it’s often a much better idea to pass in dependencies in the constructor, just as we did before. Too much intimate knowledge about where your dependencies are located in the actor system makes your system more susceptible to bugs, and it will be difficult to refactor later on.

An example hierarchy

To illustrate how parents watch over their children and what this has got to do with keeping your system fault-tolerant, I’m going to stick to the domain of the coffeehouse. Let’s give the Barista actor a child actor to which it can delegate some of the work involved in running a coffeehouse.

If we really were to model the work of a barista, we would likely give them a bunch of child actors for all the various subtasks. But to keep this article focused, we have to be a little simplistic with our example.

Let’s assume that the barista has got a register. It processes transactions, printing appropriate receipts and incrementing the day’s sales so far. Here is a first version of it:

import akka.actor._
object Register {
  sealed trait Article
  case object Espresso extends Article
  case object Cappuccino extends Article
  case class Transaction(article: Article)
}
class Register extends Actor {
  import Register._
  import Barista._
  var revenue = 0
  val prices = Map[Article, Int](Espresso -> 150, Cappuccino -> 250)
  def receive = {
    case Transaction(article) =>
      val price = prices(article)
      sender ! createReceipt(price)
      revenue += price
  }
  def createReceipt(price: Int): Receipt = Receipt(price)
}

It contains an immutable map of the prices for each article, and an integer variable representing the revenue. Whenever it receives a Transaction message, it increments the revenue accordingly and returns a printed receipt to the sender.

The Register actor, as already mentioned, is supposed to be a child actor of the Barista actor, which means that we will not create it from our actor system, but from within our Barista actor. The initial version of our actor-turned-parent looks like this:

object Barista {
  case object EspressoRequest
  case object ClosingTime
  case class EspressoCup(state: EspressoCup.State)
  object EspressoCup {
    sealed trait State
    case object Clean extends State
    case object Filled extends State
    case object Dirty extends State
  }
  case class Receipt(amount: Int)
}
class Barista extends Actor {
  import Barista._
  import Register._
  import EspressoCup._
  import context.dispatcher
  import akka.util.Timeout
  import akka.pattern.ask
  import akka.pattern.pipe
  import concurrent.duration._

  implicit val timeout = Timeout(4.seconds)
  val register = context.actorOf(Props[Register], "Register")
  def receive = {
    case EspressoRequest =>
      val receipt = register ? Transaction(Espresso)
      receipt.map((EspressoCup(Filled), _)).pipeTo(sender)
    case ClosingTime => context.stop(self)
  }
}

First off, we define the message types that our Barista actor is able to deal with. An EspressoCup can have one out of a fixed set of states, which we ensure by using a sealed trait.

The more interesting part is to be found in the implementation of the Barista class. The dispatcher, ask, and pipe imports as well as the implicit timeout are required because we make use of Akka’s ask syntax and futures in our Receive partial function: When we receive an EspressoRequest, we ask the Register actor for a Receipt for our Transaction. This is then combined with a filled espresso cup and piped to the sender, which will thus receive a tuple of type (EspressoCup, Receipt). This kind of delegating subtasks to child actors and then aggregating or amending their work is typical for actor-based applications.

Also, note how we create our child actor by calling actorOf on our ActorContext instead of the ActorSystem. By doing so, the actor we create becomes a child actor of the one who called this method, instead of a top-level actor whose parent is the guardian actor.

Finally, here is our Customer actor, which, like the Barista actor, will sit at the top level, just below the guardian actor:

object Customer {
  case object CaffeineWithdrawalWarning
}
class Customer(coffeeSource: ActorRef) extends Actor with ActorLogging {
  import Customer._
  import Barista._
  import EspressoCup._
  def receive = {
    case CaffeineWithdrawalWarning => coffeeSource ! EspressoRequest
    case (EspressoCup(Filled), Receipt(amount)) =>
      log.info(s"yay, caffeine for ${self}!")
  }
}

It is not terribly interesting for our tutorial, which focuses more on the Barista actor hierarchy. What’s new is the use of the ActorLogging trait, which allows us to write to the log instead of printing to the console.

Now, if we create our actor system and populate it with a Barista and two Customer actors, we can happily feed our two under-caffeinated addicts with a shot of black gold:

import Customer._
val system = ActorSystem("Coffeehouse")
val barista = system.actorOf(Props[Barista], "Barista")
val customerJohnny = system.actorOf(Props(classOf[Customer], barista), "Johnny")
val customerAlina = system.actorOf(Props(classOf[Customer], barista), "Alina")
customerJohnny ! CaffeineWithdrawalWarning
customerAlina ! CaffeineWithdrawalWarning

If you try this out, you should see two log messages from happy customers.

To crash or not to crash?

Of course, what we are really interested in, at least in this article, is not happy customers, but the question of what happens if things go wrong.

Our register is a fragile device – its printing functionality is not as reliable as it should be. Every so often, a paper jam causes it to fail. Let’s add a PaperJamException type to the Register companion object:

class PaperJamException(msg: String) extends Exception(msg)

Then, let’s change the createReceipt method in our Register actor accordingly:

def createReceipt(price: Int): Receipt = {
  import util.Random
  if (Random.nextBoolean())
    throw new PaperJamException("OMG, not again!")
  Receipt(price)
}

Now, when processing a Transaction message, our Register actor will throw a PaperJamException in about half of the cases.

What effect does this have on our actor system, or on our whole application? Luckily, Akka is very robust and not affected by exceptions in our code at all. What happens, though, is that the parent of the misbehaving child is notified – remember that parents are watching over their children, and this is the situation where they have to decide what to do.

Supervisor strategies

The whole act of being notified about exceptions in child actors, however, is not handled by the parent actor’s Receive partial function, as that would confound the parent actor’s own behaviour with the logic for dealing with failure in its children. Instead, the two responsibilities are clearly separated.

Each actor defines its own supervisor strategy, which tells Akka how to deal with certain types of errors occurring in its children.

There are basically two different types of supervisor strategy, the OneForOneStrategy and the AllForOneStrategy. Choosing the former means that the way you want to deal with an error in one of your children will only affect the child actor from which the error originated, whereas the latter will affect all of your child actors. Which of those strategies is best depends a lot on your individual application.

Regardless of which type of SupervisorStrategy you choose for your actor, you will have to specify a Decider, which is a PartialFunction[Throwable, Directive] – this allows you to match against certain subtypes of Throwable and decide for each of them what’s supposed to happen to your problematic child actor (or all your child actors, if you chose the all-for-one strategy).

Directives

Here is a list of the available directives:

sealed trait Directive
case object Resume extends Directive
case object Restart extends Directive
case object Stop extends Directive
case object Escalate extends Directive
  • Resume: If you choose to Resume, this probably means that you think of your child actor as a little bit of a drama queen. You decide that the exception was not so exceptional after all – the child actor or actors will simply resume processing messages as if nothing extraordinary had happened.

  • Restart: The Restart directive causes Akka to create a new instance of your child actor or actors. The reasoning behind this is that you assume that the internal state of the child/children is corrupted in some way so that it can no longer process any further messages. By restarting the actor, you hope to put it into a clean state again.

  • Stop: You effectively kill the actor. It will not be restarted.

  • Escalate: If you choose to Escalate, you probably don’t know how to deal with the failure at hand. You delegate the decision about what to do to your own parent actor, hoping they are wiser than you. If an actor escalates, they may very well be restarted themselves by their parent, as the parent will only decide about its own child actors.
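Putting the directives together with a Decider, here is a minimal sketch of how an actor might override its supervisor strategy – the Cashier actor and its exception handling below are made up purely for illustration and are not part of our coffeehouse example:

import akka.actor.{Actor, OneForOneStrategy, SupervisorStrategy}
import akka.actor.SupervisorStrategy.{Restart, Resume}

class Cashier extends Actor {
  // resume the failing child on harmless arithmetic errors, restart it on anything else
  override val supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy() {
      case _: ArithmeticException => Resume
      case _: Exception           => Restart
    }
  def receive = Actor.emptyBehavior
}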

The default strategy

You don’t have to specify your own supervisor strategy in each and every actor. In fact, we haven’t done that so far. This means that the default supervisor strategy will take effect. It looks like this:

final val defaultStrategy: SupervisorStrategy = {
  def defaultDecider: Decider = {
    case _: ActorInitializationException => Stop
    case _: ActorKilledException         => Stop
    case _: Exception                    => Restart
  }
  OneForOneStrategy()(defaultDecider)
}

This means that for exceptions other than ActorInitializationException or ActorKilledException, the respective child actor in which the exception was thrown will be restarted.

Hence, when a PaperJamException occurs in our Register actor, the supervisor strategy of the parent actor (the barista) will cause the Register to be restarted, because we haven’t overridden the default strategy.

If you try this out, you will likely see an exception stacktrace in the log, but nothing about the Register actor being restarted.

Let’s verify that this is really happening. To do so, however, you will need to learn about the actor lifecycle.

The actor lifecycle

To understand what the directives of a supervisor strategy actually do, it’s crucial to know a little bit about an actor’s lifecycle. Basically, it boils down to this: when created via actorOf, an actor is started. It can then be restarted an arbitrary number of times, in case there is a problem with it. Finally, an actor can be stopped, ultimately leading to its death.

There are numerous lifecycle hook methods that an actor implementation can override. It’s also important to know their default implementations. Let’s go through them briefly:

  • preStart: Called when an actor is started, allowing you to do some initialization logic. The default implementation is empty.
  • postStop: Empty by default, allowing you to clean up resources. Called after stop has been called for the actor.
  • preRestart: Called right before a crashed actor is restarted. By default, it stops all children of that actor and then calls postStop to allow cleaning up of resources.
  • postRestart: Called immediately after an actor has been restarted. Simply calls preStart by default.

Let’s see if our Register indeed gets restarted upon failure by simply adding some log output to its postRestart method. Make the Register type extend the ActorLogging trait and add the following method to it:

override def postRestart(reason: Throwable) {
  super.postRestart(reason)
  log.info(s"Restarted because of ${reason.getMessage}")
}

Now, if you send the two Customer actors a bunch of CaffeineWithdrawalWarning messages, you should see the one or the other of those log outputs, confirming that our Register actor has been restarted.
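A quick way to generate enough transactions, for instance from your main method, might look like this (assuming the customerJohnny and customerAlina references from above are still in scope):

// fire a bunch of orders; roughly half of the transactions will hit a paper jam
(1 to 20) foreach { _ =>
  customerJohnny ! CaffeineWithdrawalWarning
  customerAlina ! CaffeineWithdrawalWarning
}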

Death of an actor

Often, it doesn’t make sense to restart an actor again and again – think of an actor that talks to some other service over the network, and that service has been unreachable for a while. In such cases, it is a very good idea to tell Akka how often to restart an actor within a certain period of time. If that limit is exceeded, the actor is instead stopped and hence dies. Such a limit can be configured in the constructor of the supervisor strategy:

import scala.concurrent.duration._
import akka.actor.OneForOneStrategy
import akka.actor.SupervisorStrategy.Restart
OneForOneStrategy(10, 2.minutes) {
  case _ => Restart
}

The self-healing system?

So, is our system running smoothly, healing itself whenever this damn paper jam occurs? Let’s change our log output:

override def postRestart(reason: Throwable) {
  super.postRestart(reason)
  log.info(s"Restarted, and revenue is $revenue cents")
}

And while we are at it, let’s also add some more logging to our Receive partial function, making it look like this:

def receive = {
  case Transaction(article) =>
    val price = prices(article)
    sender ! createReceipt(price)
    revenue += price
    log.info(s"Revenue incremented to $revenue cents")
}

Ouch! Something is clearly not as it should be. In the log, you will see the revenue increasing, but as soon as there is a paper jam and the Register actor restarts, it is reset to 0. This is because restarting indeed means that the old instance is discarded and a new one created as per the Props we initially passed to actorOf.

Of course, we could change our supervisor strategy, so that it resumes in case of a PaperJamException. We would have to add this to the Barista actor:

val decider: PartialFunction[Throwable, Directive] = {
  case _: PaperJamException => Resume
}
override def supervisorStrategy: SupervisorStrategy =
  OneForOneStrategy()(decider.orElse(SupervisorStrategy.defaultStrategy.decider))

Now, the actor is not restarted upon a PaperJamException, so its state is not reset.

Error kernel

So we just found a nice solution to preserve the state of our Register actor, right?

Well, sometimes, simply resuming might be the best thing to do. But let’s assume that we really have to restart it, because otherwise the paper jam will not disappear. We can simulate this by maintaining a boolean flag that says if we are in a paper jam situation or not. Let’s change our Register like so:

class Register extends Actor with ActorLogging {
  import Register._
  import Barista._
  var revenue = 0
  val prices = Map[Article, Int](Espresso -> 150, Cappuccino -> 250)
  var paperJam = false
  override def postRestart(reason: Throwable) {
    super.postRestart(reason)
    log.info(s"Restarted, and revenue is $revenue cents")
  }
  def receive = {
    case Transaction(article) =>
      val price = prices(article)
      sender ! createReceipt(price)
      revenue += price
      log.info(s"Revenue incremented to $revenue cents")
  }
  def createReceipt(price: Int): Receipt = {
    import util.Random
    if (Random.nextBoolean()) paperJam = true
    if (paperJam) throw new PaperJamException("OMG, not again!")
    Receipt(price)
  }
}

Also remove the supervisor strategy we added to the Barista actor.

Now, the paper jam remains forever, until we have restarted the actor. Alas, we cannot do that without also losing important state regarding our revenue.

This is where the error kernel pattern comes in. Basically, it is just a simple guideline you should always try to follow, stating that if an actor carries important internal state, then it should delegate dangerous tasks to child actors, so as to prevent the state-carrying actor from crashing. Sometimes, it may make sense to spawn a new child actor for each such task, but that’s not a necessity.

The essence of the pattern is to keep important state as far at the top of the actor hierarchy as possible, while pushing error-prone tasks as far to the bottom of the hierarchy as possible.

Let’s apply this pattern to our Register actor. We will keep the revenue state in the Register actor, but move the error-prone behaviour of printing the receipt to a new child actor, which we appropriately enough call ReceiptPrinter. Here is the latter:

object ReceiptPrinter {
  case class PrintJob(amount: Int)
  class PaperJamException(msg: String) extends Exception(msg)
}
class ReceiptPrinter extends Actor with ActorLogging {
  import ReceiptPrinter._
  import Barista.Receipt
  import scala.util.Random
  var paperJam = false
  override def postRestart(reason: Throwable) {
    super.postRestart(reason)
    log.info(s"Restarted, paper jam == $paperJam")
  }
  def receive = {
    case PrintJob(amount) => sender ! createReceipt(amount)
  }
  def createReceipt(price: Int): Receipt = {
    if (Random.nextBoolean()) paperJam = true
    if (paperJam) throw new PaperJamException("OMG, not again!")
    Receipt(price)
  }
}

Again, we simulate the paper jam with a boolean flag and throw an exception each time someone asks us to print a receipt while in a paper jam. Other than the new message type, PrintJob, this is really just extracted from the Register type.

This is a good thing, not only because it moves this dangerous operation away from the stateful Register actor, but also because it makes our code simpler and consequently easier to reason about: The ReceiptPrinter actor is responsible for exactly one thing, and the Register actor has become simpler, too, now being only responsible for managing the revenue, delegating the remaining functionality to a child actor:

class Register extends Actor with ActorLogging {
  import Register._
  import Barista._
  import ReceiptPrinter._
  import akka.pattern.ask
  import akka.pattern.pipe
  import akka.util.Timeout
  import context.dispatcher
  import concurrent.duration._
  implicit val timeout = Timeout(4.seconds)
  var revenue = 0
  val prices = Map[Article, Int](Espresso -> 150, Cappuccino -> 250)
  val printer = context.actorOf(Props[ReceiptPrinter], "Printer")
  override def postRestart(reason: Throwable) {
    super.postRestart(reason)
    log.info(s"Restarted, and revenue is $revenue cents")
  }
  def receive = {
    case Transaction(article) =>
      val price = prices(article)
      val requester = sender
      (printer ? PrintJob(price)).map((requester, _)).pipeTo(self)
    case (requester: ActorRef, receipt: Receipt) =>
      revenue += receipt.amount
      log.info(s"revenue is $revenue cents")
      requester ! receipt
  }
}

We don’t spawn a new ReceiptPrinter for each Transaction message we get. Instead, we use the default supervisor strategy to have the printer actor restart upon failure.

One part that merits explanation is the weird way we increment our revenue: First we ask the printer for a receipt. We map the future to a tuple containing the answer as well as the requester, which is the sender of the Transaction message and pipe this to ourselves. When processing that message, we finally increment the revenue and send the receipt to the requester.

The reason for that indirection is that we want to make sure that we only increment our revenue if the receipt was successfully printed. Since it is vital to never ever modify the internal state of an actor inside of a future, we have to use this level of indirection. It helps us make sure that we only change the revenue within the confines of our actor, and not on some other thread.

Assigning the sender to a val is necessary for similar reasons: When mapping a future, we are no longer in the context of our actor either – since sender is a method, it would now likely return the reference to some other actor that has sent us a message, not the one we intended.

Now, our Register actor is safe from constantly being restarted, yay!

Of course, the very idea of having the printing of the receipt and the management of the revenue in one place is questionable. Having them together came in handy for demonstrating the error kernel pattern. Yet, it would certainly be a lot better to separate the receipt printing from the revenue management altogether, as these are two concerns that don’t really belong together.

Timeouts

Another thing that we may want to improve upon is the handling of timeouts. Currently, when an exception occurs in the ReceiptPrinter, this leads to an AskTimeoutException, which, since we are using the ask syntax, comes back to the Barista actor in an unsuccessfully completed Future.

Since the Barista actor simply maps over that future (which is success-biased) and then pipes the transformed result to the customer, the customer will also receive a Failure containing an AskTimeoutException.

The Customer didn’t ask for anything, though, so it is certainly not expecting such a message, and in fact, it currently doesn’t handle these messages. Let’s be friendly and send customers a ComebackLater message – this is a message they already understand, and it makes them try to get an espresso at a later point. This is clearly better, as the current solution means they will never know that they will not get their espresso.

To achieve this, let’s recover from AskTimeoutException failures by mapping them to ComebackLater messages. The Receive partial function of our Barista actor thus now looks like this:

def receive = {
  case EspressoRequest =>
    val receipt = register ? Transaction(Espresso)
    receipt.map((EspressoCup(Filled), _)).recover {
      case _: AskTimeoutException => ComebackLater
    } pipeTo(sender)
  case ClosingTime => context.stop(self)
}

Now, the Customer actors know they can try their luck later, and after trying often enough, they should finally get their eagerly anticipated espresso.

Death Watch

Another principle that is important in order to keep your system fault-tolerant is to keep a watch on important dependencies – dependencies as opposed to children.

Sometimes, you have actors that depend on other actors without the latter being their children. This means that they can’t be their supervisors. Yet, it is important to keep a watch on their state and be notified if bad things happen.

Think, for instance, of an actor that is responsible for database access. You will want actors that require this actor to be alive and healthy to know when that is no longer the case. Maybe you want to switch your system to a maintenance mode in such a situation. For other use cases, simply using some kind of backup actor as a replacement for the dead actor may be a viable solution.

In any case, you will need to place a watch on an actor you depend on in order to get the sad news of its passing away. This is done by calling the watch method defined on ActorContext. To illustrate, let’s have our Customer actors watch the Barista – they are highly addicted to caffeine, so it’s fair to say they depend on the barista:

class Customer(coffeeSource: ActorRef) extends Actor with ActorLogging {
  import Customer._
  import Barista._
  import EspressoCup._
  import context.dispatcher
  import concurrent.duration._

  context.watch(coffeeSource)

  def receive = {
    case CaffeineWithdrawalWarning => coffeeSource ! EspressoRequest
    case (EspressoCup(Filled), Receipt(amount)) =>
      log.info(s"yay, caffeine for ${self}!")
    case ComebackLater =>
      log.info("grumble, grumble")
      context.system.scheduler.scheduleOnce(300.millis) {
        coffeeSource ! EspressoRequest
      }
    case Terminated(barista) =>
      log.info("Oh well, let's find another coffeehouse...")
  }
}

We start watching our coffeeSource in our constructor, and we added a new case for messages of type Terminated – this is the kind of message we will receive from Akka if an actor we watch dies.

Now, if we send a ClosingTime message to the Barista and the Barista tells its context to stop itself, the Customer actors will be notified. Give it a try, and you should see their output in the log.

Instead of simply logging that we are not amused, this could just as well initiate some failover logic, for instance.

Summary

In this part of the series, which is the second one dealing with actors and Akka, you got to know some of the important components of an actor system, all while learning how to put the tools provided by Akka and the ideas behind it to use in order to make your system more fault-tolerant.

While there is still a lot more to learn about the actor model and Akka, we shall leave it at that for now, as this would go beyond the scope of this series. In the next part, which shall bring this series to a conclusion, I will point you to a bunch of Scala resources you may want to peruse to continue your journey through Scala land, and if actors and Akka got you excited, there will be something in there for you, too.

The Neophyte’s Guide to Scala Part 14: The Actor Approach to Concurrency

After several articles about how you can leverage the Scala type system to achieve a great amount of both flexibility and compile-time safety, we are now shifting back to a topic that we already tackled previously in this series: Scala’s take on concurrency.

In these earlier articles, you learned about an approach that allows you to work asynchronously by making use of composable Futures.

This approach is a very good fit for numerous problems. However, it’s not the only one Scala has to offer. A second cornerstone of Scala concurrency is the Actor model. It provides an approach to concurrency that is entirely based on passing messages between processes.

Actors are not a new idea – the most prominent implementation of this model can be found in Erlang. The Scala core library has had its own actors library for a long time, but it faces the destiny of deprecation in the coming Scala version 2.11, when it will ultimately be replaced by the actors implementation provided by the Akka toolkit, which has been a de-facto standard for actor-based development with Scala for quite a while.

In this article, you will be introduced to the rationale behind Akka’s actor model and learn the basics of coding within this paradigm using the Akka toolkit. It is by no means an in-depth discussion of everything you need to know about Akka actors, and in that, it differs from most of the previous articles in this series. Rather, the intention is to familiarize you with the Akka mindset and serve as an initial spark to get you excited about it.

The problems with shared mutable state

The predominant approach to concurrency today is that of shared mutable state – a large number of stateful objects whose state can be changed by multiple parts of your application, each running in their own thread. Typically, the code is interspersed with read and write locks, to make sure that the state can only be changed in a controlled way and prevent multiple threads from mutating it simultaneously. At the same time, we are trying hard not to lock too big a block of code, as this can drastically slow down the application.

More often than not, code like this has originally been written without having concurrency in mind at all – only to be made fit for a multi-threaded world once the need arose. While writing software without the need for concurrency like this leads to very straightforward code, adapting it to the needs of a concurrent world leads to code that is really, really difficult to read and understand.

The core problem is that low-level synchronization constructs like locks and threads are very hard to reason about. As a consequence, it’s very hard to get it right: If you can’t easily reason about what’s going on, you can be sure that nasty bugs will ensue, from race conditions to deadlocks or just strange behaviour – maybe you’ll only notice after some months, long after your code has been deployed to your production servers.

Also, working with these low-level constructs makes it a real challenge to achieve an acceptable performance.

The Actor model

The Actor programming model is aimed at avoiding all the problems described above, allowing you to write highly performant concurrent code that is easy to reason about. Unlike the widely used approach of shared mutable state, it requires you to design and write your application from the ground up with concurrency in mind – it’s not really possible to add support for it later on.

The idea is that your application consists of lots of light-weight entities called actors. Each of these actors is responsible for only a very small task, and is thus easy to reason about. A more complex business logic arises out of the interaction between several actors, delegating tasks to others or passing messages to collaborators for other reasons.

The Actor System

Actors are pitiful creatures: They cannot live on their own. Rather, each and every actor in Akka resides in and is created by an actor system. Aside from allowing you to create and find actors, an ActorSystem provides for a whole bunch of additional functionality, none of which shall concern us right now.

In order to try out the example code, please add the following resolver and dependency to your SBT-based Scala 2.10 project first:

resolvers += "Typesafe Releases" at "http://repo.typesafe.com/typesafe/releases"

libraryDependencies += "com.typesafe.akka" %% "akka-actor" % "2.2.3"

Now, let’s create an ActorSystem. We’ll need it as an environment for our actors:

import akka.actor.ActorSystem
object Barista extends App {
  val system = ActorSystem("Barista")
  system.shutdown()
}

We created a new instance of ActorSystem and gave it the name "Barista" – we are returning to the domain of coffee, which should be familiar from the article on composable futures.

Finally, we are good citizens and shut down our actor system once we no longer need it.

Defining an actor

Whether your application consists of a few dozen or a few million actors totally depends on your use case, but Akka is absolutely okay with a few million. You might be baffled by this insanely high number. It’s important to understand that there is not a one-to-one relationship between an actor and a thread. You would soon run out of memory if that were the case. Rather, due to the non-blocking nature of actors, one thread can execute many actors – switching between them depending on which of them has messages to be processed.

To understand what is actually happening, let’s first create a very simple actor, a Barista that can receive orders but doesn’t really do anything apart from printing messages to the console:

sealed trait CoffeeRequest
case object CappuccinoRequest extends CoffeeRequest
case object EspressoRequest extends CoffeeRequest

import akka.actor.Actor
class Barista extends Actor {
  def receive = {
    case CappuccinoRequest => println("I have to prepare a cappuccino!")
    case EspressoRequest => println("Let's prepare an espresso.")
  }
}

First, we define the types of messages that our actor understands. Typically, case classes are used for messages sent between actors if you need to pass along any parameters. If all the actor needs is an unparameterized message, this message is typically represented as a case object – which is exactly what we are doing here.

In any case, it’s crucial that your messages are immutable, or else bad things will happen.

Next, let’s have a look at our class Barista, which is the actual actor, extending the aptly named Actor trait. Said trait defines a method receive which returns a value of type Receive. The latter is really only a type alias for PartialFunction[Any, Unit].

Processing messages

So what’s the meaning of this receive method? The return type, PartialFunction[Any, Unit] may seem strange to you in more than one respect.

In a nutshell, the partial function returned by the receive method is responsible for processing your messages. Whenever another part of your software – be it another actor or not – sends your actor a message, Akka will eventually let it process this message by calling the partial function returned by your actor’s receive method, passing it the message as an argument.

Side-effecting

When processing a message, an actor can do whatever you want it to, apart from returning a value.

Wat!?

As the return type of Unit suggests, your partial function is side-effecting. This might come as a bit of a shock to you after we emphasized the usage of pure functions all the time. For a concurrent programming model, this actually makes a lot of sense. Actors are where your state is located, and having some clearly defined places where side-effects will occur in a controllable manner is totally fine – each message your actor receives is processed in isolation, one after another, so there is no need to reason about synchronization or locks.

Untyped

But… this partial function is not only side-effecting, it’s also as untyped as you can get in Scala, expecting an argument of type Any. Why is that, when we have such a powerful type system at our fingertips?

This has a lot to do with some important design choices in Akka that allow you to do things like forwarding messages to other actors, installing load balancing or proxying actors without the sender having to know anything about them and so on.

In practice, this is usually not a problem. With the messages themselves being strongly typed, you typically use pattern matching for processing those types of messages you are interested in, just as we did in our tiny example above.

Sometimes though, the weakly typed actors can indeed lead to nasty bugs the compiler can’t catch for you. If you have grown to love the benefits of a strong type system and think you don’t want to give that up at any cost for some parts of your application, you may want to look at Akka’s new experimental Typed Channels feature.

Asynchronous and non-blocking

I wrote above that Akka would let your actor eventually process a message sent to it. This is important to keep in mind: Sending a message and processing it is done in an asynchronous and non-blocking fashion. The sender will not be blocked until the message has been processed by the receiver. Instead, they can immediately continue with their own work. Maybe they expect to get a message from your actor in return after a while, or maybe they are not interested in hearing back from your actor at all.

What really happens when some component sends a message to an actor is that this message is delivered to the actor’s mailbox, which is basically a queue. Placing a message in an actor’s mailbox is a non-blocking operation, i.e. the sender doesn’t have to wait until the message is actually enqueued in the recipient’s mailbox.

The dispatcher will notice the arrival of a new message in an actor’s mailbox, again asynchronously. If the actor is not already processing a previous message, it is now allocated to one of the threads available in the execution context. Once the actor is done processing any previous messages, the dispatcher sends it the next message from its mailbox for processing.

The actor blocks the thread to which it is allocated for as long as it takes to process the message. While this doesn’t block the sender of the message, it means that lengthy operations degrade overall performance, as all the other actors have to be scheduled for processing messages on one of the remaining threads.

Hence, a core principle to follow for your Receive partial functions is to spend as little time inside them as possible. Most importantly, avoid calling blocking code inside your message processing code, if possible at all.

Of course, this is something you can’t prevent doing completely – the majority of database drivers nowadays is still blocking, and you will want to be able to persist data or query for it from your actor-based application. There are solutions to this dilemma, but we won’t cover them in this introductory article.

Creating an actor

Defining an actor is all well and good, but how do we actually use our Barista actor in our application? To do that, we have to create a new instance of our Barista actor. You might be tempted to do it the usual way, by calling its constructor like so:

val barista = new Barista // will throw exception

This will not work! Akka will thank you with an ActorInitializationException. The thing is, in order for the whole actor thingie to work properly, your actors need to be managed by the ActorSystem and its components. Hence, you have to ask the actor system for a new instance of your actor:

import akka.actor.{ActorRef, Props}
val barista: ActorRef = system.actorOf(Props[Barista], "Barista")

The actorOf method defined on ActorSystem expects a Props instance, which provides a means of configuring newly created actors, and, optionally, a name for your actor instance. We are using the simplest form of creating such a Props instance, providing the apply method of the companion object with a type parameter. Akka will then create a new instance of the actor of the given type by calling its default constructor.

Be aware that the type of the object returned by actorOf is not Barista, but ActorRef. Actors never communicate with one another directly and hence there are supposed to be no direct references to actor instances. Instead, actors or other components of your application acquire references to the actors they need to send messages to.

Thus, an ActorRef acts as some kind of proxy to the actual actor. This is convenient because an ActorRef can be serialized, allowing it to be a proxy for a remote actor on some other machine. For the component acquiring an ActorRef, the location of the actor – local in the same JVM or remote on some other machine – is completely transparent. We call this property location transparency.

Please note that ActorRef is not parameterized by type. Any ActorRef can be exchanged for another, allowing you to send arbitrary messages to any ActorRef. This is by design and, as already mentioned above, allows for easily modifying the topology of your actor system without having to make any changes to the senders.

Sending messages

Now that we have created an instance of our Barista actor and got an ActorRef linked to it, we can send it a message. This is done by calling the ! method on the ActorRef:

barista ! CappuccinoRequest
barista ! EspressoRequest
println("I ordered a cappuccino and an espresso")

Calling the ! method is a fire-and-forget operation: you tell the Barista that you want a cappuccino, but you don’t wait for their response. It’s the most common way of interacting with other actors in Akka. By calling this method, you tell Akka to enqueue your message in the recipient’s mailbox. As described above, this doesn’t block, and eventually the recipient actor will process your message.

Due to the asynchronous nature, the result of the above code is not deterministic. It might look like this:

I have to prepare a cappuccino!
I ordered a cappuccino and an espresso
Let's prepare an espresso.

Even though we first sent the two messages to the Barista actor’s mailbox, between the processing of the first and second message, our own output is printed to the console.

Answering to messages

Sometimes, being able to tell others what to do just doesn’t cut it. You would like to be able to answer by in turn sending a message to the sender of a message you got – all asynchronously of course.

To enable you to do that and lots of other things that are of no concern to us right now, actors have a method called sender, which returns the ActorRef of the sender of the last message, i.e. the one you are currently processing.

But how does it know about that sender? The answer can be found in the signature of the ! method, which has a second, implicit parameter list:

def !(message: Any)(implicit sender: ActorRef = Actor.noSender): Unit

When called from an actor, its ActorRef is passed on as the implicit sender argument.

Let’s change our Barista so that they immediately send a Bill to the sender of a CoffeeRequest before printing their usual output to the console:

case class Bill(cents: Int)
case object ClosingTime
class Barista extends Actor {
  def receive = {
    case CappuccinoRequest =>
      sender ! Bill(250)
      println("I have to prepare a cappuccino!")
    case EspressoRequest =>
      sender ! Bill(200)
      println("Let's prepare an espresso.")
    case ClosingTime => context.system.shutdown()
  }
}

While we are at it, we are introducing a new message, ClosingTime. The Barista reacts to it by shutting down the actor system, which they, like all actors, can access via their ActorContext.

Now, let’s introduce a second actor representing a customer:

case object CaffeineWithdrawalWarning
class Customer(caffeineSource: ActorRef) extends Actor {
  def receive = {
    case CaffeineWithdrawalWarning => caffeineSource ! EspressoRequest
    case Bill(cents) => println(s"I have to pay $cents cents, or else!")
  }
}

This actor is a real coffee junkie, so it needs to be able to order new coffee. We pass it an ActorRef in the constructor – for the Customer, this is simply its caffeineSource – it doesn’t know whether this ActorRef points to a Barista or something else. It knows that it can send CoffeeRequest messages to it, and that is all that matters to it.

Finally, we need to create these two actors and send the customer a CaffeineWithdrawalWarning to get things rolling:

val barista = system.actorOf(Props[Barista], "Barista")
val customer = system.actorOf(Props(classOf[Customer], barista), "Customer")
customer ! CaffeineWithdrawalWarning
barista ! ClosingTime

Here, for the Customer actor, we are using a different factory method for creating a Props instance: We pass in the type of the actor we want to have instantiated as well as the constructor arguments that actor takes. We need to do this because we want to pass the ActorRef of our Barista actor to the constructor of the Customer actor.

Sending the CaffeineWithdrawalWarning to the customer makes it send an EspressoRequest to the barista who will then send a Bill back to the customer. The output of this may look like this:

Let's prepare an espresso.
I have to pay 200 cents, or else!

First, while processing the EspressoRequest message, the Barista sends a message to the sender of that message, the Customer actor. However, this operation doesn’t block until the latter processes it. The Barista actor can continue processing the EspressoRequest immediately, and does this by printing to the console. Shortly after, the Customer starts to process the Bill message and in turn prints to the console.

Asking questions

Sometimes, sending an actor a message and expecting a message in return at some later time isn’t an option – the most common place where this is the case is in components that need to interface with actors, but are not actors themselves. Living outside of the actor world, they cannot receive messages.

For situations such as these, there is Akka’s ask support, which provides some sort of bridge between actor-based and future-based concurrency. From the client perspective, it works like this:

import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._
implicit val timeout = Timeout(2.second)
implicit val ec = system.dispatcher
val f: Future[Any] = barista2 ? CappuccinoRequest
f.onSuccess {
  case Bill(cents) => println(s"Will pay $cents cents for a cappuccino")
}

First, you need to import support for the ask syntax and create an implicit timeout for the Future returned by the ? method. Also, the Future needs an ExecutionContext. Here, we simply use the default dispatcher of our ActorSystem, which is conveniently also an ExecutionContext.

As you can see, the returned future is untyped – it’s a Future[Any]. This shouldn’t come as a surprise, since it’s really a received message from an actor, and those are untyped, too.
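If you know which type of message to expect, you can narrow the future yourself, for instance with mapTo – a small sketch building on the snippet above, assuming the implicit timeout and execution context are still in scope:

import scala.concurrent.Future
// mapTo fails the future with a ClassCastException if the reply is not a Bill
val bill: Future[Bill] = (barista2 ? CappuccinoRequest).mapTo[Bill]
bill.onSuccess {
  case Bill(cents) => println(s"Will pay $cents cents for a cappuccino")
}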

For the actor that is being asked, this is actually the same as sending some message to the sender of a processed message. This is why asking our Barista works out of the box without having to change anything in our Barista actor.

Once the actor being asked sends a message to the sender, the Promise belonging to the returned Future is completed.

Generally, telling is preferable to asking, because it’s more resource-sparing. Akka is not for polite people! However, there are situations where you really need to ask, and then it’s perfectly fine to do so.

Stateful actors

Each actor may maintain an internal state, but that’s not strictly necessary. Sometimes, a large part of the overall application state consists of the information carried by the immutable messages passed between actors.

An actor only ever processes one message at a time. While doing so, it may modify its internal state. This means that there is some kind of mutable state in an actor, but since each message is processed in isolation, there is no way the internal state of our actor can get messed up due to concurrency problems.

To illustrate, let’s turn our stateless Barista into an actor carrying state, by simply counting the number of orders:

class Barista extends Actor {
  var cappuccinoCount = 0
  var espressoCount = 0
  def receive = {
    case CappuccinoRequest =>
      sender ! Bill(250)
      cappuccinoCount += 1
      println(s"I have to prepare cappuccino #$cappuccinoCount")
    case EspressoRequest =>
      sender ! Bill(200)
      espressoCount += 1
      println(s"Let's prepare espresso #$espressoCount.")
    case ClosingTime => context.system.shutdown()
  }
}

We introduced two vars, cappuccinoCount and espressoCount, that are incremented with each respective order. This is actually the first time in this series that we have used a var. While vars are to be avoided in functional programming, they are really the only way to allow your actors to carry state. Since each message is processed in isolation, our above code is similar to using AtomicInteger values in a non-actor environment.

Conclusion

And here ends our introduction to the actor programming model for concurrency and how to work within this paradigm using Akka. While we have really only scratched the surface and have ignored some important concepts of Akka, I hope to have given enough of an insight into this approach to concurrency to give you a basic understanding and get you interested in learning more.

In the coming articles, I will elaborate our little example, adding some meaningful behaviour to it while introducing more of the ideas behind Akka actors, among them the question of how errors are handled in an actor system.

P.S. Please note that starting with this article I have switched to a biweekly schedule for the remaining parts of this series.

The Neophyte’s Guide to Scala Part 13: Path-dependent Types

In last week’s article, I introduced you to the idea of type classes – a pattern that allows you to design your programs to be open for extension without giving up important information about concrete types. This week, I’m going to stick with Scala’s type system and talk about one of its features that distinguishes it from most other mainstream programming languages: Scala’s form of dependent types, in particular path-dependent types and dependent method types.

One of the most widely used arguments against static typing is that “the compiler is just in the way” and that in the end, it’s all only data, so why care about building up a complex hierarchy of types?

In the end, having static types is all about preventing bugs by allowing the über-smart compiler to humiliate you on a regular basis, making sure you’re doing the right thing before it’s too late.

Using path-dependent types is one powerful way to help the compiler prevent you from introducing bugs, as it places logic that is usually only available at runtime into types.

Sometimes, accidentally introducing path-dependent types can lead to frustration, though, especially if you have never heard of them. Hence, it’s definitely a good idea to get familiar with them, whether you decide to put them to use or not.

The problem

I will start by presenting a problem that path-dependent types can help us solve: In the realm of fan fiction, the most atrocious things happen – usually, the involved characters will end up making out with each other, regardless of how inappropriate it is. There is even crossover fan fiction, in which two characters from different franchises are making out with each other.

However, elitist fan fiction writers look down on this. Surely there is a way to prevent such wrongdoing! Here is a first version of our domain model:

object Franchise {
  case class Character(name: String)
}
class Franchise(name: String) {
  import Franchise.Character
  def createFanFiction(
    lovestruck: Character,
    objectOfDesire: Character): (Character, Character) = (lovestruck, objectOfDesire)
}

Characters are represented by instances of the Character case class, and the Franchise class has a method to create a new piece of fan fiction about two characters. Let’s create two franchises and some characters:

val starTrek = new Franchise("Star Trek")
val starWars = new Franchise("Star Wars")

val quark = Franchise.Character("Quark")
val jadzia = Franchise.Character("Jadzia Dax")

val luke = Franchise.Character("Luke Skywalker")
val yoda = Franchise.Character("Yoda")

Unfortunately, at the moment we are unable to prevent bad things from happening:

starTrek.createFanFiction(lovestruck = jadzia, objectOfDesire = luke)

Horrors of horrors! Someone has created a piece of fan fiction in which Jadzia Dax is making out with Luke Skywalker. Preposterous! Clearly, we should not allow this. Your first intuition might be to somehow check at runtime that two characters making out are from the same franchise. For example, we could change the model like so:

object Franchise {
  case class Character(name: String, franchise: Franchise)
}
class Franchise(name: String) {
  import Franchise.Character
  def createFanFiction(
      lovestruck: Character,
      objectOfDesire: Character): (Character, Character) = {
    require(lovestruck.franchise == objectOfDesire.franchise)
    (lovestruck, objectOfDesire)
  }
}

Now, the Character instances have a reference to their Franchise, and trying to create a fan fiction with characters from different franchises will lead to an IllegalArgumentException (feel free to try this out in a REPL).
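With this version of the model, the characters have to be created with a reference to their franchise, and the mismatch only blows up when the method is actually called – a quick REPL sketch:

val starTrek = new Franchise("Star Trek")
val starWars = new Franchise("Star Wars")
val jadzia = Franchise.Character("Jadzia Dax", starTrek)
val luke = Franchise.Character("Luke Skywalker", starWars)
// compiles fine, but throws java.lang.IllegalArgumentException: requirement failed
starTrek.createFanFiction(lovestruck = jadzia, objectOfDesire = luke)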

Safer fiction with path-dependent types

This is pretty good, isn’t it? It’s the kind of fail-fast behaviour we have been indoctrinated with for years. However, with Scala, we can do better. There is a way to fail even faster – not at runtime, but at compile time. To achieve that, we need to encode the connection between a Character and its Franchise at the type level.

Luckily, the way Scala’s nested types work allows us to do that. In Scala, a nested type is bound to a specific instance of the outer type, not to the outer type itself. This means that if you try to use an instance of the inner type outside of the instance of the enclosing type, you will face a compile error:

class A {
  class B
  var b: Option[B] = None
}
val a1 = new A
val a2 = new A
val b1 = new a1.B
val b2 = new a2.B
a1.b = Some(b1)
a2.b = Some(b1) // does not compile

You cannot simply assign an instance of the B that is bound to a2 to the field on a1 – the one is an a2.B, the other expects an a1.B. The dot syntax represents the path to the type, going along concrete instances of other types. Hence the name, path-dependent types.

We can put these to use in order to prevent characters from different franchises making out with each other:

class Franchise(name: String) {
  case class Character(name: String)
  def createFanFictionWith(
    lovestruck: Character,
    objectOfDesire: Character): (Character, Character) = (lovestruck, objectOfDesire)
}

Now, the type Character is nested in the type Franchise, which means that it is dependent on a specific enclosing instance of the Franchise type.

Let’s create our example franchises and characters again:

val starTrek = new Franchise("Star Trek")
val starWars = new Franchise("Star Wars")

val quark = starTrek.Character("Quark")
val jadzia = starTrek.Character("Jadzia Dax")

val luke = starWars.Character("Luke Skywalker")
val yoda = starWars.Character("Yoda")

You can already see from the way our Character instances are created that their types are bound to a specific franchise. Let’s see what happens if we try to put some of these characters together:

starTrek.createFanFictionWith(lovestruck = quark, objectOfDesire = jadzia)
starWars.createFanFictionWith(lovestruck = luke, objectOfDesire = yoda)

These two compile, as expected. They are tasteless, but what can we do?

Now, let’s see what happens if we try to create some fan fiction about Jadzia Dax and Luke Skywalker:

starTrek.createFanFictionWith(lovestruck = jadzia, objectOfDesire = luke)

Et voilà: The thing that should not be does not even compile! The compiler complains about a type mismatch:

found   : starWars.Character
required: starTrek.Character
               starTrek.createFanFictionWith(lovestruck = jadzia, objectOfDesire = luke)
                                                                                   ^

This technique also works if our method is not defined on the Franchise class, but in some other module. In this case, we can make use of dependent method types, where the type of one parameter depends on a previous parameter:

def createFanFiction(f: Franchise)(lovestruck: f.Character, objectOfDesire: f.Character) =
  (lovestruck, objectOfDesire)

As you can see, the type of the lovestruck and objectOfDesire parameters depends on the Franchise instance passed to the method. Note that this only works if the instance on which other types depend is in its own parameter list.
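Used with the franchises and characters defined above, this standalone method gives us the same compile-time safety – a small sketch:

createFanFiction(starTrek)(lovestruck = quark, objectOfDesire = jadzia)
// createFanFiction(starTrek)(lovestruck = jadzia, objectOfDesire = luke) // does not compile: found starWars.Character, required starTrek.Character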

Abstract type members

Often, dependent method types are used in conjunction with abstract type members. Suppose we want to develop a hipsterrific key-value store. It will only support setting and getting the value for a key, but in a typesafe manner. Here is our oversimplified implementation:

object AwesomeDB {
  abstract class Key(name: String) {
    type Value
  }
}
import AwesomeDB.Key
class AwesomeDB {
  import collection.mutable.Map
  val data = Map.empty[Key, Any]
  def get(key: Key): Option[key.Value] = data.get(key).asInstanceOf[Option[key.Value]]
  def set(key: Key)(value: key.Value): Unit = data.update(key, value)
}

We have defined a class Key with an abstract type member Value. The methods on AwesomeDB refer to that type without ever knowing or caring about the specific manifestation of this abstract type.

We can now define some concrete keys that we want to use:

trait IntValued extends Key {
  type Value = Int
}
trait StringValued extends Key {
  type Value = String
}
object Keys {
  val foo = new Key("foo") with IntValued
  val bar = new Key("bar") with StringValued
}

Now we can set and get key/value pairs in a typesafe manner:

val dataStore = new AwesomeDB
dataStore.set(Keys.foo)(23)
val i: Option[Int] = dataStore.get(Keys.foo)
dataStore.set(Keys.foo)("23") // does not compile

Path-dependent types in practice

While path-dependent types are not necessarily omnipresent in your typical Scala code, they do have a lot of practical value beyond modelling the domain of fan fiction.

One of the most widespread uses is probably seen in combination with the cake pattern, which is a technique for composing your components and managing their dependencies, relying solely on features of the language. See the excellent articles by Debasish Ghosh and Precog’s Daniel Spiewak to learn more about both the cake pattern and how it can be improved by incorporating path-dependent types.

In general, whenever you want to make sure that objects created or managed by a specific instance of another type cannot accidentally or purposely be interchanged or mixed, path-dependent types are the way to go.

Path-dependent types and dependent method types play a crucial role in attempts to encode information that is typically only known at runtime into the type system, for instance heterogeneous lists, type-level representations of natural numbers, and collections that carry their size in their type. Miles Sabin is exploring the limits of Scala’s type system in this respect in his excellent library Shapeless.

The Neophyte’s Guide to Scala Part 12: Type Classes

After having discussed several functional programming techniques for keeping things DRY and flexible in the last two weeks, in particular function composition, partial function application, and currying, we are going to stick with the general notion of making your code as flexible as possible.

However, this time, we are not looking so much at how you can leverage functions as first-class objects to achieve this goal. Instead, this article is all about using the type system in such a manner that it’s not in the way, but rather supports you in keeping your code extensible: You’re going to learn about type classes.

You might think that this is some exotic idea without practical relevance, brought into the Scala community by some vocal Haskell fanatics. This is clearly not the case. Type classes have become an important part of the Scala standard library and even more so of many popular and commonly used third-party open-source libraries, so it’s generally a good idea to make yourself familiar with them.

I will discuss the idea of type classes, why they are useful, how to benefit from them as a client, and how to implement your own type classes and put them to use for great good.

The problem

Instead of starting off by giving an abstract explanation of what type classes are, let’s tackle this subject by means of an – admittedly simplified, but nevertheless reasonably practical – example.

Imagine that we want to write a fancy statistics library. This means we want to provide a bunch of functions that operate on collections of numbers, mostly to compute some aggregate values for them. Imagine further that we are restricted to accessing an element from such a collection by index and to using the reduce method defined on Scala collections. We impose this restriction on ourselves because we are going to re-implement a little bit of what the Scala standard library already provides – simply because it’s a nice example without many distractions, and it’s small enough for a blog post. Finally, our implementation assumes that the values we get are already sorted.

We will start with a very crude implementation of median, quartiles, iqr, and mean for numbers of type Double:

object Statistics {
  def median(xs: Vector[Double]): Double = xs(xs.size / 2)
  def quartiles(xs: Vector[Double]): (Double, Double, Double) =
    (xs(xs.size / 4), median(xs), xs(xs.size / 4 * 3))
  def iqr(xs: Vector[Double]): Double = quartiles(xs) match {
    case (lowerQuartile, _, upperQuartile) => upperQuartile - lowerQuartile
  }
  def mean(xs: Vector[Double]): Double = {
    xs.reduce(_ + _) / xs.size
  }
}

The median cuts a data set in half, whereas the lower and upper quartile (the first and third element of the tuple returned by our quartiles method) split off the lowest and highest 25 percent of the data, respectively. Our iqr method returns the interquartile range, which is the difference between the upper and lower quartile.
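
To illustrate, here is what these methods return for a small, already sorted sample, given the crude index-based implementation above:

val xs = Vector(1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0)
Statistics.median(xs)    // 5.0
Statistics.quartiles(xs) // (3.0, 5.0, 7.0)
Statistics.iqr(xs)       // 4.0
Statistics.mean(xs)      // 4.5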

Now, of course, we want to support more than just double numbers. So let’s implement all these methods again for Int numbers, right?

Well, no! First of all, that would be a tiny little bit repetitious, wouldn’t it? Also, in situations such as these, we quickly find that we cannot overload a method without some dirty tricks, because the collection’s type parameter is subject to type erasure.
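
Here is a minimal sketch of that erasure clash (hypothetical code, it does not compile):

object BrokenStatistics {
  def median(xs: Vector[Double]): Double = xs(xs.size / 2)
  def median(xs: Vector[Int]): Int = xs(xs.size / 2)
  // error: double definition, both overloads have the same parameter type after erasure
}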

If only Int and Double extended a common base class or implemented a common trait like Number! We could be tempted to change the type required and returned by our methods to that more general type. Our method signatures would look like this:

object Statistics {
  def median(xs: Vector[Number]): Number = ???
  def quartiles(xs: Vector[Number]): (Number, Number, Number) = ???
  def iqr(xs: Vector[Number]): Number = ???
  def mean(xs: Vector[Number]): Number = ???
}

Thankfully, in this case there is no such common trait, so we aren’t tempted to walk this road at all. However, in other situations such a common trait might very well exist – and using it would still be a bad idea. Not only do we drop previously available type information, we also close our API against future extensions to types whose sources we don’t control: We cannot make some new number type coming from a third party extend the Number trait.

Ruby’s answer to that problem is monkey patching, polluting the global namespace with an extension to that new type, making it act like a Number after all. Java developers who have got beaten up by the Gang of Four in their youth, on the other hand, will think that an Adapter may solve all of their problems:

object Statistics {
  trait NumberLike[A] {
    def get: A
    def plus(y: NumberLike[A]): NumberLike[A]
    def minus(y: NumberLike[A]): NumberLike[A]
    def divide(y: Int): NumberLike[A]
  }
  case class NumberLikeDouble(x: Double) extends NumberLike[Double] {
    def get: Double = x
    def minus(y: NumberLike[Double]) = NumberLikeDouble(x - y.get)
    def plus(y: NumberLike[Double]) = NumberLikeDouble(x + y.get)
    def divide(y: Int) = NumberLikeDouble(x / y)
  }
  type Quartile[A] = (NumberLike[A], NumberLike[A], NumberLike[A])
  def median[A](xs: Vector[NumberLike[A]]): NumberLike[A] = xs(xs.size / 2)
  def quartiles[A](xs: Vector[NumberLike[A]]): Quartile[A] =
    (xs(xs.size / 4), median(xs), xs(xs.size / 4 * 3))
  def iqr[A](xs: Vector[NumberLike[A]]): NumberLike[A] = quartiles(xs) match {
    case (lowerQuartile, _, upperQuartile) => upperQuartile.minus(lowerQuartile)
  }
  def mean[A](xs: Vector[NumberLike[A]]): NumberLike[A] =
    xs.reduce(_.plus(_)).divide(xs.size)
}

Now we have solved the problem of extensibility: Users of our library can pass in a NumberLike adapter for Int (which we would likely provide ourselves) or for any possible type that might behave like a number, without having to recompile the module in which our statistics methods are implemented.

However, always wrapping your numbers in an adapter is not only tiresome to write and read, it also means that you have to create a lot of instances of your adapter classes when interacting with our library.
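
A quick illustration of that burden, based on the adapter code above: every value has to be wrapped on the way in and unwrapped on the way out:

import Statistics._
val temperatures: Vector[NumberLike[Double]] =
  Vector(NumberLikeDouble(1.5), NumberLikeDouble(2.5), NumberLikeDouble(4.0), NumberLikeDouble(8.0))
val m: Double = mean(temperatures).get // unwrap the result again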

Type classes to the rescue!

A powerful alternative to the approaches outlined so far is, of course, to define and use a type class. Despite their name, type classes, one of the prominent features of the Haskell language, have nothing to do with classes in object-oriented programming.

A type class C defines some behaviour in the form of operations that must be supported by a type T for it to be a member of type class C. Whether the type T is a member of the type class C is not inherent in the type. Rather, any developer can declare that a type is a member of a type class simply by providing implementations of the operations the type must support. Now, once T is made a member of the type class C, functions that have constrained one or more of their parameters to be members of C can be called with arguments of type T.

As such, type classes allow ad-hoc and retroactive polymorphism. Code that relies on type classes is open to extension without the need to create adapter objects.

Creating a type class

In Scala, type classes can be implemented and used by a combination of techniques. It’s a little more involved than in Haskell, but also gives developers more control.

Creating a type class in Scala involves several steps. First, let’s define a trait. This is the actual type class:

object Math {
  trait NumberLike[T] {
    def plus(x: T, y: T): T
    def divide(x: T, y: Int): T
    def minus(x: T, y: T): T
  }
}

We have created a type class called NumberLike. Type classes always take one or more type parameters, and they are usually designed to be stateless, i.e. the methods defined on our NumberLike trait operate only on their passed-in arguments. In particular, where our adapter above operated on its wrapped value and one argument, the methods defined for our NumberLike type class take two parameters of type T each – the wrapped member has become the first parameter of the operations supported by NumberLike.

Providing default members

The second step in implementing a type class is usually to provide some default implementations of your type class trait in its companion object. We will see in a moment why this is generally a good strategy. First, however, let’s do this, too, by making Double and Int members of our NumberLike type class:

object Math {
  trait NumberLike[T] {
    def plus(x: T, y: T): T
    def divide(x: T, y: Int): T
    def minus(x: T, y: T): T
  }
  object NumberLike {
    implicit object NumberLikeDouble extends NumberLike[Double] {
      def plus(x: Double, y: Double): Double = x + y
      def divide(x: Double, y: Int): Double = x / y
      def minus(x: Double, y: Double): Double = x - y
    }
    implicit object NumberLikeInt extends NumberLike[Int] {
      def plus(x: Int, y: Int): Int = x + y
      def divide(x: Int, y: Int): Int = x / y
      def minus(x: Int, y: Int): Int = x - y
    }
  }
}

Two things: First, you see that the two implementations are basically identical. That is not always the case when creating members of a type class; our NumberLike type class simply covers a rather narrow domain. Later in the article, I will give examples of type classes where there is a lot less room for duplication when implementing them for multiple types. Second, please ignore the fact that we are losing precision in NumberLikeInt by doing integer division. It’s all to keep things simple for this example.

As you can see, members of type classes are usually singleton objects. Also, please note the implicit keyword before each of the type class implementations. This is one of the crucial elements for making type classes possible in Scala, making type class members implicitly available under certain conditions. More about that in the next section.

Coding against type classes

Now that we have our type class and two default implementations for common types, we want to code against this type class in our statistics module. Let’s focus on the mean method for now:

object Statistics {
  import Math.NumberLike
  def mean[T](xs: Vector[T])(implicit ev: NumberLike[T]): T =
    ev.divide(xs.reduce(ev.plus(_, _)), xs.size)
}

This may look a little intimidating at first, but it’s actually quite simple. Our method takes a type parameter T and a single parameter of type Vector[T].

The idea of constraining a parameter to types that are members of a specific type class is realized by means of the second, implicit parameter list. What does this mean? Basically, that a value of type NumberLike[T] must be implicitly available in the current scope. This is the case if an implicit value has been declared and made available in the current scope, very often by importing the package or object in which that implicit value is defined.

If and only if no other implicit value can be found, the compiler will look in the companion object of the type of the implicit parameter. Hence, as a library designer, putting your default type class implementations in the companion object of your type class trait means that users of your library can easily override these implementations with their own ones, which is exactly what you want. Users can also pass in an explicit value for an implicit parameter to override the implicit values that are in scope.
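
For example, a caller could supply the default instance (or a custom one) explicitly instead of relying on implicit resolution; a small sketch:

import Math.NumberLike
Statistics.mean(Vector(1.0, 2.0, 3.0))(NumberLike.NumberLikeDouble)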

Let’s see if the default type class implementations can be resolved:

val numbers = Vector[Double](13, 23.0, 42, 45, 61, 73, 96, 100, 199, 420, 900, 3839)
println(Statistics.mean(numbers))

Wonderful! If we try this with a Vector[String], we get an error at compile time, stating that no implicit value could be found for parameter ev: NumberLike[String]. If you don’t like this error message, you can customize it by annotating your type class trait with the @implicitNotFound annotation:

object Math {
  import annotation.implicitNotFound
  @implicitNotFound("No member of type class NumberLike in scope for ${T}")
  trait NumberLike[T] {
    def plus(x: T, y: T): T
    def divide(x: T, y: Int): T
    def minus(x: T, y: T): T
  }
}

Context bounds

A second, implicit parameter list on all methods that expect a member of a type class can be a little verbose. As a shortcut for implicit parameters with only one type parameter, Scala provides so-called context bounds. To show how they are used, we are going to implement our remaining statistics methods with context bounds instead:

object Statistics {
  import Math.NumberLike
  def mean[T](xs: Vector[T])(implicit ev: NumberLike[T]): T =
    ev.divide(xs.reduce(ev.plus(_, _)), xs.size)
  def median[T : NumberLike](xs: Vector[T]): T = xs(xs.size / 2)
  def quartiles[T: NumberLike](xs: Vector[T]): (T, T, T) =
    (xs(xs.size / 4), median(xs), xs(xs.size / 4 * 3))
  def iqr[T: NumberLike](xs: Vector[T]): T = quartiles(xs) match {
    case (lowerQuartile, _, upperQuartile) =>
      implicitly[NumberLike[T]].minus(upperQuartile, lowerQuartile)
  }
}

A context bound T : NumberLike means that an implicit value of type NumberLike[T] must be available, and so is really equivalent to having a second implicit parameter list with a NumberLike[T] in it. If you want to access that implicitly available value, however, you need to call the implicitly method, as we do in the iqr method. If your type class requires more than one type parameter, you cannot use the context bound syntax.
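
To make that equivalence concrete, here is a sketch of the mean method rewritten with a context bound, retrieving the instance via implicitly (assuming Math.NumberLike is imported as above):

def mean[T: NumberLike](xs: Vector[T]): T = {
  val ev = implicitly[NumberLike[T]]
  ev.divide(xs.reduce(ev.plus(_, _)), xs.size)
}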

Custom type class members

As a user of a library that makes use of type classes, you will sooner or later have types that you want to make members of those type classes. For instance, we might want to use the statistics library for instances of the Joda Time Duration type. To do that, we need Joda Time on our classpath, of course:

libraryDependencies += "joda-time" % "joda-time" % "2.1"

libraryDependencies += "org.joda" % "joda-convert" % "1.3"

Now we just have to create an implicitly available implementation of NumberLike (please make sure you have Joda Time on your classpath when trying this out):

object JodaImplicits {
  import Math.NumberLike
  import org.joda.time.Duration
  implicit object NumberLikeDuration extends NumberLike[Duration] {
    def plus(x: Duration, y: Duration): Duration = x.plus(y)
    def divide(x: Duration, y: Int): Duration = Duration.millis(x.getMillis / y)
    def minus(x: Duration, y: Duration): Duration = x.minus(y)
  }
}

If we import the package or object containing this NumberLike implementation, we can now compute the mean value for a bunch of durations:

import Statistics._
import JodaImplicits._
import org.joda.time.Duration._

val durations = Vector(standardSeconds(20), standardSeconds(57), standardMinutes(2),
  standardMinutes(17), standardMinutes(30), standardMinutes(58), standardHours(2),
  standardHours(5), standardHours(8), standardHours(17), standardDays(1),
  standardDays(4))
println(mean(durations).getStandardHours)

Use cases

Our NumberLike type class was a nice exercise, but Scala already ships with the Numeric type class, which allows you to call methods like sum or product on collections for whose type T a Numeric[T] is available. Another type class in the standard library that you will use a lot is Ordering, which allows you to provide an implicit ordering for your own types, used by the sorted method on Scala’s collections.
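
Here is a small sketch of both in action; the Price type is just an illustrative example:

val total = List(1, 2, 3).sum // resolved via the implicit Numeric[Int]

case class Price(amount: Double)
implicit val byAmount: Ordering[Price] = Ordering.by(_.amount)
val cheapestFirst = List(Price(3.5), Price(1.2)).sorted // uses our implicit Ordering[Price]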

There are more type classes in the standard library, but not all of them are ones you have to deal with on a regular basis as a Scala developer.

A very common use case in third-party libraries is that of object serialization and deserialization, most notably to and from JSON. By making your classes members of an appropriate formatter type class, you can customize the way your classes are serialized to JSON, XML or whatever format is currently the new black.
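
As a hypothetical sketch (not the API of any particular library), such a formatter type class might look like this:

trait JsonWriter[A] {
  def write(value: A): String
}
object JsonWriter {
  implicit object StringWriter extends JsonWriter[String] {
    def write(value: String): String = "\"" + value + "\""
  }
}
def toJson[A: JsonWriter](value: A): String = implicitly[JsonWriter[A]].write(value)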

Mapping between Scala types and ones supported by your database driver is also commonly made customizable and extensible via type classes.

Summary

Once you start to do some serious work with Scala, you will inevitably stumble upon type classes. I hope that after reading this article, you are prepared to take advantage of this powerful technique.

Scala type classes allow you to develop your Scala code in such a way that it’s open for retroactive extension while retaining as much concrete type information as possible. In contrast to approaches from other languages, they give developers full control, as default type class implementations can be overridden without much hassle, and type class implementations are not made available in the global namespace.

You will see that this technique is especially useful when writing libraries intended to be used by others, but type classes also have their use in application code to decrease coupling between modules.

The Neophyte’s Guide to Scala Part 11: Currying and Partially Applied Functions

Last week’s article was all about avoiding code duplication, either by lifting existing functions to match your new requirements or by composing them. In this article, we are going to have a look at two other mechanisms the Scala language provides in order to enable you to reuse your functions: Partial application of functions and currying.

Partially applied functions

Scala, like many other languages following the functional programming paradigm, allows you to apply a function partially. What this means is that, when applying the function, you do not pass in arguments for all of the parameters defined by the function, but only for some of them, leaving the remaining ones blank. What you get back is a new function whose parameter list only contains those parameters from the original function that were left blank.

Do not confuse partially applied functions with partially defined functions, which are represented by the PartialFunction type in Scala.
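
For contrast, here is a small example of a partially defined function, one that is only defined for some of its possible input values:

val reciprocal: PartialFunction[Int, Double] = {
  case n if n != 0 => 1.0 / n
}
reciprocal.isDefinedAt(0) // false
reciprocal(4)             // 0.25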

To illustrate how partial function application works, let’s revisit our example from last week: For our imaginary free mail service, we wanted to allow the user to configure a filter so that only emails meeting certain criteria would show up in their inbox, with all others being blocked.

Our Email case class still looks like this:

case class Email(
  subject: String,
  text: String,
  sender: String,
  recipient: String)
type EmailFilter = Email => Boolean

The criteria for filtering emails were represented by a predicate Email => Boolean, which we aliased to the type EmailFilter, and we were able to generate new predicates by calling appropriate factory functions.

Two of the factory functions from last week’s article created EmailFilter instances that checked whether the email text satisfied a given minimum or maximum length. This time we want to make use of partial function application to implement these factory functions. We want to have a general method sizeConstraint and be able to create more specific size constraints by fixing some of its parameters.

Here is our revised sizeConstraint method:

type IntPairPred = (Int, Int) => Boolean
def sizeConstraint(pred: IntPairPred, n: Int, email: Email) = pred(email.text.size, n)

We also define an alias IntPairPred for the type of predicate that checks a pair of integers (the text size of the email and the value n) and returns whether the email text size is okay, given n.

Note that unlike our sizeConstraint function from last week, this one does not return a new EmailFilter predicate, but simply evaluates all the arguments passed to it, returning a Boolean. The trick is to get such a predicate of type EmailFilter by partially applying sizeConstraint.

First, however, because we take the DRY principle very seriously, let’s define all the commonly used instances of IntPairPred. That way, when we call sizeConstraint, we don’t have to write the same anonymous functions over and over again, but can simply pass in one of these:

val gt: IntPairPred = _ > _
val ge: IntPairPred = _ >= _
val lt: IntPairPred = _ < _
val le: IntPairPred = _ <= _
val eq: IntPairPred = _ == _

Finally, we are ready to do some partial application of the sizeConstraint function, fixing its first parameter with one of our IntPairPred instances:

val minimumSize: (Int, Email) => Boolean = sizeConstraint(ge, _: Int, _: Email)
val maximumSize: (Int, Email) => Boolean = sizeConstraint(le, _: Int, _: Email)

As you can see, you have to use the placeholder _ for all parameters not bound to an argument value. Unfortunately, you also have to specify the type of those arguments, which makes partial function application in Scala a bit tedious.

The reason is that the Scala compiler cannot infer these types, at least not in all cases – think of overloaded methods where it’s impossible for the compiler to know which of them you are referring to.

On the other hand, this makes it possible to bind or leave out arbitrary parameters. For example, we can leave out the first one and pass in the size to be used as a constraint:

val constr20: (IntPairPred, Email) => Boolean = sizeConstraint(_: IntPairPred, 20, _: Email)
val constr30: (IntPairPred, Email) => Boolean = sizeConstraint(_: IntPairPred, 30, _: Email)

Now we have two functions that take an IntPairPred and an Email and compare the email text size to 20 and 30, respectively, but the comparison logic has not been specified yet, as that’s exactly what the IntPairPred is good for.

This shows that, while quite verbose, partial function application in Scala is a little more flexible than in Clojure, for example, where you have to pass in arguments from left to right, but can’t leave out any in the middle.

From methods to function objects

When doing partial application on a method, you can also decide to not bind any parameters whatsoever. The parameter list of the returned function object will be the same as for the method. You have effectively turned a method into a function that can be assigned to a val or passed around:

val sizeConstraintFn: (IntPairPred, Int, Email) => Boolean = sizeConstraint _

Producing those EmailFilters

We still haven’t got any functions that adhere to the EmailFilter type or that return new predicates of that type: sizeConstraint, minimumSize, and maximumSize do not return a new predicate, but a Boolean, as their signatures clearly show.

However, our email filters are just another partial function application away. By fixing the integer parameter of minimumSize and maximumSize, we can create new functions of type EmailFilter:

val min20: EmailFilter = minimumSize(20, _: Email)
val max20: EmailFilter = maximumSize(20, _: Email)

Of course, we could achieve the same by partially applying the constr20 function we created above:

val min20: EmailFilter = constr20(ge, _: Email)
val max20: EmailFilter = constr20(le, _: Email)

Spicing up your functions

Maybe you find partial function application in Scala a little too verbose, or simply not very elegant to write and look at. Lucky you, because there is an alternative.

As you should know by now, methods in Scala can have more than one parameter list. Let’s define our sizeConstraint method such that each parameter is in its own parameter list:

def sizeConstraint(pred: IntPairPred)(n: Int)(email: Email): Boolean =
  pred(email.text.size, n)

If we turn this into a function object that we can assign or pass around, the signature of that function looks like this:

val sizeConstraintFn: IntPairPred => Int => Email => Boolean = sizeConstraint _

Such a chain of one-parameter functions is called a curried function, so named after Haskell Curry, who re-discovered this technique and got all the fame for it. In fact, in the Haskell programming language, all functions are in curried form by default.

In our example, it takes an IntPairPred and returns a function that takes an Int and returns a new function. That last function, finally, takes an Email and returns a Boolean.

Now, if we want to bind the IntPairPred, we simply apply sizeConstraintFn, which takes exactly this one parameter and returns a new one-parameter function:

val minSize: Int => Email => Boolean = sizeConstraintFn(ge)
val maxSize: Int => Email => Boolean = sizeConstraintFn(le)

There is no need to use any placeholders for parameters left blank, because we are in fact not doing any partial function application.

We can now create the same EmailFilter predicates as we did using partial function application before, by applying these curried functions:

val min20: Email => Boolean = minSize(20)
val max20: Email => Boolean = maxSize(20)

Of course, it’s possible to do all this in one step if you want to bind several parameters at once. It just means that you immediately apply the function that was returned from the first function application, without assigning it to a val first:

val min20: Email => Boolean = sizeConstraintFn(ge)(20)
val max20: Email => Boolean = sizeConstraintFn(le)(20)

Currying existing functions

It’s not always the case that you know beforehand whether writing your functions in curried form makes sense or not – after all, the usual function application looks a little more verbose than for functions that have only declared a single parameter list for all their parameters. Also, you’ll sometimes want to work with third-party functions in curried form, even though they were written with a single parameter list.

Transforming a function with multiple parameters in one list to curried form is, of course, just another higher-order function, generating a new function from an existing one. This transformation logic is available in the form of the curried method on functions with more than one parameter. Hence, if we have a function sum, taking two parameters, we can get the curried version simply by calling its curried method:

val sum: (Int, Int) => Int = _ + _
val sumCurried: Int => Int => Int = sum.curried

If you need to do the reverse, Function.uncurried is at your fingertips: pass it the curried function to get it back in uncurried form.
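
For instance, the curried sum from above can be turned back into its original shape like this:

val sumUncurried: (Int, Int) => Int = Function.uncurried(sumCurried)
sumUncurried(1, 2) // 3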

Injecting your dependencies the functional way

To close this article, let’s have a look at what role curried functions can play in the large. If you come from the enterprisy Java or .NET world, you will be very familiar with the necessity to use more or less fancy dependency injection containers that take the heavy burden of providing all your objects with their respective dependencies off you. In Scala, you don’t really need any external tool for that, as the language already provides numerous features that make it much less of a pain to do all of this on your own.

When programming in a very functional way, you will notice that there is still a need to inject dependencies: Your functions residing in the higher layers of your application will have to make calls to other functions. Simply hard-coding the functions to call in the body of your functions makes it difficult to test them in isolation. Hence, you will need to pass the functions your higher-layer function depends on as arguments to that function.

It would not be DRY at all to always pass the same dependencies to your function when calling it, would it? Curried functions to the rescue! Currying and partial function application are among several ways of injecting dependencies in functional programming.

The following, very simplified example illustrates this technique:

case class User(name: String)
trait EmailRepository {
  def getMails(user: User, unread: Boolean): Seq[Email]
}
trait FilterRepository {
  def getEmailFilter(user: User): EmailFilter
}
trait MailboxService {
  def getNewMails(emailRepo: EmailRepository)(filterRepo: FilterRepository)(user: User) =
    emailRepo.getMails(user, true).filter(filterRepo.getEmailFilter(user))
  val newMails: User => Seq[Email]
}

We have a service that depends on two different repositories. These dependencies are declared as parameters to the getNewMails method, each in its own parameter list.

The MailboxService already has a concrete implementation of that method, but is lacking one for the newMails field. The type of that field is User => Seq[Email] – that’s the function that components depending on the MailboxService will call.

We need an object that extends MailboxService. The idea is to implement newMails by currying the getNewMails method and fixing it with concrete implementations of the dependencies, EmailRepository and FilterRepository:

object MockEmailRepository extends EmailRepository {
  def getMails(user: User, unread: Boolean): Seq[Email] = Nil
}
object MockFilterRepository extends FilterRepository {
  def getEmailFilter(user: User): EmailFilter = _ => true
}
object MailboxServiceWithMockDeps extends MailboxService {
  val newMails: (User) => Seq[Email] =
    getNewMails(MockEmailRepository)(MockFilterRepository) _
}

We can now call MailboxServiceWithMockDeps.newMails(User("daniel")) without having to specify the two repositories to be used. In a real application, of course, we would very likely not use a direct reference to a concrete implementation of the service, but have this injected, too.

This is probably not the most powerful and scalable way of injecting your dependencies in Scala, but it’s good to have this in your tool belt, and it’s a very good example of the benefits that partial function application and currying can provide in the wild. If you want to know more about this, I recommend having a look at the excellent slides for the presentation ”Dependency Injection in Scala” by Debasish Ghosh, which is also where I first came across this technique.

Summary

In this article, we discussed two additional functional programming techniques that help you keep your code free of duplication and, on top of that, give you a lot of flexibility, allowing you to reuse your functions in manifold ways. Partial function application and currying have more or less the same effect, but sometimes one of them is the more elegant solution.

In the next part of this series, we will continue to look at ways to keep things flexible, discussing the what and how of type classes in Scala.