At Dwolla, the platform team dedicates significant time to writing tooling that makes our teams’ lives easier. What’s particularly exciting is when those tools have wider application beyond the walls of Dwolla. Because our team works primarily with Scala microservices deployed using Docker, we’ve written several SBT plugins and helper libraries. These plugins help us manage our services, both in production and locally during development.

All of the projects described below have been released on GitHub using the MIT License—pull requests are welcome! Each project contains a Bintray badge in its README, linked to where its artifacts have been published in one of Dwolla’s Bintray repositories.

Docker Containers

Our Scala-based microservices are typically built into Docker images for deployment using the SBT Native Packager Docker plugin. This plugin builds the app, creates a Docker image containing it and its dependencies, and publishes the image to our Docker registry.
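As a minimal sketch of that setup (the registry hostname and base image below are placeholders, not Dwolla's actual configuration), enabling SBT Native Packager's Docker support in build.sbt looks something like:

```scala
// build.sbt — minimal sketch; the registry hostname and base image are placeholders
enablePlugins(JavaAppPackaging, DockerPlugin)

dockerRepository := Some("docker.example.com") // hypothetical private registry
dockerBaseImage := "openjdk:8-jre"
```

With this in place, sbt docker:publish builds the app, assembles the image, and pushes it to the configured registry.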

Once the image is published, we have other tooling that defines containers in one of our operational environments. It’s also useful to run the containers locally, whether to test changes to the service itself or to develop against something that depends on it. Typically, our README files would contain an example docker create command so developers could create and run the container locally.

To automate that step and keep the configuration in a common location, our Docker containers plugin adds SBT settings and tasks to control things like port publishing, memory limits, and environment variables. Now, instead of running a separate docker create command, developers can simply add the docker:startLocal task to their SBT invocation to build the service, run its test suite, and start a local container, all in a single command.

sbt clean test docker:startLocal


This plugin currently supports the most common Docker container options needed for our services, but if it’s missing something you need, feel free to send a pull request!
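The exact configuration keys are defined by the plugin itself; as an illustrative sketch only (the setting names below are hypothetical — check the plugin's README for the real keys), the build.sbt configuration might look like:

```scala
// build.sbt — illustrative sketch; these setting names are hypothetical,
// see the plugin's README for the actual keys
containerPorts := Seq(8080 -> 8080)                          // host -> container port mapping
containerMemoryLimit := Some("512m")                         // passed through to the Docker daemon
containerEnvironmentVariables := Map("LOG_LEVEL" -> "DEBUG") // injected into the container
```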

CloudFormation Stacks

We treat infrastructure as code, so we use Amazon CloudFormation to deploy microservices and AWS Lambda functions.

Our Scala-based Lambda functions use a Scala CloudFormation template generator to create CloudFormation templates that define the Lambda function. Our CloudFormation Stack deployer plugin will read the generated JSON template and create or update a CloudFormation stack.

sbt clean test publish stack/deploy


We typically organize projects with a primary SBT project at the root of a repository, with a sub-project named stack containing the CloudFormation definition. This sub-project would have the CloudFormation plugin enabled, so it could be triggered using the stack/deploy syntax described above.

lazy val root = (project in file("."))
  .settings(
    name := "test-project"
  )
  .settings(commonSettings: _*)

lazy val stack: Project = (project in file("stack"))
  .settings(commonSettings: _*)
  .settings(
    resolvers ++= Seq(Resolver.jcenterRepo),
    libraryDependencies ++= Seq(
      "com.monsanto.arch" %% "cloud-formation-template-generator" % "3.3.2"
    ),
    stackName := (name in root).value,
    stackParameters := List(
      "S3Bucket" → (s3Bucket in root).value,
      "S3Key" → (s3Key in root).value
    )
  )
  .settings(Defaults.itSettings: _*)


The stack sub-project needs to define a main object that accepts a command-line argument containing a filename to which the CloudFormation template JSON should be written. The plugin will run the stack app, generating the JSON file, and then submit the JSON to CloudFormation as needed. (This workaround is necessary because currently the CloudFormation template generator only supports Scala 2.11, and SBT only supports 2.10. Hopefully this will be resolved in a future version.)

For example:

package com.dwolla.cloudformation.example

import java.nio.charset.StandardCharsets
import java.nio.file.{Files, Paths}

import com.monsanto.arch.cloudformation.model.Template
import spray.json._

import scala.language.implicitConversions

object WriteTemplate extends App {
  val template = Stack.template()
  val outputFilename = Paths.get(args(0))
  Files.write(outputFilename, template.toJson.prettyPrint.getBytes(StandardCharsets.UTF_8))
}

object Stack {
  def template(): Template = ???
}


Using this configuration, the root project can be published (using our S3 publisher, described below) and deployed using CloudFormation. The S3 bucket and key to which the artifact was published are passed to the CloudFormation stack as parameters.

Publishing Artifacts to S3

We use the SBT Assembly plugin to build jars that can run on AWS Lambda. These jars need to be uploaded to S3 to be referenced by the CloudFormation stacks that define our Lambda functions, so we wrote this S3 Publisher plugin to facilitate that.

The plugin reads the S3 bucket name from an environment variable, which we use to override the default value on our CI servers. That way, developers can publish development versions of their Lambda functions for use in our AWS Sandbox, but are prevented from publishing artifacts for production use. Only the CI server has the authority to write to the production artifact bucket.
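A minimal sketch of that pattern (the environment variable, setting key, and bucket names here are illustrative, not the plugin's actual identifiers):

```scala
// build.sbt — illustrative sketch; the variable and bucket names are hypothetical.
// Developers fall back to the sandbox bucket; CI sets the environment variable
// to redirect publishing to the production artifact bucket.
s3Bucket := sys.env.getOrElse("ARTIFACT_S3_BUCKET", "sandbox-artifacts-bucket")
```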

Berkshelf Chef Dependencies

Most of our services run on one of our Amazon ECS Docker clusters, but occasionally we need to run services on standalone instances. These instances may require additional configuration, which we manage with Chef. To publish the Chef cookbooks and their dependencies, these services require a Berkshelf berks package step. The resulting file is then uploaded to S3, where our service provisioner can locate it.

To automate this and tie it into a single build tool, we wrote the Berkshelf publisher plugin. The plugin adds berks:package and berks:publish tasks to SBT, along with appropriate configuration settings.

Each commit is verified by calling

sbt clean test docker:package berks:package


which runs all the tests, verifies that a Docker container can be assembled, and confirms that the Chef dependencies can be packaged.

Services are published using

sbt clean test docker:publish berks:publish


Scala AWS Utilities

We published the Scala AWS utility code used by the CloudFormation and S3 plugins as a separate artifact, in case it might be useful outside of those plugins. Currently, it contains the logic to create or update a CloudFormation stack (depending on whether one by the given name already exists), and an AsyncHandler implementation that completes a Scala Future with the results of an AWS SDK method call.
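The Future-completing handler pattern can be sketched as follows. To keep the snippet self-contained, AsyncCallback below is a simplified stand-in for the AWS SDK's com.amazonaws.handlers.AsyncHandler interface, and the class name is hypothetical rather than the library's actual one:

```scala
import scala.concurrent.{Future, Promise}

// Simplified stand-in for com.amazonaws.handlers.AsyncHandler[Req, Res],
// so this sketch compiles without the AWS SDK on the classpath
trait AsyncCallback[Req, Res] {
  def onSuccess(request: Req, result: Res): Unit
  def onError(exception: Exception): Unit
}

// Completes a Scala Promise when the SDK invokes the callback,
// exposing the eventual result (or failure) as a Future
class PromiseHandler[Req, Res] extends AsyncCallback[Req, Res] {
  private val promise = Promise[Res]()
  val future: Future[Res] = promise.future
  def onSuccess(request: Req, result: Res): Unit = promise.trySuccess(result)
  def onError(exception: Exception): Unit = promise.tryFailure(exception)
}
```

In practice, an instance of the handler is passed to the SDK's async client method, and callers map over handler.future instead of blocking on the SDK's Java Future.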

Scala Test Utilities

While building our Scala microservices, we have accumulated a number of Scala test utilities as well. These are also published as a separate artifact. Here are some of the features:

  1. Specs2 matcher for checking if scala.concurrent.blocking was invoked by the code under test
  2. Specs2 matcher for checking Promise completion
  3. Specs2 matcher for slices of collections
  4. Helper code to set and replace system properties during test runs
  5. Akka test support, to help ensure the actor system is terminated after each test, and that it is named in a way that is easy to identify in logging output
  6. An exception that will not output a stack trace and clearly indicates that it is intentionally thrown by a test case, to help clarify test output
  7. A Closeable that tracks whether or not it has been closed, along with a corresponding Specs2 matcher

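The stack-trace-free exception in item 6 can be approximated with the standard library's scala.util.control.NoStackTrace mixin; the class name and message here are hypothetical, not the utility's actual identifiers:

```scala
import scala.util.control.NoStackTrace

// Hypothetical name — suppresses the stack trace so test output stays readable,
// and its message makes clear the failure was thrown deliberately by a test case
class IntentionalTestException
    extends Exception("intentionally thrown by a test case") with NoStackTrace
```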
Check out the project’s README, which contains several detailed examples.

Using these tools, Dwolla manages several hundred CloudFormation stacks, defining several dozen microservices and AWS Lambda functions running across multiple operational environments. If you use Scala for your projects, hopefully your team will find these tools useful as well. Check out the GitHub repositories for the source code, or pull in the artifacts from Bintray!

To learn more, follow Brian on Twitter.

