Here at Dwolla, we strongly believe in building partnerships with organizations that define the state of the art and can help make us better.
A key component of Dwolla’s service delivery strategy has been its reliance on Amazon Web Services (AWS). This year AWS held their inaugural re:Inforce conference, a security-focused spin-off of the annual re:Invent conference that attracts tens of thousands of engineers to Las Vegas every year. AWS re:Inforce was a smaller event, but still had more than 5,000 people in attendance, with just about every security vendor you could name represented. It was a great opportunity for Dwolla to dive deeply into the technologies AWS makes available to better protect the services we offer our clients.
Though attending all of the talks would take a small army, I was able to sit in on a good cross section. I had some key takeaways regarding industry trends and best practices that Dwolla will use to evaluate and evolve our service offerings. We feel strongly about sharing our lessons learned with the broader Dwolla ecosystem (and beyond), and so this post shares those takeaways.
Serving Up Serverless
Dwolla has invested heavily in serverless technologies wherever possible. Serverless is the practice of building on managed third-party services so that much of the traditional IT management burden is removed from the equation. Good examples include AWS Lambda, Google App Engine, Azure Functions and Heroku.
Serverless architectures can reduce attack surface and administrative overhead, enable infrastructure-as-code and simplify evaluating compliance with relevant requirements. It’s clear that AWS views this as the way of the future. Many of the talks at re:Inforce focused on the ways AWS has assembled pipelines out of their own services instead of standing up dedicated servers.
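To make the model concrete, here is a minimal sketch of a serverless function in the style of an AWS Lambda handler. The event shape shown is a hypothetical API Gateway-style payload, used only to illustrate that the unit of deployment is a function, not a server.

```python
import json

# Minimal AWS Lambda-style handler. Lambda passes the triggering event
# as a dict and manages all of the underlying compute for you.
def handler(event, context):
    # Pull a value out of the (hypothetical) API Gateway event.
    name = event.get("queryStringParameters", {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Everything around the function (scaling, patching, process supervision) is the provider's problem, which is where the reduced attack surface and administrative overhead come from.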
All Encrypted Everything
Much of the tech industry has realized the marketability of making security and privacy front and center in the face of all types of attacks, including those by advanced persistent threats such as national governments. Apple has made privacy a core value proposition for many of its consumer products, and companies such as Cloudflare and Google have gone out of their way to incorporate privacy protections into their services. One expression of this at AWS appears to be their intention to “encrypt everything.” AWS CISO Stephen Schmidt expressed this in the keynote address, and it was evident in many other talks.
Underpinning this effort is the open source AWS cryptography library s2n, which aims to provide formally verified cryptographic functionality for AWS services independent from the problems that have plagued OpenSSL. Similarly, in 2017 AWS announced the creation of the Nitro architecture. This is a fully in-house effort to develop trusted and verifiable hardware for use in AWS data centers. This has since allowed AWS to treat commodity mainboards and processors—which have also had a rough couple of years—as untrusted devices, and build systems such as split-key distribution channels to minimize or eliminate the opportunity for supply-chain attacks on their hardware.
Two direct expressions of the effort to encrypt everything are the new ability to encrypt Elastic Block Store (EBS) volumes by default (encryption at rest), and the encryption by default of all traffic within a Virtual Private Cloud (VPC) on supported instance types (encryption in transit). Both features remove a large burden from users of AWS’ services when it comes to properly implementing encryption and managing the associated keys.
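Default EBS encryption is a one-time, per-region account setting. A minimal sketch with boto3 might look like the following; the client is injected as a parameter so the logic can be exercised without AWS credentials (in practice you would pass `boto3.client("ec2")`).

```python
# Sketch: enable account-level EBS encryption by default via the EC2 API.
# enable_ebs_encryption_by_default applies to the current account/region.
def enable_default_ebs_encryption(ec2_client):
    """Turn on EBS encryption by default; returns the resulting setting."""
    resp = ec2_client.enable_ebs_encryption_by_default()
    return resp.get("EbsEncryptionByDefault", False)
```

After this call, newly created EBS volumes in that region are encrypted without any per-volume configuration; existing volumes are unaffected.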
A related effort is the set of improvements AWS has made to their CloudHSM service to make it highly available while preserving the high level of assurance around access and tamper resistance. This service can optionally back the AWS Key Management Service (KMS), or other applications using the CloudHSM SDK, with a FIPS 140-2 certified hardware security module (or a cluster of them). For customers with very high compliance requirements, or who wish to minimize how far they must trust AWS with their data, this can be an attractive option.
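Backing KMS with CloudHSM is done through a “custom key store.” As a rough sketch, these are the parameters KMS needs to associate a key store with a CloudHSM cluster; the values here are placeholders, and in practice the dict would be passed to boto3’s `kms.create_custom_key_store(**params)`.

```python
# Sketch: parameters for creating a KMS custom key store backed by a
# CloudHSM cluster. All argument values below are placeholders.
def custom_key_store_params(name, cluster_id, trust_anchor_pem, kmsuser_password):
    return {
        "CustomKeyStoreName": name,
        "CloudHsmClusterId": cluster_id,
        "TrustAnchorCertificate": trust_anchor_pem,  # the cluster's issuing CA cert
        "KeyStorePassword": kmsuser_password,        # CloudHSM "kmsuser" credential
    }
```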
Another pair of features may simplify the task of properly segmenting systems within the AWS environment to compartmentalize risk and interactive (i.e. human) access. Creating logical separation between systems and services of different sensitivities (e.g. Development vs. Production, IT vs. Accounting, HIPAA vs. PCI DSS, Client A vs. Client B) is a long-standing network security practice, but it has been difficult to do cleanly in cloud services without creating a lot of administrative overhead. AWS is built around Accounts, each of which may contain large numbers of systems and services in regions around the world, with each region split into availability zones that maintain diversity of location and connectivity.
AWS has now created the concept of Organizations that will allow customers to organize multiple accounts into a hierarchical structure that is centrally managed and monitored. AWS Control Tower automates the process of setting up new accounts with pre-defined, “baked-in” security controls and default service quota request templates. Through the use of Service Control Policy whitelists and blacklists, customers can control which capabilities are available at each level of the organization hierarchy. For example, a policy might block the use of non-US regions, or allow only the services within the scope of AWS’ PCI DSS report.
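The region-restriction example can be sketched as a Service Control Policy document. The region list below is illustrative, not exhaustive, and a production policy would typically also exempt global services; the document would be attached through the Organizations API (e.g. `organizations.create_policy(Content=json.dumps(...), Type="SERVICE_CONTROL_POLICY", ...)`).

```python
# Sketch of a Service Control Policy denying requests outside US regions.
# The region list and Sid are illustrative only.
REGION_RESTRICTION_SCP = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonUSRegions",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": [
                        "us-east-1", "us-east-2", "us-west-1", "us-west-2",
                    ]
                }
            },
        }
    ],
}
```

Because SCPs apply at the organizational-unit level, attaching this once covers every account beneath that point in the hierarchy.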
Other policies could enforce the presence of security or IT management access in all accounts, or prevent other modifications to the account that wouldn’t comply with organizational policies and standards. This also allows for the centralized control of CloudTrail and CloudWatch configurations, which can enforce logging policies across all accounts and ensure comprehensive collection.
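An organization-wide trail is the mechanism behind that centralized CloudTrail control. As a sketch, these are the key parameters for `cloudtrail.create_trail(**params)`; the trail and bucket names are placeholders, and the S3 bucket must already exist with a policy allowing CloudTrail to write to it.

```python
# Sketch: parameters for an organization-wide, multi-region CloudTrail.
# Name and bucket are placeholders.
def org_trail_params(trail_name, bucket_name):
    return {
        "Name": trail_name,
        "S3BucketName": bucket_name,
        "IsMultiRegionTrail": True,       # capture activity in every region
        "IsOrganizationTrail": True,      # apply to all accounts in the org
        "EnableLogFileValidation": True,  # tamper-evident log digests
    }
```

Created in the management account, such a trail logs activity from every member account, which is what makes comprehensive collection enforceable rather than aspirational.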
Infrastructure as Code and Beyond
A major shift over the past five years in many organizations has been towards infrastructure as code. Enabled by virtualization and cloud services, this is the practice of automating the creation of systems and services through configuration files that are handled, versioned and reviewed just like application code. This has been a core part of Dwolla’s product strategy.
AWS supports this for many of their services through the use of CloudFormation and Config. However, AWS is now pushing this concept further, towards the idea of governance, risk and compliance as-code.
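As an illustration of the idea, here is a minimal CloudFormation template expressed as a Python dict: a single S3 bucket with default encryption declared. The logical ID is made up for the example; checked into version control, a template like this is reviewed and versioned exactly like application code.

```python
import json

# Sketch: a minimal CloudFormation template declaring an encrypted
# S3 bucket. "AuditLogBucket" is an illustrative logical ID.
TEMPLATE = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AuditLogBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "BucketEncryption": {
                    "ServerSideEncryptionConfiguration": [
                        {"ServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
                    ]
                }
            },
        }
    },
}

print(json.dumps(TEMPLATE, indent=2))  # the document you would deploy as a stack
```

AWS Config then closes the loop by continuously checking that deployed resources still match rules like “all buckets are encrypted,” which is the bridge from infrastructure-as-code to compliance-as-code.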
AWS presented jointly with the CEO of their external auditor, Coalfire, on how AWS is moving their risk and compliance programs away from static analysis and sample-based testing and towards comprehensive, continuous, automated testing. Doing so requires a high level of consistency throughout the environment in terms of the ability to pull relevant data and analyze it in an automated fashion, along with advanced levels of data management and analysis.
AWS discussed the ways they are using data collected from their environment to establish provable security—the process for formally and mathematically showing that they are 100% in compliance with stated objectives. For comparison, the current state of the art in auditing is based around inspecting statistically-significant samples of the overall population and drawing inferences from that testing.
In another session (not yet posted to YouTube), AWS discussed how they are using graph databases to store information about both technical and non-technical (e.g. employee training) elements of their compliance program to track compliance over time and identify outliers requiring attention.
Lastly, we had a chance to see the additional options AWS is introducing for visibility into the current state of the environment. There are many ways to approach this problem, even using only AWS services, and that can itself be part of the challenge: understanding where to start and how to build a pipeline to collect, transform and analyze data.
Several services were repeatedly mentioned as crucial to these efforts, however. CloudTrail collects audit logs of all activity taking place in the AWS environment, whether by users or service accounts. CloudWatch collects logs, metrics and events relevant to system and application performance. GuardDuty analyzes activity in an AWS account and flags anomalous activity that might be malicious. Macie detects potentially sensitive information within AWS accounts. All of these now feed into AWS Security Hub, which aggregates this information, flags high-criticality issues for follow-up, and can integrate with external systems for notification and incident tracking.
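The triage step at the end of that pipeline can be sketched as a pure function over findings in the AWS Security Finding Format, where severity is carried in a `Severity.Label` field. Real findings would come from Security Hub (e.g. `securityhub.get_findings()`); the sample data here is made up.

```python
# Sketch: filter Security Hub findings (AWS Security Finding Format)
# down to the high-criticality issues that warrant immediate follow-up.
def high_criticality(findings):
    return [
        f for f in findings
        if f.get("Severity", {}).get("Label") in ("HIGH", "CRITICAL")
    ]
```

A function like this would sit between collection and whatever notification or ticketing system handles the follow-up.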
Dwolla makes use of a combination of services from AWS and Cloudflare to provide a secure and high-performance platform for our customers. Both have demonstrated ongoing commitments, not only to service quality, but to protecting security and privacy on the internet. We look forward to continuing to partner with companies like these going forward and making use of exciting new services and technologies as they become available and fit within our risk and compliance profiles.