At Dwolla, we are always trying to improve our platform by exploring different technologies that we haven’t had a chance to implement. We believe our platform is never done. Keeping this core belief in mind, I recently took some personal development time to investigate Cloudflare Workers to better understand how they could be used to improve our platform. I’d like to share some of the things I learned and show some ways you could use Workers to improve your systems!

Why Cloudflare Workers?

Cloudflare Workers is a technology that allows you to run JavaScript code in any of Cloudflare's geographically distributed data centers. You can use JavaScript to modify your site's HTTP requests or responses, make parallel requests during a call, or generate responses directly inside the Worker. Because it runs in data centers worldwide, a Worker executes on the Cloudflare edge, much closer to the requesting client than your site's origin, which allows for very fast responses. Cloudflare also offers a global, low-latency key-value store that can be paired with Workers to do some really interesting things, like ultra-fast caching, access token verification or blocking malicious users.

We are big fans of Cloudflare at Dwolla. We use many of their features to enhance and help scale our platform. After hearing about some of the speed claims, I investigated Cloudflare Workers to see how fast I could return cached Labels data from the Dwolla Platform. The answer, it turns out, is mind-bogglingly fast. But we’ll get to that later. Let’s see how you can get started writing your own Workers.

Getting Started

As I mentioned before, Cloudflare Workers are just JavaScript. To serve a static response to a request, you only need 3 lines of code:

addEventListener('fetch', event => {
  event.respondWith(new Response('hello world'))
})

To run the code and see your new blazingly fast response, you will need to deploy your Worker code to Cloudflare's edge. The Cloudflare dashboard allows you to test and edit your Workers, as well as manage all the resources tied to them, including any key-value stores you are using to manage data. The easiest way to deploy your Worker code is to use the development environment inside the Cloudflare dashboard. Cloudflare already has a great write-up of how to do that here.

Cloudflare also makes an API available to deploy both your Worker script and to interact with the resources tied to your Workers. You can also use the API to store data directly into your managed key-value stores from a process outside of your Workers.

Now that you have a basic Worker deployed to the Cloudflare edge, you probably want to give that Worker some more advanced functionality. In my case, I wanted to use this Worker to call the Dwolla Platform and store Labels balance information. One of the really cool things in Workers is how easy it is to make calls out to other APIs. The Workers use the “async/await” syntax to work with JavaScript Promises, and since they are modeled after the Service Workers available in modern web browsers, you can use the Fetch API to easily call other APIs and return the results as a Promise.

addEventListener('fetch', event => {
  event.respondWith(handleGetLabelBalance(event.request))
})

async function handleGetLabelBalance(request) {
  // The Labels endpoint URL was elided here; substitute the Dwolla
  // API resource you want to fetch.
  return fetch('', {
    method: 'GET',
    headers: {
      'Authorization': 'Bearer 678abc910def',
      'Accept': 'application/vnd.dwolla.v1.hal+json'
    }
  })
}
For me, this meant I could easily fetch any requested Labels data from the Dwolla Platform and cache it in a key-value store to quickly serve on the next request. Cloudflare has some really great resources that walk through more of these advanced scenarios, including blog posts and recipes.

Tips & Tricks

Using environment variables

If you want to connect to an authenticated API from your Worker, you might need a key and secret to call that API. You have a couple options here:

  1. Store them in a Cloudflare KV store using the API
  2. Inject them into your script using Webpack

While testing out my project, I decided to use environment variables with Webpack to add them to my script file during deployment. Adding a .env file to your repository is a good way to store environment variables specific to your project, and there are many JavaScript libraries available to load and manage those files. I created a cloudflare.env file to store my variables and made sure to add it to my .gitignore so I didn't accidentally check it in and expose my secret values. After that, I added a new plugin to my webpack config file that managed my variables and used the dotenv NPM library to require my cloudflare.env file. Here is the webpack.config.js file I used:

require('dotenv').config({ path: __dirname + '/cloudflare.env' });

const webpack = require('webpack');

module.exports = {
    entry: __dirname + '/src/index.js',
    output: {
        path: __dirname + '/dist',
        filename: 'index.js',
    },
    target: 'webworker',
    mode: 'production',
    optimization: {
        minimize: false,
    },
    performance: {
        hints: false,
    },
    plugins: [
        new webpack.DefinePlugin({
            LAST_MODIFIED: JSON.stringify(new Date().toJSON()),
            DWOLLA_CLIENT_ID: JSON.stringify(process.env.DWOLLA_KEY),
            DWOLLA_CLIENT_SECRET: JSON.stringify(process.env.DWOLLA_SECRET)
        })
    ]
};

Now I can use the variables DWOLLA_CLIENT_ID and DWOLLA_CLIENT_SECRET in my Worker file to connect to the Dwolla API and get a Label balance. One thing to note is that this process puts the environment variable values directly into the Worker script, so anyone with access to your Cloudflare dashboard can view the script source and see the values.
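As one hedged sketch of how those injected values might be used, here is a credentials-for-token exchange against Dwolla's sandbox token endpoint (`basicAuthHeader` is a helper of my own, not part of any SDK):

```javascript
// Sketch: exchange the build-time credentials for an OAuth access token.
// DWOLLA_CLIENT_ID and DWOLLA_CLIENT_SECRET are substituted into the
// bundle by webpack.DefinePlugin; the sandbox URL is shown for illustration.
function basicAuthHeader(id, secret) {
  return `Basic ${btoa(`${id}:${secret}`)}`;
}

async function getAccessToken() {
  const response = await fetch('https://api-sandbox.dwolla.com/token', {
    method: 'POST',
    headers: {
      'Authorization': basicAuthHeader(DWOLLA_CLIENT_ID, DWOLLA_CLIENT_SECRET),
      'Content-Type': 'application/x-www-form-urlencoded'
    },
    body: 'grant_type=client_credentials'
  });
  const { access_token } = await response.json();
  return access_token;
}
```

The returned token could itself be cached in a KV store until it expires, saving a round trip on subsequent requests.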

Deploying Workers in a CI pipeline

The ability to deploy a Worker in the Cloudflare dashboard is great when you are first testing some code and want to iterate quickly. However, at some point you might want to start deploying your Worker regularly when you make changes and build that deployment into a CI pipeline. Fortunately, Cloudflare makes this really easy with their Configuration API. You can upload a JavaScript file by making a PUT request to the /scripts endpoint, and then create a route that your Worker will respond to by making a POST request to the /filters endpoint. After the file is uploaded and connected to a route, your Worker will begin responding to any traffic that matches the route you created. If you make changes to your Worker’s code, simply upload the changed file to the /scripts endpoint and your changes will be globally distributed in seconds.

Here is some example JavaScript to deploy a Worker file and associate it with a specific route. We can also use the cloudflare.env file we added earlier to store our Cloudflare email address and key to avoid adding those directly to our code.

require('dotenv').config({ path: __dirname + '/cloudflare.env' });

// node-fetch and form-data provide fetch/FormData in Node.
const fs = require('fs');
const fetch = require('node-fetch');
const FormData = require('form-data');

const { AUTH_EMAIL, AUTH_KEY, ZONE_ID } = process.env;

async function deploy() {
    const formData = new FormData();
    const script = fs.readFileSync(__dirname + '/../dist/index.js', 'utf8');
    formData.append('script', script);

    let result = await fetch(`https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/workers/scripts`, {
        method: 'PUT',
        headers: {
            'X-Auth-Email': AUTH_EMAIL,
            'X-Auth-Key': AUTH_KEY
        },
        body: formData
    });

    const responseBody = await result.json();
    if (responseBody.success) {
        console.log(`Worker upload successful 🚀`);
    } else {
        return console.error('Error uploading worker!');
    }

    result = await fetch(`https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/workers/filters`, {
        method: 'POST',
        headers: {
            'X-Auth-Email': AUTH_EMAIL,
            'X-Auth-Key': AUTH_KEY,
            'Content-Type': 'application/json'
        },
        body: JSON.stringify({
            pattern: '*',
            enabled: true
        })
    });
}

deploy();

Deploying Workers is also supported by many existing tools, including the Serverless Framework, Terraform and GitHub Actions. Cloudflare also recently made Wrangler, a command-line tool for deploying Workers, widely available to help ease some of the difficulty of managing deploys in code. Read more about using Wrangler or check out the source code.

Using the Cloudflare Key-Value store

As I mentioned before, the Cloudflare Key-Value store is a globally distributed datastore you can use to power your Workers. You can read and write data from inside a Worker, or you can use the Cloudflare API to load data into your KV store and read it from inside a Worker. The KV store is designed to be eventually consistent, which means that when you write a value it might not be returned the first time you attempt to read it, but it will be returned in subsequent reads. This is because the KV stores are globally distributed, so the data you write must be replicated across the world. This design allows the KV stores to scale to handle massive traffic, but it also means your application must be prepared to handle stale or missing reads. Because of this, the KV stores are best suited to high-read, low-write workloads.

Cloudflare KV stores are organized by namespaces with a namespace being a container for key-value pairs in your account. To create a new namespace called TOKENS through the API, you make a POST request to the namespaces endpoint:

curl "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/storage/kv/namespaces" \
-H "X-Auth-Email: $AUTH_EMAIL" \
-H "X-Auth-Key: $AUTH_KEY" \
-H "Content-Type: application/json" \
--data '{"title": "TOKENS"}'

After you bind your TOKENS namespace to a Worker using the Dashboard or API, you can use the namespace in your Worker to read and write data:

const token = '12334551abc';
await TOKENS.put(token, JSON.stringify({ "expiresIn": 60 }));

let data = await TOKENS.get(token, 'json');
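Loading data into a namespace from a process outside a Worker works through the same API. A sketch, assuming Node with a fetch implementation available (`kvValueUrl` and `putValue` are helpers of my own; NAMESPACE_ID is the id returned when the namespace was created):

```javascript
// Sketch: write a key-value pair into a KV namespace from outside a Worker
// via the Cloudflare API. Account id, namespace id, email and API key all
// come from your own Cloudflare account.
const CF_API = 'https://api.cloudflare.com/client/v4';

function kvValueUrl(accountId, namespaceId, key) {
  return `${CF_API}/accounts/${accountId}/storage/kv/namespaces/` +
    `${namespaceId}/values/${encodeURIComponent(key)}`;
}

async function putValue(accountId, namespaceId, key, value, email, apiKey) {
  const response = await fetch(kvValueUrl(accountId, namespaceId, key), {
    method: 'PUT',
    headers: {
      'X-Auth-Email': email,
      'X-Auth-Key': apiKey
    },
    body: value
  });
  return (await response.json()).success;
}
```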

Using Third-party libraries

Cloudflare Workers come with some built-in JavaScript libraries that allow you to make HTTP requests and handle Promises. Often, though, your Worker will need functionality that someone else has already written, and you'll want to pull that code in from an existing package in a repository like NPM. No problem: use Webpack to bundle those libraries with your Worker! Cloudflare has a great write-up on how to integrate Webpack with your Worker. Be cautious of how many packages you include: the more you add, the longer your Worker's cold start can take. Additionally, third-party code can introduce unknown vulnerabilities if it is not managed appropriately, so be judicious when choosing which code to include.

Lessons Learned

Recapping the initial goal of my personal development time: determine whether Workers were as fast as claimed and whether they could be a valuable tool to help scale the Dwolla Platform.

The answer to both questions was a resounding YES.

I was able to retrieve cached Label data in an average of less than 50ms, and I saw cold start times for a Worker of less than 200ms. Retrieving the cached Label data included validating a Bearer token against a KV store and getting the cached Label data out of another KV store. I could take some time to restructure my cache, or batch the token validations, to drop that 50ms even further with fewer calls to the KV stores. I could also remove some third-party libraries I don't really need to get the cold start time even lower. However, for a quick proof-of-concept project, I am extremely happy and impressed with the performance I saw.
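The token check described above can be sketched roughly like this (assuming the TOKENS namespace from earlier is bound to the Worker; the helper names are my own):

```javascript
// Rough sketch of the bearer-token check: a request is authorized only if
// its token exists in the TOKENS KV namespace.
function extractBearerToken(request) {
  const header = request.headers.get('Authorization') || '';
  const match = header.match(/^Bearer\s+(.+)$/);
  return match ? match[1] : null;
}

async function isAuthorized(request) {
  const token = extractBearerToken(request);
  if (token === null) return false;
  // A null read means the token is unknown (or not yet replicated).
  return (await TOKENS.get(token)) !== null;
}
```

Because KV is eventually consistent, a freshly issued token could briefly read as null at a distant edge location, so a production version would need a fallback for that case.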


Cloudflare Workers are a powerful way to globally distribute endpoints of your API and to get your code closer to your clients. You can use Workers to handle authentication for your API, cache expensive results or serve and deliver static files quickly. I had a lot of fun exploring Workers as a proof-of-concept for the Dwolla Platform and learned some lessons that can hopefully save you some time if you decide Workers are a good option for your platform.

Start building in our sandbox for free, right now. Get a feel for how our API works before going live in production.
