Three months, low adoption, and a clearer picture of what an MCP server should actually be.
In August, we launched a Model Context Protocol (MCP) server for Dwolla: an MVP that exposes our API endpoints through MCP, with tools for retrieving customers, listing transfers, and managing funding sources. With some guidance in crafting the right query and context, it worked and was reasonably effective.
Three months later, we haven't seen the adoption we expected. But that lack of adoption taught us something valuable about what developers actually need from an MCP server.
Our MCP server provides programmatic access to Dwolla API operations. For developers already using our SDKs and API (which most Dwolla integrators are), this added a new protocol without solving a new problem.
The insight we've gained: An MCP server's value isn't in providing API access—it's in encoding expertise and solving complete workflows.
Let’s dive into this a bit more.
API Access (what we built): tools that map one-to-one onto API endpoints, such as retrieving a customer or listing transfers.
Intelligent Assistance (what we're exploring): tools that encode integration expertise and handle complete workflows end to end.
The second approach solves a problem. The first just provides access to data.
After reflecting on our experience and studying how developers actually work with Dwolla, we believe a valuable MCP server should encode expertise rather than merely expose endpoints. Here's what that looks like in practice.
Instead of forcing LLMs to orchestrate multiple low-level operations:
```
create_customer(data)
create_funding_source(customer_id, bank_data)
initiate_micro_deposits(funding_source_id)
// ...wait 1-2 days...
verify_micro_deposits(funding_source_id, amounts)
create_transfer(source, destination, amount)
```
This requires knowing the exact sequence, handling timing between steps, and managing state across multiple API calls.
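To make that concrete, here's roughly what the first half of that sequence looks like today with Dwolla's dwolla-v2 Node SDK against the sandbox. This is a sketch, and the field values are placeholders:

```ts
import { Client } from "dwolla-v2";

const dwolla = new Client({
  key: process.env.DWOLLA_APP_KEY!,
  secret: process.env.DWOLLA_APP_SECRET!,
  environment: "sandbox",
});

// Step 1: create the customer. Every later step depends on the URL
// returned in the Location header.
const customerRes = await dwolla.post("customers", {
  firstName: "Jane",
  lastName: "Merchant",
  email: "jane@example.com",
});
const customerUrl = customerRes.headers.get("location")!;

// Step 2: attach an unverified bank funding source to that customer.
const fundingRes = await dwolla.post(`${customerUrl}/funding-sources`, {
  routingNumber: "222222226", // Dwolla sandbox test routing number
  accountNumber: "123456789",
  bankAccountType: "checking",
  name: "Jane's Checking",
});
const fundingSourceUrl = fundingRes.headers.get("location")!;

// Step 3: kick off micro-deposit verification...
await dwolla.post(`${fundingSourceUrl}/micro-deposits`, {});
// ...then wait 1-2 business days, verify the observed amounts, and only
// then create the transfer. All of that state lives with the caller.
```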
Provide a single workflow-based tool that handles the complete scenario:
```
generate_funds_flow("receive-money")
→ Creates customer, verified funding source,
  initiates test transfer, simulates transfer lifecycle
```
This encapsulates the entire workflow, understanding dependencies and timing, so developers can test end-to-end flows immediately without manually creating each piece.
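Here's a sketch of how a tool like that might be registered, using the official MCP TypeScript SDK (@modelcontextprotocol/sdk). The server setup is real SDK usage; the workflow helpers are hypothetical stand-ins for the Dwolla API orchestration shown above:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "dwolla-mcp", version: "0.1.0" });

// Hypothetical helpers -- placeholders for real Dwolla API calls.
async function createSandboxCustomer() { return { id: "customer-123" }; }
async function createVerifiedFundingSource(customerId: string) { return { id: "fs-456" }; }
async function initiateTestTransfer(scenario: string, sourceId: string) { return { id: "transfer-789" }; }
async function simulateTransferLifecycle(transferId: string) { /* pending -> processed */ }

// One workflow tool in place of five low-level endpoint tools.
server.tool(
  "generate_funds_flow",
  "Set up a complete, ready-to-test funds flow in the Dwolla sandbox",
  { scenario: z.enum(["receive-money", "send-money"]) },
  async ({ scenario }) => {
    const customer = await createSandboxCustomer();
    const source = await createVerifiedFundingSource(customer.id);
    const transfer = await initiateTestTransfer(scenario, source.id);
    await simulateTransferLifecycle(transfer.id);
    return {
      content: [{
        type: "text",
        text: `Created customer ${customer.id} with verified funding source ` +
          `${source.id}; test transfer ${transfer.id} has been simulated ` +
          `through its full lifecycle.`,
      }],
    };
  },
);
```

The LLM sees one tool with one clear purpose; the sequencing, timing, and state live inside the tool where they belong.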
A tool that just calls GET /transfers/{id} isn't much help. But a tool that retrieves the transfer, checks both customers, examines funding sources, searches documentation for the specific error code and explains in plain English what went wrong and how to fix it—that's valuable.
This is where search_documentation becomes crucial. It's not just another tool; it's what enables other tools to be intelligent. Tools can search docs in real-time to provide context-aware responses.
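Continuing the sketch above (same server and imports): a diagnostic tool can pull the transfer and its failure reason (Dwolla's API exposes this at GET /transfers/{id}/failure), then reuse documentation search to translate the return code. The helper implementations here are hypothetical:

```ts
// Hypothetical helpers: a thin wrapper over the Dwolla API, and the
// documentation-search capability behind the search_documentation tool.
async function dwollaGet(path: string): Promise<any> { /* ... */ return {}; }
async function searchDocumentation(query: string): Promise<string> { /* ... */ return ""; }

server.tool(
  "diagnose_transfer",
  "Explain why a transfer failed and how to fix it",
  { transferId: z.string() },
  async ({ transferId }) => {
    const transfer = await dwollaGet(`transfers/${transferId}`);
    const failure = await dwollaGet(`transfers/${transferId}/failure`);

    // The same capability that powers search_documentation, reused
    // internally to turn a raw return code like "R01" into a fix.
    const guidance = await searchDocumentation(
      `transfer failure code ${failure.code}`,
    );

    return {
      content: [{
        type: "text",
        text: `Transfer ${transferId} (${transfer.status}) failed with ` +
          `${failure.code}: ${failure.description}. ${guidance}`,
      }],
    };
  },
);
```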
Our initial MCP server exposed 40+ tools mapping to individual API endpoints. In practice, this created problems:
Token waste: The LLM considers every available tool for each request. With 40+ options, most tokens are spent evaluating tools that will never be used for the current task.
Cognitive overload: LLMs don't want to wade through dozens of granular operations. They want to describe their goal and get results.
Increased hallucinations: More tools mean more opportunities for the LLM to choose incorrectly or combine tools in ways that don't make sense.
A focused set of 5-10 workflow-based tools eliminates this overhead. Each tool has a clear purpose, reducing ambiguity and helping both the LLM and the user understand exactly what's available.
We're working on a new approach built around a small set of intelligent, workflow-oriented tools: generate_funds_flow and search_documentation described above, plus diagnostic tools that explain failures instead of just reporting them. Each of these tools combines multiple API calls with documentation, expertise, and contextual intelligence to solve a complete problem.
We're exploring building our next iteration as a remote MCP server—meaning it runs as a service that developers connect to, rather than installing locally. This gives us several advantages:
Always Current: Documentation search reflects the latest API changes without requiring developers to update their local installation.
Contextual Intelligence: Tools can reference documentation in real-time and combine multiple data sources to provide richer, more accurate responses.
Reduced Installation Friction: No local dependencies, version conflicts, or installation troubleshooting—just connect and start building.
Improved Over Time: We can enhance tools and toolsets based on usage patterns, and every developer immediately benefits from improvements.
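As a sketch of the mechanics, assuming the MCP TypeScript SDK: a remote server is the same McpServer exposed over the SDK's Streamable HTTP transport instead of stdio. The endpoint path, port, and buildDwollaServer factory are placeholders:

```ts
import express from "express";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const app = express();
app.use(express.json());

app.post("/mcp", async (req, res) => {
  // Stateless mode: a fresh server and transport per request.
  const server = buildDwollaServer(); // hypothetical factory registering the tools above
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined, // no session tracking in stateless mode
  });
  res.on("close", () => {
    transport.close();
    server.close();
  });
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.listen(3000);
```

Developers then point their MCP client at the server's URL; there's nothing to install, version, or update locally.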
We have a direction, but we need input from real developers to prioritize what to build first and ensure we're solving actual problems. In particular, we have questions in a few areas:
About Testing & Integration:
About Debugging:
About Documentation:
About Operations:
About the Approach:
Our original MCP server provided API access. That wasn't wrong; it just wasn't enough to compel adoption.
What we're learning is that MCP's real value emerges when AI can reason about complete workflows, not just execute isolated operations. It's about enabling Claude or other AI assistants to act as intelligent integration partners—not just "here's the API," but "here's how to solve your problem using the API."
We're building tools that combine API capabilities with documentation, best practices, and integration expertise. Tools that handle complete workflows instead of individual operations. Tools that explain why something is happening, not just what is happening.
This is what we think a better MCP server looks like. But we're not building it in isolation.
Help Us Build This
If you're integrating with Dwolla, or thinking about it, we'd love your input. We're looking for developers who want to share their integration experience and get early access to what we build next.
We're not looking for extensive time commitments—even a 20-minute conversation about your integration experience would be incredibly valuable.
Interested? Email us at developers@dwolla.com
Tell us a bit about what you're building with Dwolla (or planning to build), and what would make your integration easier. We'll follow up to learn more about your needs and get you early access.
Let's build something together!