What We Learned Building a Dwolla MCP Server (And What We’re Exploring)
Three months, low adoption, and a clearer picture of what an MCP server should actually be.
In August, we launched a Model Context Protocol (MCP) server for Dwolla: an MVP that exposes our API endpoints through MCP, with tools for retrieving customers, listing transfers and managing funding sources. Given a carefully crafted query with the right context, it worked and was reasonably effective.
Three months later, we haven't seen the adoption we expected. But that lack of adoption taught us something valuable about what developers actually need from an MCP server.
What We Built vs. What Developers Need
Our MCP server provides programmatic access to Dwolla API operations. For developers already using our SDKs and API (which most Dwolla integrators are), this added a new protocol without solving a new problem.
The insight we've gained: An MCP server's value isn't in providing API access—it's in encoding expertise and solving complete workflows.
Let’s dive into this a bit more.
The Difference Between Access and Assistance
API Access (what we built):
- Tool: retrieve_customer
- Returns: Customer data
- The LLM still needs to: Know what to look for, understand the response and determine next steps with limited context
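For concreteness, here is roughly what that thin-wrapper shape looks like. This is a minimal sketch using the TypeScript MCP SDK; the dwolla object is a placeholder for whatever authenticated Dwolla API client you already use, not a real import.

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

// Placeholder for an authenticated Dwolla API client (e.g. the dwolla-v2 SDK).
declare const dwolla: {
  get(path: string): Promise<any>;
  post(path: string, body?: unknown): Promise<any>;
};

const server = new McpServer({ name: "dwolla", version: "0.1.0" });

// One tool per endpoint: the raw response goes back to the LLM,
// which is left to interpret it and figure out the next step.
server.tool("retrieve_customer", { customerId: z.string() }, async ({ customerId }) => {
  const customer = await dwolla.get(`customers/${customerId}`);
  return { content: [{ type: "text", text: JSON.stringify(customer) }] };
});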
Intelligent Assistance (what we're exploring):
- Tool: simulate_transfer_lifecycle
- Input: transfer type, amount, source, destination
- Returns:
  - Each state transition, with timing
  - Simulated processing delays
  - Optionally forced outcomes (success, failure, cancellation)
  - The webhook events fired at each step
  - A plain-English explanation of what's happening "behind the scenes"
- Combines: Multiple API calls + documentation + expertise + actionable guidance
The second approach solves a problem. The first just provides access to data.
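Here is a sketch of what the second shape might look like, reusing the server and the placeholder dwolla client from the previous snippet. The outcome flag, the webhook names and the narration are illustrative, not a finished design.

server.tool(
  "simulate_transfer_lifecycle",
  {
    amount: z.string(),
    source: z.string(),
    destination: z.string(),
    outcome: z.enum(["success", "failure", "cancellation"]),
  },
  async ({ amount, source, destination, outcome }) => {
    const log: string[] = [];

    // One of several underlying API calls the tool makes on the LLM's behalf.
    const transfer = await dwolla.post("transfers", {
      _links: {
        source: { href: source },
        destination: { href: destination },
      },
      amount: { currency: "USD", value: amount },
    });
    log.push("created -> pending: transfer accepted; transfer_created webhook fired");

    // ...drive sandbox processing, force the requested `outcome`, and record each
    // state transition, its timing, the webhook it emits, and a plain-English
    // explanation of what is happening behind the scenes...

    return { content: [{ type: "text", text: log.join("\n") }] };
  }
);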
What Makes a Better MCP Server?
After reflecting on our experience and studying how developers actually work with Dwolla, we believe a valuable MCP server should:
1. Solve Complete Problems, Not Just Expose Endpoints
Instead of forcing LLMs to orchestrate multiple low-level operations:
create_customer(data)
create_funding_source(customer_id, bank_data)
initiate_micro_deposits(funding_source_id)
// ...wait 1-2 days...
verify_micro_deposits(funding_source_id, amounts)
create_transfer(source, destination, amount)
This requires knowing the exact sequence, handling timing between steps, and managing state across multiple API calls.
Provide a single workflow-based tool that handles the complete scenario:
generate_funds_flow("receive-money")
→ Creates a customer and a verified funding source,
  initiates a test transfer, and simulates the transfer lifecycle
This encapsulates the entire workflow, understanding dependencies and timing, so developers can test end-to-end flows immediately without manually creating each piece.
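A sketch of how such a tool might compose those same low-level steps, again building on the earlier snippets. The sandbox shortcut noted in the comments and the response wiring are assumptions about the design, not shipped behavior.

server.tool(
  "generate_funds_flow",
  { scenario: z.enum(["send-money", "receive-money", "facilitate-payments"]) },
  async ({ scenario }) => {
    // The tool owns the sequencing and state the LLM would otherwise juggle.
    const customer = await dwolla.post("customers", { /* verified test customer */ });

    // In the sandbox, a funding source can be set up already verified,
    // collapsing the 1-2 day micro-deposit wait into a single step.
    const source = await dwolla.post(`customers/${customer.id}/funding-sources`, {});

    const transfer = await dwolla.post("transfers", { /* wired for `scenario` */ });
    return {
      content: [{
        type: "text",
        text: `Scenario "${scenario}" ready: customer ${customer.id}, ` +
              `funding source ${source.id}, transfer ${transfer.id}.`,
      }],
    };
  }
);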
2. Combine API Calls with Documentation and Context
A tool that just calls GET /transfers/{id} isn't much help. But a tool that retrieves the transfer, checks both customers, examines funding sources, searches documentation for the specific error code and explains in plain English what went wrong and how to fix it—that's valuable.
This is where search_documentation becomes crucial. It's not just another tool; it's what enables other tools to be intelligent. Tools can search docs in real-time to provide context-aware responses.
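For example, a debugging tool can lean on the same search under the hood. In this sketch, searchDocs is a hypothetical helper wrapping that semantic search, and the /failure sub-resource follows Dwolla's API for retrieving a transfer's failure reason.

// Hypothetical helper backing search_documentation, reused by other tools.
declare function searchDocs(query: string): Promise<string[]>;

server.tool("debug_transfer_issue", { transferId: z.string() }, async ({ transferId }) => {
  const transfer = await dwolla.get(`transfers/${transferId}`);
  const failure = await dwolla.get(`transfers/${transferId}/failure`);

  // A real-time docs lookup turns a bare return code into an explanation.
  const guidance = await searchDocs(`ACH return code ${failure.code}`);

  const text = [
    `Transfer ${transferId} is ${transfer.status}, failure code ${failure.code}.`,
    ...guidance,
  ].join("\n");
  return { content: [{ type: "text", text }] };
});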
3. Handle Common Workflows End-to-End
Our initial MCP server exposed 40+ tools mapping to individual API endpoints. In practice, this created problems:
Token waste: Every tool definition is sent to the model with each request. With 40+ options, most of those tokens describe tools that will never be used for the current task.
Cognitive overload: Wading through dozens of granular operations helps no one. Developers want to describe a goal and get results.
Increased hallucinations: More tools mean more opportunities for the LLM to choose incorrectly or combine tools in ways that don't make sense.
A focused set of 5-10 workflow-based tools eliminates this overhead. Each tool has a clear purpose, reducing ambiguity and helping both the LLM and the user understand exactly what's available.
What We're Exploring: Workflow-Based Tools
We're working on a new approach that provides intelligent, workflow-oriented tools. Here are some examples:
For Integration & Testing
- generate_funds_flow - Creates complete test scenarios (send money, receive money, facilitate payments) with all entities properly set up
- simulate_transfer_lifecycle - Walks through an entire transfer flow showing each state transition, timing, and webhook events
- simulate_business_verification - Sets up a business customer with beneficial owners and documents, showing the complete verification process
For Operations & Debugging
- debug_transfer_issue - Analyzes a failed transfer by checking customer status, funding sources, and balance; explains the error in plain English with remediation steps
- diagnose_verification_issue - Explains why a customer is stuck in verification and what's needed to proceed
- replay_missed_webhooks - Finds webhooks that failed delivery and replays them
For Understanding Dwolla
- search_documentation - Semantic search that provides context to both developers and other tools
- explain_concept - Interactive explanations of Dwolla concepts (customer types, verification, beneficial ownership, etc.)
- compare_customer_types - Side-by-side comparison to help choose the right customer type
For Production Operations
- bulk_transfer_status - Check multiple transfers at once with filtering and analysis
- trace_money_flow - Follow funds through complex facilitated transfer chains
- monitor_mass_payment - Identify successful and failed items in a bulk payment
Each of these tools combines multiple API calls with documentation, expertise and contextual intelligence to solve a complete problem.
The Remote MCP Architecture
We're exploring building our next iteration as a remote MCP server—meaning it runs as a service that developers connect to, rather than installing locally. This gives us several advantages:
Always Current: Documentation search reflects the latest API changes without requiring developers to update their local installation.
Contextual Intelligence: Tools can reference documentation in real-time and combine multiple data sources to provide richer, more accurate responses.
Reduced Installation Friction: No local dependencies, version conflicts, or installation troubleshooting—just connect and start building.
Improved Over Time: We can enhance tools and toolsets based on usage patterns, and every developer immediately benefits from improvements.
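As a sense of what "just connect" means in practice, here is a minimal client sketch using the TypeScript MCP SDK's Streamable HTTP transport. The server URL is a placeholder, since nothing has shipped yet.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "example-client", version: "1.0.0" });

// Placeholder URL: no install, no local process, just an HTTP connection.
await client.connect(new StreamableHTTPClientTransport(new URL("https://mcp.dwolla.example/mcp")));

const { tools } = await client.listTools();
console.log(tools.map((t) => t.name)); // the focused, workflow-based toolset

const result = await client.callTool({
  name: "debug_transfer_issue",
  arguments: { transferId: "some-transfer-id" },
});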
Where We Need Your Help
We have a direction, but we need input from real developers to prioritize what to build first and ensure we're solving actual problems.
What Would Be Most Valuable to You?
About Testing & Integration:
- Would simulate_transfer_lifecycle save you significant time when testing your integration?
- What test scenarios do you repeatedly set up manually?
- How much time do you spend creating test data in the sandbox environment?
About Debugging:
- How often do you need to debug failed transfers or stuck verifications?
- What's your most common support question?
- What manual troubleshooting process takes the most time?
About Documentation:
- How much time do you spend searching Dwolla documentation?
- What questions do you ask most frequently?
- Would context-aware documentation search be valuable?
About Operations:
- Do you need tools for bulk operations (status checks, webhook replay, etc.)?
- What operational tasks are most tedious?
- What would you want an AI agent to handle for you?
About the Approach:
- Does this workflow-based approach resonate with your needs?
- What tools are we missing from the list?
- Any concerns about a remote MCP server vs. local installation?
The Bigger Picture
Our original MCP server provided API access. That wasn't wrong; it just wasn't enough to compel adoption.
What we're learning is that MCP's real value emerges when AI can reason about complete workflows, not just execute isolated operations. It's about enabling Claude or other AI assistants to act as intelligent integration partners—not just "here's the API," but "here's how to solve your problem using the API."
We're building tools that combine API capabilities with documentation, best practices, and integration expertise. Tools that handle complete workflows instead of individual operations. Tools that explain why something is happening, not just what is happening.
This is what we think a better MCP server looks like. But we're not building it in isolation.
Help Us Build This
If you're integrating with Dwolla—or thinking about it—we'd love your input. We're looking for developers who want to:
- Shape what gets built first: Tell us which workflows would save you the most time
- Test early versions: Get access before public release and influence the direction
- Share real problems: Help us understand where you actually get stuck
- Build alongside us: Your integration challenges inform our tool design
We're not looking for extensive time commitments—even a 20-minute conversation about your integration experience would be incredibly valuable.
Interested? Email us at developers@dwolla.com
Tell us a bit about what you're building with Dwolla (or planning to build), and what would make your integration easier. We'll follow up to learn more about your needs and get you early access.
Let's build something together!