AI as Infrastructure: An Engineering Journal from Modern Treasury
Explore how Modern Treasury applies AI to unify internal systems, streamline workflows, and improve operational efficiency across engineering, go-to-market, and customer success. This series covers practical applications of LLMs, natural language interfaces, and AI-driven infrastructure.
Building with AI: An Inside-Out Approach
Over the last 12–18 months, AI has become a foundational layer for companies that want to operate with greater speed and scale.
At Modern Treasury, we’re treating it that way: integrating AI across the company as a core part of how work gets done.
I lead the engineering team behind this effort, and our goal is straightforward: increase the speed, quality, and leverage of our organization so we can deliver better outcomes for our customers.
A major part of this work is rethinking how teams interact with the systems that run the company.
Historically, critical operational context has been fragmented across tools like Datadog, GitHub, Linear, Salesforce, Zendesk, Snowflake, Hex, Notion, Gong, Figma, and internal applications. Engineers, support teams, and operators often spend significant time navigating between systems, stitching together context manually, and translating information across workflows before they can take action.
We’re consolidating access to these systems into a unified interface powered by AI. Instead of jumping between dashboards, ticketing systems, databases, documentation, and logs, teams can retrieve, synthesize, and act on information through natural language with appropriate permissions, validation, and auditability.
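To make this concrete, here is a minimal sketch of how a unified interface can gate tool access by role and record every call for auditability. The roles, tool names, and stubbed fetchers are illustrative assumptions, not Modern Treasury's actual systems; the real tools (Zendesk, Datadog, GitHub) would sit behind the stubs.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical role -> permitted tools mapping (illustrative only).
PERMISSIONS = {
    "support": {"tickets", "logs"},
    "engineer": {"tickets", "logs", "repos"},
}

# Stub fetchers standing in for real integrations (e.g. Zendesk, Datadog, GitHub).
TOOLS = {
    "tickets": lambda q: f"tickets matching {q!r}",
    "logs": lambda q: f"log lines matching {q!r}",
    "repos": lambda q: f"code results for {q!r}",
}

@dataclass
class AuditLog:
    """Append-only record of every tool invocation."""
    entries: list = field(default_factory=list)

    def record(self, user, tool, query):
        self.entries.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "tool": tool,
            "query": query,
        })

def run_query(user, role, tool, query, audit):
    """Execute a tool query only if the role permits it; audit every call."""
    if tool not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"role {role!r} may not access {tool!r}")
    audit.record(user, tool, query)
    return TOOLS[tool](query)
```

In a real deployment, the natural-language layer would map a user's question to one of these tool calls; the point of the sketch is that permissioning and audit live below that layer, so every path into operational data passes through them.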
A key part of this effort is building integrations between AI systems like Claude and Devin and the operational tooling that powers the company day to day.
Rather than treating AI as a separate destination, we’re embedding it directly into operational workflows.
AI is only as effective as the data and workflows it can access. When information remains fragmented across systems, most of the work still goes into gathering context instead of making decisions. By consolidating access to operational knowledge and workflows, teams can move directly to higher-leverage work.
This also requires rethinking how systems are structured internally. The largest gains come from reducing silos across engineering, operations, support, and go-to-market teams so information flows more directly throughout the organization.
This journal is the beginning of a series where we’ll share how we’re building toward that model and what we’re learning along the way.
Where we’re applying AI across the company
We’re rebuilding high-friction workflows across the company so that fewer steps require manual context gathering, synthesis, and coordination.
A recurring pattern across these efforts is reducing the need to move between systems. By connecting our internal tools and data sources into a shared operational layer, teams can retrieve and act on context through natural language instead of manually navigating multiple applications.
In practice, that includes:
1. Support and customer operations workflows that start with context
Support and operations teams often need to correlate information across Zendesk, transaction systems, internal tooling, Datadog logs, and customer history before diagnosing an issue.
We now generate structured operational context upfront by aggregating relevant transaction history, system behavior, prior incidents, and customer interactions into a single interface. This allows teams to move directly into diagnosis and resolution rather than spending time assembling context manually.
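A minimal sketch of the "context first" shape: gather related records from several sources into one structured bundle before a human or model begins diagnosis. The fetchers below are stubs standing in for real systems (a transaction store, monitoring, ticketing); the field names are invented for illustration.

```python
# Stub data sources; in practice these would call real systems
# such as a transaction store, Datadog, and Zendesk.

def fetch_transactions(customer_id):
    return [{"id": "txn_1", "status": "failed"}]

def fetch_recent_alerts(customer_id):
    return [{"monitor": "payment-latency", "state": "warn"}]

def fetch_ticket_history(customer_id):
    return [{"ticket": "ZD-1042", "subject": "payout delay"}]

def build_context(customer_id):
    """Aggregate everything relevant into one bundle up front,
    so diagnosis starts from assembled context rather than from
    manually stitching systems together."""
    return {
        "customer": customer_id,
        "transactions": fetch_transactions(customer_id),
        "alerts": fetch_recent_alerts(customer_id),
        "tickets": fetch_ticket_history(customer_id),
    }
```

The bundle is deliberately structured rather than free text, so it can be rendered in a support interface or passed directly into a model prompt.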
2. Engineering workflows connected directly to operational systems
Engineers can query internal documentation, GitHub repositories, Linear issues, observability systems, and operational data sources in natural language.
This shortens the loop between identifying an issue and implementing a fix, especially in environments with high domain complexity and large amounts of fragmented institutional knowledge.
We’re also experimenting with systems where AI agents like Claude and Devin can operate directly against internal tooling with scoped permissions and validation layers.
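One way to picture a validation layer between an agent and internal tooling: the agent proposes an action, and a gate checks both scope and arguments before anything executes. The action names and validation rules below are invented for illustration.

```python
# Hypothetical allow-list: each action is paired with a validator
# that must approve the proposed arguments before execution.
ALLOWED_ACTIONS = {
    "create_linear_issue": lambda args: bool(args.get("title")),
    "requeue_webhook": lambda args: args.get("attempts", 0) < 5,
}

def execute(action, args, scope):
    """Run an agent-proposed action only if it is within the agent's
    granted scope and its arguments pass validation."""
    if action not in scope:
        return {"ok": False, "reason": "out of scope"}
    validator = ALLOWED_ACTIONS.get(action)
    if validator is None or not validator(args):
        return {"ok": False, "reason": "failed validation"}
    # ...the real call to internal tooling would happen here...
    return {"ok": True, "action": action}
```

Scoping and validation as separate checks means an agent can be granted a narrow set of actions per task, and even in-scope actions are rejected when their arguments look wrong.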
3. Go-to-market and business operations workflows with embedded context
Go-to-market and business operations teams rely heavily on systems like Salesforce, Gong, Snowflake, Hex, and Notion to understand customer behavior and operational performance.
Many of these workflows historically required significant manual synthesis across meetings, reporting systems, and internal documentation.
We’re automating large parts of that first-pass analysis layer so teams can generate structured reporting, identify customer patterns, summarize operational changes, and produce internal content with accurate context already attached.
Across all of these efforts, the pattern is consistent: reduce the amount of work required to get to a high-quality first pass, and let humans focus on validation, judgment, prioritization, and edge cases.
These systems are already embedded in day-to-day workflows across teams, and the impact is a steady compression of time, operational overhead, and context-switching across the organization.
Turning internal gains into customer outcomes
For a financial infrastructure company, internal systems are tightly coupled to customer experience. Improvements in how we operate do not stay internal. They directly shape how quickly and how well we can serve customers.
When internal workflows require less time to gather context and reach decisions, customer-facing workflows improve as a result:
- Faster internal workflows lead to quicker customer response times
- Better internal visibility leads to more accurate reporting and fewer blind spots
- Increased engineering velocity leads to shorter iteration cycles and faster delivery of product improvements
- Reduced operational overhead creates more capacity to focus on complex, high-impact customer needs
As adoption expands, this creates tighter feedback loops across the company, which results in a system that improves its own performance over time.
When support teams resolve issues faster, that data feeds back into product and engineering with less delay. When engineering ships improvements faster, operational workflows become simpler and more reliable. When internal data is easier to access and interpret, decision-making improves across every function.
This has direct implications for the product. Over time, it allows us to expand the scope of what we can solve. Instead of addressing isolated problems, we can take on broader operational challenges with greater speed and confidence.
In that sense, internal efficiency is a core driver of product quality, product velocity, and the range of outcomes we can deliver to customers.
What we’ll cover in this series
In upcoming posts, we’ll go deeper into how we build and deploy these systems across the company.
That includes:
- Architectures for integrating LLMs into operational workflows
How we structure systems that connect models like Claude and Devin to internal infrastructure and operational tooling. This includes orchestration layers, retrieval systems, context construction, validation pipelines, and guardrails for correctness and safety.
- Building a unified interface over fragmented systems
How we connect tools like Datadog, GitHub, Linear, Salesforce, Snowflake, Zendesk, Notion, Gong, Hex, and internal applications into a shared operational layer. We’ll cover how teams retrieve and act on information through natural language while maintaining strict permissioning, auditability, and operational controls.
- Build vs. buy decisions across the AI stack
Where we rely on external models and tooling, where we build internally, and how we think about control, reliability, cost, and long-term differentiation.
- Case studies from production workflows
Concrete examples where these systems drive measurable improvements across engineering, support, operations, and go-to-market workflows. This includes reductions in time-to-resolution, faster iteration cycles, increased throughput, and changes in how teams operate day to day.
- Failure modes and lessons learned
Where systems break down, what has not worked, and how we adapt as these systems move from experimentation into core operational infrastructure.
Across all of this, the goal is to share not just what we build, but how we make decisions, including what we prioritize, what we avoid, and where we believe the highest-leverage opportunities exist.