The Brutal Truth About Agentic AI and Integration Debt
I’ve been privileged over the past year to step into a broader technology landscape. Less time anchored to a single platform, more time listening. Listening to partners, yes, but far more to customers and the business markets they operate in. Different industries, different pressures, same underlying tension. Complexity is rising faster than most organisations can absorb it.
That context matters. The conversation around agentic AI is accelerating, yet the narrative on stage is far more coherent than the systems underneath it.
I’ve seen a lot of keynotes this year. The pattern is hard to miss.
As Satya Nadella puts it, “Every company will be an AI-first company.” On stage, that future feels fully formed.
Watch enough enterprise tech keynotes and the message becomes familiar.
AI agents are here.
They reason.
They act.
They unlock abundance in a world constrained by people, time, and budget.
Salesforce talks about Agentforce and systems of context, work, agency, and engagement. Microsoft talks about Copilot everywhere. Cloud providers talk about choice. Data platforms remind us that none of this works without trusted, governed data.
None of this is wrong.
But it is incomplete.
The diagrams are neat. Layered stacks. Clear ownership. Everyone politely staying in their lane. They suggest the future enterprise is something you can architect once and then scale.
That is not what most customers are experiencing.
What it feels like on the ground
Step off the stage and into an actual business and the picture changes quickly.
Most enterprise organisations are not choosing between platforms. They already have all of them. CRM, ERP, collaboration tools, data platforms, integration layers, and a long tail of SaaS that nobody quite owns but nobody can switch off.
What they want is not another interface or another pilot.
They want fewer steps, fewer handoffs, and less time spent feeding systems instead of doing the work.
When customers talk about where AI could help, they do not ask for agents. They ask questions like:
Why does this still take three tools and five approvals?
Why does the data exist but never seem usable when it matters?
Why are my best people stuck updating systems instead of creating value?
That gap between the stage story and operational reality is where this entire conversation really sits.
The part that doesn’t fit the diagram
Alongside the enterprise agent narrative, something more disruptive is quietly happening.
LLM-first tools are not positioning themselves as better CRM features or smarter workflow engines. They are positioning themselves as the place work starts.
You ask.
They reason.
They act across tools.
In that model, systems like Salesforce, ServiceNow, or HubSpot do not disappear. But their role changes. They become systems of record and execution, not destinations.
If a user can ask an AI to summarise their pipeline, prepare a meeting brief, chase follow-ups, and update records without ever opening the CRM, the value of the interface collapses. The system still matters. The seat matters less. That has implications for seat-based pricing models.
I wrote more on how SaaS platforms are adjusting to that pressure here.
That tension is rarely acknowledged on keynote stages, but customers feel it instinctively.
Why Salesforce is investing in Slack and Informatica
Seen through this lens, Salesforce’s recent moves make more sense.
Investing heavily in Slack is not just about collaboration. It is about interface control.
If work is increasingly expressed conversationally, then the most valuable real estate is where people already spend their day. Slack gives Salesforce a surface where humans and agents can interact without forcing users back into traditional application UIs. It is a hedge against interface bypass.
The same logic applies to data.
The Informatica acquisition is not really about adding another product. It is about pulling data quality, lineage, and governance closer to where agents reason and act. If AI quality depends on trusted context, Salesforce cannot afford to treat that layer as someone else’s problem.
Taken together, this is a deliberate attempt to sit across interface, data, execution, and agency.
Between reasoning and execution
These changes put traditional integration and automation platforms in an uncomfortable position.
Tools like MuleSoft, Workato, UiPath, and Boomi were built for a world where systems were stable, workflows were predefined, and humans decided what happened next.
Agentic AI disrupts that model.
From above, LLMs can reason across steps and choose tools dynamically. From below, platforms are embedding native automation and agents directly into their products.
The middle layer is under pressure. But most enterprises still depend on it.
Why execution still needs a middle layer
Despite the hype, AI does not really learn in real time. Not yet.
Most enterprise AI systems infer, retrieve, and adapt within guardrails. But when you want them to genuinely improve, you are usually looking at a rework.
A prompt change.
A policy tweak.
A workflow adjustment.
A data model update.
A redeployment cycle.
Each uplift is intervention, not evolution.
That is why orchestration still matters.
Enterprises still need deterministic execution, versioned logic, rollback paths, and clear separation between reasoning and doing. Agents may decide what should happen, but something still needs to make it happen reliably.
The middle layer does not disappear. It becomes more constrained, more governed, and less visible.
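To make the separation concrete, here is a minimal sketch of what that constrained middle layer might look like: an agent proposes an action, and a deterministic execution layer validates it against a registry, carries it out, logs it, and keeps a rollback path. All names here are hypothetical; this is not any vendor's API, just an illustration of reasoning and doing kept apart.

```python
# Illustrative sketch: agents decide *what* should happen; a governed
# execution layer controls *how* it happens, with audit and rollback.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ProposedAction:
    agent_id: str
    action: str    # e.g. "apply_discount"
    params: dict

@dataclass
class ExecutionLayer:
    registry: dict = field(default_factory=dict)   # action -> (handler, undo)
    audit_log: list = field(default_factory=list)
    undo_stack: list = field(default_factory=list)

    def register(self, name: str, handler: Callable, undo: Callable):
        self.registry[name] = (handler, undo)

    def execute(self, proposal: ProposedAction):
        # Only registered, governed actions are allowed through.
        if proposal.action not in self.registry:
            self.audit_log.append(("rejected", proposal.agent_id, proposal.action))
            raise PermissionError(f"{proposal.action} is not an allowed action")
        handler, undo = self.registry[proposal.action]
        result = handler(**proposal.params)
        self.audit_log.append(("executed", proposal.agent_id, proposal.action))
        self.undo_stack.append((undo, proposal.params))
        return result

    def rollback_last(self):
        undo, params = self.undo_stack.pop()
        undo(**params)
        self.audit_log.append(("rolled_back", None, None))

# Demo: the agent chooses the action; the layer makes it safe and reversible.
orders = {"A100": 200.0}

def apply_discount(order_id, pct):
    orders[order_id] = round(orders[order_id] * (1 - pct), 2)

def undo_discount(order_id, pct):
    orders[order_id] = round(orders[order_id] / (1 - pct), 2)

layer = ExecutionLayer()
layer.register("apply_discount", apply_discount, undo_discount)

layer.execute(ProposedAction("agent-7", "apply_discount",
                             {"order_id": "A100", "pct": 0.1}))
# orders["A100"] is now 180.0, logged and reversible
layer.rollback_last()
# orders["A100"] is back to 200.0
```

The point is not the code itself but the contract: agents never touch systems of record directly, and every act is versioned, observable, and undoable.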
What is actually broken
What is broken is not AI.
What is broken is the assumption that enterprise systems stitch together cleanly in real time.
APIs help.
Standards help.
Platforms promise composability.
But under pressure, the seams show.
Data semantics drift.
Permissions misalign.
Upstream changes break downstream logic.
Vendors blame implementation.
Customers absorb the complexity.
Agentic AI does not fix this. It exposes it faster.
Driving AI from non-native platforms introduces fragility around context, permissions, and accountability. Embedding AI natively improves safety, but limits flexibility. Every vendor is making a rational bet. None of them can cover the whole surface area.
The real battle line
At this point, it helps to name the thing sitting underneath all of this.
The value customers are chasing can be reduced to a simple equation:
V = (C × A) / F

Value (V) is the outcome customers actually experience.
Context (C) is trusted, governed, usable data.
Agency (A) is the ability to actually act, not just suggest.
Friction (F) is everything that slows humans down or forces them to work around the system.
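To see how the equation behaves, a toy calculation helps. The numbers below are invented scores on a 0 to 10 scale, purely illustrative, not measurements of any real product.

```python
# Illustrative only: made-up scores to show how V = (C × A) / F behaves.
def value(context: float, agency: float, friction: float) -> float:
    """V = (C × A) / F"""
    return (context * agency) / friction

# A governed, native platform: strong context, decent agency, high friction.
native = value(context=9, agency=6, friction=6)     # 9.0

# An LLM-first tool: fuzzier context, similar agency, far less friction.
llm_first = value(context=5, agency=6, friction=2)  # 15.0
```

Even with weaker context, collapsing friction can tilt the equation decisively. That is the mechanism behind everything that follows.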
Every keynote, acquisition, and agent demo is an attempt to rebalance that equation.
Salesforce invests in Informatica because losing control of context collapses the numerator. They invest in Slack because if interaction moves elsewhere, friction spikes and value leaks. The middle-layer platforms survive because they are still the safest way to turn intent into agency without chaos.
Meanwhile, LLM-first tools attack the denominator.
They win by collapsing friction. If a user can get most of the outcome with a fraction of the effort, the equation tilts in their favour, even if governance is weaker and the edges are rougher.
This is not agents versus agents.
It is context versus convenience.
Control versus speed.
Governance versus momentum.
The rise of agentic debt
A new kind of debt is forming.
Not technical debt, but agentic debt.
Hundreds of small agents, automations, and shortcuts, each individually helpful, collectively opaque. When something goes wrong (a discount applied twice, conflicting emails sent, a customer action reversed), nobody quite knows which agent acted, which system authorised it, or who owns the blast radius.
The defining challenge of the next phase is not model capability. It is managing autonomous intent without losing accountability.
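One practical starting point is to insist that no agent acts without leaving a record that answers those three questions. A minimal sketch of such a record, with hypothetical field names, might look like this:

```python
# Sketch of the minimum record that keeps autonomous intent accountable.
# Field names are hypothetical; the point is that every agent action
# should answer: which agent, which authorisation, whose blast radius.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class AgentActionRecord:
    agent_id: str        # which agent acted
    authorised_by: str   # which system or policy granted the authority
    owner: str           # who owns the blast radius if it goes wrong
    action: str
    target: str
    reversible: bool
    timestamp: str

def record_action(agent_id, authorised_by, owner, action, target, reversible=True):
    return AgentActionRecord(
        agent_id=agent_id,
        authorised_by=authorised_by,
        owner=owner,
        action=action,
        target=target,
        reversible=reversible,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

rec = record_action(
    agent_id="pricing-agent-2",
    authorised_by="crm-discount-policy-v14",
    owner="revenue-ops",
    action="apply_discount",
    target="order:A100",
)
```

It is deliberately boring. Agentic debt accrues precisely where records like this do not exist.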
Epilogue: How to make the call in 2026
A note for the CIO or COO.
You do not back a platform. You back a constraint profile.
For high-stakes core processes like revenue, finance, or compliance, bias toward protecting Context. Native, governed AI will feel slower, but confidently wrong agents are too expensive to tolerate.
For knowledge work and innovation, bias toward collapsing Friction. Let teams use LLM-first tools. Accept fuzzier context in exchange for speed and learning.
Do not remove your integration layer. Re-task it. Its role is no longer to connect apps, but to act as the safety valve for Agency. When an agent decides to act, execution should be deterministic, reversible, and observable.
The organisations that win will not eliminate friction entirely or lock everything down in the name of safety.
They will understand the equation well enough to move deliberately along it, knowing exactly which compromises they are making, and why.
That is the decision the diagrams do not show.
And it is the one leaders are already being forced to make.