The Maturity Gap Holding Back AI in Asset Management
April 17, 2026 by Sabrina Bartha
At the BVI Fund Operations Conference in Frankfurt, where SS&C presented on the state of AI adoption in asset management, one pattern was difficult to ignore: the industry is approaching an inflection point. Across the sector, firms are experimenting with AI at unprecedented speed, yet very little of that experimentation is translating into measurable operational value. The gap is not between ambition and technology; it is between ambition and execution.
The Real Constraint Is Governance, Not Innovation
In regulated firms, the limiting factor is rarely the quality of the underlying model. The real constraint is whether the deployment model can withstand regulatory scrutiny.
When AI initiatives fail, it is usually because the organizational infrastructure around them has not been redesigned to accommodate them. The technology may work perfectly well, but the surrounding controls, policies and accountability structures are missing.
AI without governance is a liability
This means that AI adoption in financial services is fundamentally an operating model discussion, not a technology one. Governance must be embedded from the start, with sensitive data appropriately redacted, policies actively enforced, and every action logged and auditable. Workflows that lack consistent guardrails, clear accountability and auditability cannot operate responsibly in a regulated environment, regardless of how capable the underlying model may be.
Pressure Is Building From Multiple Directions
The urgency of addressing this maturity gap is increasing. Regulatory timelines, exploding data volumes and rising operational complexity are converging. Functions like KYC and AML, risk, finance, operations and client servicing are all operating under tighter regulatory deadlines. At the same time, structural shifts such as T+1 settlement and the growing momentum behind tokenization are accelerating the need to modernize, with automation becoming less of an efficiency gain and more of an operational necessity.
Asset managers are being asked to deliver more with fewer resources and under closer scrutiny than ever before. In that environment, a governance deficit becomes particularly costly. Firms that invest heavily in innovation without investing equally in operating model redesign and oversight infrastructure will struggle to realize meaningful value.
From Experiments to Real Workflows
The difference between experimentation and operational value becomes clear when AI is applied to real workflows.
Consider a common operational challenge in private markets: extracting data from capital account statements and investor reports across hundreds of GP portals. The underlying technology required to read documents, extract data and reconcile it across systems already exists.
The difficulty lies in building the controls around that workflow: verifying data provenance, enforcing policy checks, managing exceptions and ensuring that every decision can be audited. Without that operational framework, even technically successful AI solutions cannot be deployed safely at scale.
Three Conditions AI Agents Need to Function Safely
This governance discussion becomes even more important with the emergence of agentic AI systems. Unlike traditional automation, these systems can perform multi-step tasks, interact with multiple systems and execute workflows with a degree of autonomy.
For such systems to operate safely in financial services environments, three conditions must be in place:
- Reliable data context: agents must operate on governed data pipelines with clear provenance and appropriate access controls.
- Explicit task instructions and boundaries: agents require clearly defined objectives, escalation paths and policy constraints that determine what actions are permitted.
- Controlled system access: interactions with internal systems must occur through permissioned interfaces with full logging and auditability.
Deploying an agent into an environment with poor data governance, undefined permissions and no audit trail is not a technology problem. It is an operational risk waiting to happen. For example, an AI agent supporting a KYC review workflow may read onboarding documents, extract key data points, cross-check them against internal records and flag discrepancies—but only if the workflow defines clear permissions, escalation paths and a full audit trail.
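A deny-by-default permission check like the one this KYC example implies can be sketched in a few lines. This is an illustrative toy, not a real product interface: the action names and data structures are assumptions made up for the example.

```python
# Hypothetical guardrail for an agentic KYC workflow: every action the agent
# requests is checked against an explicit allowlist, anything outside its
# boundaries is escalated to a human reviewer, and every request is logged.

PERMITTED_ACTIONS = {"read_document", "extract_fields", "cross_check", "flag_discrepancy"}

def guarded_action(action: str, payload: str, audit_trail: list, escalations: list) -> bool:
    audit_trail.append((action, payload))   # full audit trail: log every request
    if action not in PERMITTED_ACTIONS:     # boundary check: deny by default
        escalations.append((action, payload))  # escalation path: route to a human
        return False
    return True
```

The design choice worth noting is deny-by-default: the agent can only do what the workflow explicitly permits, and any action it was never granted becomes a human decision rather than a silent failure.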
Closing the Gap
The firms that will close the maturity gap are those willing to treat AI as an operating model transformation rather than a technology refresh. That means fewer experiments and more disciplined deployment. Governance structures must evolve alongside AI capabilities, and organizations must be honest about which workflows are genuinely ready for automation and which still require human oversight.

The asset management industry does not lack AI ambition. What it lacks is the operational maturity required to deploy AI safely inside regulated environments.
The firms that succeed will not be those experimenting with models the fastest. They will be the ones redesigning their operating models so that AI can operate with control, accountability and measurable impact.
SS&C's position in this environment reflects a long-standing investment in building the infrastructure that makes responsible AI deployment possible. That means not just the models, but the governance frameworks, data controls and auditable workflows that allow those models to operate in regulated contexts with confidence.
Contact us to learn how SS&C can help you turn AI strategy into operational reality.
Written by Sabrina Bartha
Sales Director