Managing the AI Content Explosion in Financial Services
Advisers face major compliance risks if they can no longer oversee—or explain—their AI output.

Jamie Hoyle
Financial firms are reporting significant increases in communications volume as artificial intelligence tools become more widely adopted. After Bain & Co.’s mid-2024 survey found financial services firms seeing 20% productivity gains, broader 2025 studies showed gains reaching 40%, as tools and adoption matured.
When employees can draft content faster, they produce more of it. Meanwhile, the buffer time that used to exist between drafting, review and sending has compressed or disappeared entirely. This creates specific challenges across both surveillance and supervision functions under Financial Industry Regulatory Authority and Securities and Exchange Commission requirements.
FINRA Rule 3110 requires firms to establish procedures for reviewing correspondence and internal communications through ongoing surveillance, while also mandating supervision of marketing materials and public communications before distribution. Sampling rates that provided adequate coverage at previous volumes may no longer be sufficient as output multiplies. Similarly, compliance teams reviewing marketing materials face dramatically higher submission volumes without additional capacity.
The accuracy problem compounds this difficulty, affecting both surveillance and supervision. When an adviser drafts an email manually, they (hopefully) think through every claim and figure. When AI generates content and the adviser edits it, the cognitive process is different. Subtle errors slip through more easily: performance figures that sound authoritative but are wrong, fund characteristics that were accurate six months ago but have since changed, or incomplete regulatory disclosures.
The multiplication effect makes this more concerning: If an AI tool pulls an incorrect statistic into one communication, that same error can propagate across dozens of outputs. Worse, that flawed data may then feed into future AI generations, creating a cascade of related errors. A single wrong number about fund performance, once replicated across 40 client emails and then referenced in subsequent marketing materials, creates far greater regulatory exposure than one manually drafted error.
Why AI Must Be Explainable
The solution is not banning AI tools or trying to return to slower processes. That approach ignores market reality—competitors are using these tools, employees expect them, and the productivity gains are substantial. What firms need are compliance frameworks that acknowledge current output levels and adapt accordingly through intelligent systems that can prioritize what truly warrants human attention, whether that is surveillance of ongoing communications or supervision of marketing content.
Explainability becomes essential in this environment. When a surveillance system flags a client communication for review, or determines something is low-risk and does not flag it, a financial professional needs to be able to explain that decision to examiners. Even if the AI program makes the wrong call, a decision backed by documented evidence and clear reasoning reflects a defensible surveillance process.
Black-box AI systems, which do not provide their reasoning, leave users trusting an algorithm they cannot audit or explain. Using explainable AI demonstrates a reasonable, documented process that examiners can understand and evaluate. This matters especially when users rely on AI to monitor content that AI helped create. The explainability of surveillance tools directly affects users’ ability to demonstrate adequate oversight.
As FINRA noted in its 2024 guidance on AI, existing rules apply regardless of whether firms use AI technology, meaning both surveillance and supervision systems must meet the same standards.
Compliance Scaling With AI Output
Firms getting this right are treating AI adoption as an operational change that requires process updates, not just a productivity tool. They are establishing explicit policies about AI use in client-facing content; training both staff and compliance teams on what to look out for; and implementing intelligent systems that can handle current volumes while maintaining defensible oversight.
This combination works: AI-generated content for efficiency, paired with explainable AI-powered compliance for accountability. Firms can capture the productivity benefits of AI tools while maintaining oversight standards that hold up under regulatory scrutiny. The volume increase becomes manageable because intelligent prioritization directs compliance resources where they are actually needed.
Financial services firms will continue adopting AI tools because the competitive advantages are too significant to ignore. The firms that thrive will be those that evolve their compliance frameworks to match their new operational reality, maintaining defensible oversight through systems built on explainability, not blind trust.
Jamie Hoyle is the vice president of product at MirrorWeb, where he spearheaded the development of the company’s communications surveillance platform.
This feature is to provide general information only, does not constitute legal or tax advice, and cannot be used or substituted for legal or tax advice. Any opinions of the author do not necessarily reflect the stance of ISS STOXX or its affiliates.