Past the Bots: How to Implement AI for Efficiency
Generative artificial intelligence has moved past its “chatbot moment” for retirement plan advisers, and industry professionals now recommend implementing tools that enhance efficiency and save time.
“Firms see the most value when AI is embedded into their proprietary ecosystems, rather than used as a standalone chatbot,” Vickie Wicks, financial adviser and principal at Edward Jones, told PLANADVISER in an email.
The next phase is moving toward more efficient uses of AI, with tools that support adviser judgment rather than replace it. Effective implementations use AI for synthesis, scenario planning and communication, but never for final decision-making, according to Wicks.
The push for AI implementation comes not only from clients and investors, but also from advisers themselves who do not want to work at a firm that is falling behind technologically.
“If you aren’t adopting AI from a firm level, you are going to be left behind. You’re going to lose good recruiting opportunities,” says Matt Halloran, chief evangelist at Zocks Communications Inc., an AI platform for financial services. “I already had conversations with advisers who were like, ‘My firm has to implement this [AI] by mid-[20]26. If not, I’m going to find one who can.’”
Halloran also notes that large firms, notorious for multiyear software rollouts, no longer have the luxury of moving slowly.
From Chatbot to Assistant
The next step in integrating AI is educating employees and building tools inside the firm’s ecosystem, rather than relying on standalone chat windows. Halloran says effective usage includes internal AI assistants trained on firm-approved content; integrations into customer relationship management systems that help with meeting preparation and follow-ups; and knowledge assistants that make institutional expertise easier to access. Among the most promising applications of advanced AI is assisting with retirement scenario modeling and risk analysis, tasks that typically require significant time and expertise.
At Edward Jones, advisers use AI as a planning assistant, not an autonomous modeling tool, according to Wicks.
“Common uses include summarizing plan data and participant demographics; stress-testing retirement scenarios based on different contribution, return, or longevity assumptions; translating complex projections into clear, participant-friendly language; and identifying planning gaps or risks that warrant deeper human analysis,” she said.
These AI assistants ingest structured outputs from planning tools or spreadsheets and help financial advisers compare scenarios side by side, surface tradeoffs such as income sustainability versus market risk, and draft explanations or meeting materials tailored to different audiences. Halloran says many firms have already put these kinds of assistants to work effectively.
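For readers who want a concrete picture, the sketch below is a simplified, hypothetical illustration of that kind of side-by-side stress test. The scenario names, dollar amounts and projection logic are invented for the example and do not reflect any firm’s proprietary planning tools; a real analysis would be far more detailed and would still be interpreted by an adviser.

```python
# Illustrative only: a minimal deterministic scenario comparison, not any firm's
# proprietary planning tool. All assumption sets and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    annual_contribution: float   # dollars contributed per year while working
    expected_return: float       # assumed average annual return
    retirement_years: int        # longevity assumption: years of withdrawals
    annual_withdrawal: float     # dollars withdrawn per year in retirement

def project(start_balance: float, years_to_retirement: int, s: Scenario) -> float:
    """Grow the balance with contributions, then draw it down; return the ending balance."""
    balance = start_balance
    for _ in range(years_to_retirement):
        balance = balance * (1 + s.expected_return) + s.annual_contribution
    for _ in range(s.retirement_years):
        balance = balance * (1 + s.expected_return) - s.annual_withdrawal
    return balance

scenarios = [
    Scenario("Baseline", 15_000, 0.06, 25, 60_000),
    Scenario("Lower returns", 15_000, 0.04, 25, 60_000),
    Scenario("Longer retirement", 15_000, 0.06, 32, 60_000),
]

# Side-by-side summary a human adviser would review and interpret for the client.
for s in scenarios:
    ending = project(start_balance=250_000, years_to_retirement=20, s=s)
    status = "sustains withdrawals" if ending > 0 else "depletes early"
    print(f"{s.name:>18}: ending balance ${ending:,.0f} ({status})")
```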
Experts stress that AI does not replace actuarial models or fiduciary analysis but instead accelerates interpretation and communication.
Changing Perception of AI
In her conversations with advisers, Danielle Labotka, a behavioral scientist for Morningstar Research Services LLC, sees them using generative AI in simpler ways, such as note-taking and email writing.
“I believe the limited implementation of generative AI in financial planning indicates that advisers do not yet have sufficient guidance for how to use generative AI for more complex tasks,” she said.
Labotka points out that advisers can avoid drawbacks by thinking about AI in client-sensitive ways. She has three top recommendations:
- Use AI in a way that complements one’s capacities, rather than as a substitute for expertise;
- Do not use it as a shortcut to building and maintaining relationships; and
- Establish practice-level policies for using AI so it is implemented in a client-friendly way.
For a company like Zocks, part of the job of working with advisers and implementing AI tools for firms is keeping them aware of constant advancements in AI and showing them how to make the most of the proprietary tools Zocks provides.
“We have a team of 50 engineers in Zocks. So, there’s no way an adviser’s going to be able to figure out how to prompt their ChatGPT or Claude or [Google] Gemini to be able to do what 50 full-time engineers [can] do,” says Halloran.
“When you’re looking at something on a scale to really modify your practice, you need to look at integrations, making sure that whatever AI you’re using is going to integrate with the products and tools that you’re using,” he adds. “You also really want to make sure you really dig the interface.”
Human Review Is Not Optional
As firms and advisers move to real implementation, the most important step is to always keep a human in the loop of AI processes.
Labotka’s research found that “human review” was among the most common guardrails requested by investors. That human layer matters for two reasons clients care about: it ensures accuracy and it ensures the output is tailored by someone who actually knows the client’s needs.
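As a simplified illustration of what such a human-review guardrail might look like in software, the hypothetical sketch below blocks any AI-drafted client message until an adviser signs off. The function and field names are invented for the example and are not drawn from any firm’s actual compliance workflow.

```python
# Illustrative only: a hypothetical human-review gate for AI-drafted client
# communications. Names and structure are invented for the sketch.
from dataclasses import dataclass, field

@dataclass
class Draft:
    client_name: str
    body: str
    approved: bool = False
    reviewer_notes: list[str] = field(default_factory=list)

def adviser_review(draft: Draft, reviewer: str) -> Draft:
    """Nothing AI-generated reaches a client until an adviser signs off."""
    # In a real workflow the adviser would edit for accuracy and tailor the
    # message to the client's needs; here we simply record the sign-off.
    draft.reviewer_notes.append(f"Reviewed and approved by {reviewer}")
    draft.approved = True
    return draft

def send_to_client(draft: Draft) -> None:
    if not draft.approved:
        raise PermissionError("Draft has not passed human review; refusing to send.")
    print(f"Sending to {draft.client_name}:\n{draft.body}")

draft = Draft("Sample Client", "Summary of your updated retirement projections ...")
send_to_client(adviser_review(draft, reviewer="Lead adviser"))
```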
Clients also expect safeguards around privacy and data protection and disclosures about how an adviser is using AI, Labotka said.
Experts also recommend paying close attention to the inputs fed into the technology, since the quality of the outputs depends on the quality of what goes in.
“Stop trying to subscribe to perfection. What we want is accuracy,” says Halloran.