AI and the Future of Participant Services

Use cases for both generative and agentic artificial intelligence continue to expand, especially in areas like participant education and personalized experiences.

Plan sponsors and plan advisers are excited about the prospect that rapid advancement in artificial intelligence technology will eventually help them democratize custom and personalized services—providing scale and reach at a lower cost.

However, the pace at which such technology is advancing has many plan advisers still in discussions with their plan sponsors about the best way to leverage AI tools, which are evolving from generative—creating content from prompts such as text, image or code—to agentic models—executing tasks and making decisions inside a set environment.

Never miss a story — sign up for PLANADVISER newsletters to keep up on the latest retirement plan adviser news.

“It is starting to almost dominate the conversation in every interaction that I have with our plan sponsors, advisers, consultants and even participants,” says Randy Blaha, vice president of technology for Nationwide’s retirement solutions business. “This is an active dialogue happening across the space.”

For defined contribution plans, a generative AI tool can create content or power a chatbot that answers straightforward participant questions. Agentic AI, on the other hand, could potentially suggest best next steps for participants and then execute transactions on their behalf, working across multiple applications to solve problems in a more interactive way and on a more timely basis.

While plan advisers and other providers may be testing or building programs internally, plan sponsors do not yet have access to products in the marketplace that actually make use of agentic AI.

“We think that over time, there’s a really strong, compelling value proposition,” says Dennis Elliott, head of product and platforms at T. Rowe Price. “But it’s hard to see it just yet, because everyone is still building right now.”

Advisers Showing More Interest Than Sponsors

Plan advisers appear more willing to try out the new technology than plan sponsors. A recent Morningstar survey found that only 5% of sponsors were actively using artificial intelligence, though roughly half of those not using it would consider doing so. By contrast, 35% of advisers surveyed in Orion's Advisor Wealthtech Survey said they were already using AI and machine-learning-powered tools and services to better assist their clients.

“We’re starting to see some experimentation, more in the adviser space, especially at firms that provide a broader range of technology services,” Blaha says. “Those that don’t have those capabilities are really starting to put AI and technology advancements in their own development plans.”

Meanwhile, use cases for generative AI continue to expand, including for things like participant education and more personalized chatbots. AI agents are also increasingly able to help deliver a better experience to plan participants, Blaha says.

“We’re focusing those digital workers in areas that take our associates away from being available for participants and looking at how they can truly make an impact on the customer experience side,” he adds.

Prioritizing Data Security

As plan sponsors become more interested in such products, however, plan advisers are focused on ensuring that best-in-class data security and privacy protections remain in place.

“Particularly when you are starting to pilot a new technology or testing new things and bringing new vendors into the mix, there needs to be a pretty strong vetting process in place,” Elliott says. “That’s probably the biggest risk out there: that if you’re using participant data to create a more personalized experience, you have the right protections in place.”

Providers are also adjusting their offerings to meet the demands of individual clients, which can vary significantly.

“We have companies that want us to build them a generative AI chatbot, and we have other companies that want us to turn off AI for participants,” says Melissa Nysewander, head of Fidelity Investments’ workplace investing artificial intelligence center of excellence. “We have to deal with both angles, and we work regularly with our product team to make sure that we are delivering what clients want and need.”

Educating Participants

As new products come to market, plan advisers will likely need to work with clients to create plans aimed at explaining new features to participants and getting them comfortable with technology that may feel very foreign to them. David Blanchett, portfolio manager and head of retirement research at PGIM DC Solutions, likens the revolution to online, do-it-yourself tax programs.

“If you were to tell people 30 years ago that tens of millions of people would file their taxes online using TurboTax, they’d think you were crazy,” he says. “They’d say you need an accountant. We’re creating different financial ecosystems and different ways to get advice and guidance.”

In the long term, agentic AI has the potential to reduce friction and make both personalized financial planning and complex transactions easier for plan participants to complete. For example, multiple AI agents working together could ultimately recognize the need for a 401(k) rollover and complete the transaction on behalf of a participant.

“We’re moving to a future where there’s true autonomous decisions and actions being made through AI for individuals in all different ways,” says Amy Chou, chief operating officer of the financial wellness platform Addition Wealth.

Regulators Urged to Take Risk-Based Approach Toward AI

A recent SEC-hosted roundtable addressing artificial intelligence highlighted the risks and rewards of its usage in financial services.

Artificial intelligence is transforming nearly every corner of the financial industry, but regulators and industry leaders remain divided over how to define, govern and deploy the technology responsibly. 

Of the more than 500 firms polled in a Broadridge Financial Services survey earlier this month, 86% reported planning to increase AI investments over the next two years.  

Regulators have struggled to keep pace with the growing use of AI and the increasing investments in the technology made by financial firms. 

At a Securities and Exchange Commission roundtable on AI in financial services in March, panelists from major financial institutions, academia and technology firms warned that the speed of innovation is outpacing traditional regulatory frameworks. The sessions focused on both the risks and rewards of AI, as well as best practices for oversight and investor protection. 

Proposed Rule Awaits Revisions 

In July 2023, the SEC proposed a rule, commonly called the “predictive data analytics” proposal, that would require investment advisers and broker/dealers to “eliminate or neutralize” conflicts of interest arising from the use of technologies like AI in investor interactions.  

Following robust industry backlash, the agency agreed to revise the proposal, according to its July 2024 regulatory agenda, but those revisions have yet to be made. 

At the March roundtable, Commissioners Mark Uyeda and Hester Peirce, both Republicans, took aim at the original proposal during their opening remarks. 

Uyeda, who was acting SEC chair from January 20 until the April 21 confirmation of SEC Chair Paul Atkins, said he has “been concerned with some recent commission efforts that might effectively place unnecessary barriers on the use of new technology.” Peirce argued the agency fell victim to the commotion surrounding AI “when [the SEC] attempted to broadly and clumsily regulate the use of predictive data analytics by broker/dealers and investment advisers.” 

Commissioner Caroline Crenshaw, the lone Democratic commissioner until a vacant spot is filled, did not criticize the proposal directly but acknowledged that many felt it was inappropriate.  

Several panelists questioned whether regulators should even attempt to formally define AI. Gregg Berman, managing director of market analytics and regulatory structure at Citadel Securities, compared the situation to the rise of high-frequency trading, which he said ended up working fine without a concrete definition.  

“The question in my mind is not what the definition of AI is, but does it matter?” he said. 

Others, however, including Daniel Pateiro, a managing director for strategic initiatives and artificial intelligence in the office of BlackRock’s chief operating officer, said a common taxonomy could aid transparency and regulation, if it allows for flexibility.  

A definition “could be helpful in assigning and defining clear principles to help guide us forward,” he said. “I would suggest that we think about making sure that such definitions have sufficient flexibility such that we can adapt and evolve to changing capabilities that are moving at a rapid pace within this space.” 

State of Regulation 

Despite the evolution of AI, the U.S. lacks a comprehensive federal regulatory framework to govern its use, unlike the European Union, which passed the EU AI Act in July 2024. Some federal regulatory agencies, such as the Federal Trade Commission, have taken initiatives to safeguard consumers from “deceptive practices in AI applications.” 

Meanwhile, in California, the California Consumer Privacy Act set guidelines for data handling, affecting AI systems that rely on consumer data. 

The administration of President Donald Trump has pushed for a more relaxed regulatory environment, including the governance of AI. In January, Trump issued an executive order urging agencies to “remove regulatory barriers” to AI innovation and to file an interagency AI action plan by July 2025. 

In April, the administration released two revised policies on federal agencies’ use of AI, both modeled on Trump’s executive order. 

Internal Compliance 

Though federal regulation does not yet exist, representatives from several firms who spoke during the SEC roundtable said they have developed internal frameworks to monitor AI risks and ensure responsible deployment.  

Jeff McMillan, head of firmwide AI at Morgan Stanley, said the company uses a tiered, risk-based approach to classify use cases.  

“It’s incredibly easy to build [with generative AI] and very challenging to deploy responsibly,” he said. 

Johnna Powell, head of AI governance at the Depository Trust Co., said her company was an early adopter of an AI policy.  

“It’s really important to make sure that you have oversight of the entire life cycle of the AI technology from development to deployment,” Powell said. 

At Vanguard, Ryan Swan, its chief data analytics officer, said the firm created an “AI Academy” to build internal literacy and tailor its governance based on data sensitivity.  

Hilary Allen, a professor at American University’s Washington College of Law, encouraged the agency to hire more technologists and noted that a significant challenge of AI is admitting that the technology can sometimes be wrong. 

“We tend to think anything spit out of a computer is better than what we come up with ourselves,” she said. “It takes a lot to be able to say, ‘No, the machine is wrong.’” 

Nearly all participants agreed that a principles-based, risk-focused regulatory framework—along with clear communication and ongoing education—will be essential as AI becomes further embedded in the financial system. 

Firms Using AI Focus on Internal Operations 

Although regulatory oversight of AI remains light for the moment, in a sense, so has its usage. Nearly every firm representative at the SEC roundtable said the best usage of AI has come by making their internal operations more efficient.  

Douglas Hamilton, head of AI engineering and research at Nasdaq, said the exchange has deployed AI for internal productivity, index creation and institutional trading execution. At BlackRock, AI tools are used to optimize trading strategies and streamline operations such as reconciliations and corporate actions.  

“This AI moment is actually bringing to light not only AI solutions, but also non-AI solutions that, when strung together, are achieving greater results for our teams and, ultimately, our firms and our clients,” BlackRock’s Pateiro said. 

Speakers also said their companies evaluate the return on investment in AI through a combination of efficiency, alpha generation, revenue growth and risk reduction.  

Pateiro said BlackRock is using AI for algorithmic pricing and “as an investment process augmentation tool,” which helps to achieve “optimal trading strategies, which assist with achieving best execution, as well as reducing transaction costs.” 

However, not all results are positive. Allen, the American University professor, highlighted survey data showing that some tasks take longer with AI due to hallucinated or inaccurate outputs.  

“I don’t want to overstate the productivity gains from these Gen AI tools,” she said. “But sometimes they’re actually more time-consuming than just doing it yourself the first time.” 
