Moving Toward AI, Both Ethically and Responsibly
As use cases for artificial intelligence continue to expand across all fields, the financial services industry is grappling with how to embrace the technological advances ethically and responsibly. The stakes for financial services firms, however, may be higher than for firms in many other industries.
“We have an AI application for employees where they can type in benefits questions and get answers,” says Liz Davidson, CEO of Financial Finesse. “Imagine the mistakes if it says that you have a match of 8%, but it’s not right. This is a big deal; there can’t be any tolerance for wrong answers.”
Beyond making sure that AI hallucinations never make it into client-facing content or scenario analyses, sources say several areas have emerged as key focuses of ethical consideration for plan advisers, including avoiding bias, maintaining transparency and prioritizing both data privacy and security. Plan advisers and their clients who serve as fiduciaries must make sure that any tools they use, including AI, meet fiduciary standards.
“If AI tools are not governed responsibly, then the risk to the clients’ outcomes, privacy and trust can totally outweigh the benefits of using [them],” says Erika Wilson, chief marketing officer at Sound Income Group, whose work focuses on helping advisers optimize their AI efforts.
For now, Wilson says, the best way to use AI responsibly is to make sure a human is involved with all decisions, checking AI’s work and assumptions, and taking fiduciary considerations into account.
Davidson agrees, adding that Financial Finesse’s approach uses a closed system of vetted, unbiased content that is regularly reviewed and fact-checked by humans.
“We completely screen out the broader world, because if you can’t contain that piece of it, you run into a lot of risk,” she says.
A Cross-Functional Approach
Once firms have established the framework to govern their use of AI, they should conduct training to ensure that all employees are working from the same principles when making decisions about how to use AI.
“You have to really train people that it’s a tool, and it’s a tool that’s not 100% accurate, and you are responsible for whatever is sent out, as a result of either using a large language model or some internal AI,” Davidson says. “You need to have the same level of responsibility—maybe even more—that you would have for something that you developed yourself.”
Those decisions and the pre-approved tools and processes will likely look different for various functions at each company.
“I run the marketing in my company,” Wilson says. “The way that I utilize AI tools is totally different than the way that somebody in research or operations might use them.”
Taking a truly cross-functional approach to AI is a challenge for some financial services firms.
“To do it right, you really have to thread it through everything that you do, and big, complex organizations have a hard time with that because they’re usually organized in divisions that collaborate internally but have a harder time seeing the broader organization,” says Azish Filabi, managing director of the Maguire Center for Ethics at the American College of Financial Services, which recently held an industry roundtable, Cross-Functional Operationalization of Responsible AI in Financial Services.
A Guide for CFPs
One starting place for plan advisers looking to understand AI best practices might be the work that trade organizations have shared on the subject. In February, the CFP Board published “Generative AI Ethics Guide: A Checklist for Upholding the Code and Standards,” aimed at helping CFP professionals incorporate AI into their workflow. The guide detailed the ways a certified financial planner might use generative AI, including:
- Gathering information about a client (e.g., taking and summarizing client meeting notes, aggregating documents, etc.);
- Conducting initial research to assess strategies in the client’s best interest;
- Improving the clarity/comprehensibility of communications to the client and other constituents;
- Creating/refining public-facing content; and
- Generating ideas for building/improving a successful brand.
“Used appropriately, Generative AI can offer significant benefits, enhancing efficiency and allowing CFP professionals to dedicate more time on aspects of their business that Generative AI cannot replace, such as improving relationships and providing personalized value to their clients,” the guide stated.
It includes a 23-point checklist for CFPs to ensure they have conducted due diligence on any AI vendor, confirmed the accuracy of its output and ensured that privacy protection mechanisms exist for any sensitive client data or information.
Emphasizing Transparency
Financial professionals must also prioritize transparency when using AI, making sure that clients understand—and are comfortable with—the way advisers are employing AI tools when servicing their business. In addition, plan advisers need to be able to explain how and why AI tools made the recommendations they did.
“The biggest risk is just not doing your homework and not making sure that what you’re utilizing is secure, that the information is correct, and that you can back it up and be transparent if you use AI to generate something,” Wilson says.
Because AI continues to evolve quickly, plan advisers should continuously monitor their own approach to using AI tools, updating or changing it as necessary to reflect advances in the technology, as well as client needs and market conditions.
“There’s this fear that people have with AI that it’s going to replace humans, but actually what I’m hearing from a lot of companies is that they need to invest in their people,” Filabi says. “They want to really get them to understand this technology in a more advanced and elevated way and evolve their culture to be able to do it right.”