AI: From 'in the Loop' to in Control
As artificial intelligence use becomes more prevalent in financial advising, advisers may have a paradoxical reluctance to discuss it. ISS Market Intelligence—owned by PLANADVISER’s parent company, ISS STOXX—found last year that nearly half of surveyed advisers reported using AI tools, but an Advisor360 survey from last fall found that 79% of advisers said they would not initiate a conversation with clients about their use of AI. That included 30% who said they would never mention their AI use.
The Advisor360 survey authors concluded that some advisers may not discuss AI use because they view such tools as “infrastructure—essential, but invisible.” Other advisers could sense an ethical quandary, as the Advisor360 analysis stated that being open about incorporating AI could “spark questions about accuracy, privacy or the human touch.”
Even AI experts are aware of the technology’s shortcomings. In a recent personal experiment—out of curiosity, not on company business—Edwin Jongsma, vice president of AI at Financial Finesse, uploaded an anonymized bank statement to an AI platform and received incorrect answers to his queries.
“I said, ‘Tell me about my home insurance.’ It blatantly lied. It just came up with something that was not in that statement,” Jongsma says. “[AI is] fantastic, but it’s dangerous.”
For Leo Rydzewski, general counsel of the Certified Financial Planner Board of Standards Inc., honesty is always the best policy when it comes to disclosing one’s use of AI in client accounts and services. Rydzewski says the benefits of using AI outweigh the negatives, and clients may even be enthused to work with an adviser familiar with technology.
“I think, ultimately, that type of transparency builds trust in the client relationship. Some clients may celebrate the fact that you’re using AI,” Rydzewski says. “There [are] a lot of ways that an adviser can use AI to help better their practice and benefit the clients by freeing up time.”
Handy Guidance
To encourage proper, ethics-based use of AI among financial professionals, the CFP Board published a “Generative AI Ethics Guide,” an eight-page pamphlet that boiled down the complex topic to easy-to-understand principles and checklists of best practices. According to the board, certified financial planners have four primary standards to maintain when using artificial intelligence:
- Act with integrity and do not make untrue or misleading statements;
- Provide accurate information to clients;
- Keep nonpublic, personal information about clients confidential; and
- Comply with all laws and regulations and the lawful objectives of one’s firm.
The guide also stated that financial professionals need to critically evaluate any AI output for accuracy, completeness, omitted information and possible copyright infringement. Users also need to check for any material conflicts of interest raised by automated recommendations, as well as inappropriate bias, like output “skewed toward a particular investor demographic that does not match the needs of the client.”
“A lot of professionals are in the business of providing financial planning—not necessarily the business of knowing technology,” Rydzewski says. “[This] is designed to be a document that CFP professionals could look to, to feel comfortable that what they’re doing is satisfying their ethical obligations.”
Official AI policies remain uncommon in the advisory field. The ISS MI study of advisers, conducted in June 2025, found that while registered investment advisers were more likely to use AI tools (56% of responding RIAs, compared with 48% of total respondents), 78% of surveyed RIAs said their firms did not have written policies about the use of AI, compared with 35% of surveyed regional, independent and bank advisers.
At a webinar on AI in finance held last month by software company Smarsh Inc., Julia Ulloa, founder and principal in the regulatory consulting firm JU Regulation LLC, recommended that financial professionals—whether or not they have official company guidance—diligently document how they use AI and the rationale behind their decisions.
“Document your reason for it—how are you testing and monitoring the AI tool? … Make sure that the answers aren’t changing, that they’re consistent,” Ulloa said. “[As a result,] when you’re meeting with a regulatory body, you have a reason to justify why you took a specific approach.”
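Ulloa’s testing-and-monitoring advice lends itself to a small harness. The sketch below is a generic illustration, not anything Ulloa or Smarsh prescribes: it re-runs the same prompt several times, hashes the normalized answers to detect drift and appends a timestamped record to an audit file. The `query_model` callable is a hypothetical stand-in for whatever AI tool a firm actually uses.

```python
import hashlib
import json
from datetime import datetime, timezone
from typing import Callable

def normalize(text: str) -> str:
    """Collapse case and whitespace so trivial formatting differences don't count as drift."""
    return " ".join(text.lower().split())

def log_consistency_check(query_model: Callable[[str], str], prompt: str,
                          runs: int = 5, log_path: str = "ai_audit_log.jsonl") -> bool:
    """Re-run one prompt several times and append a timestamped record of the result."""
    answers = [query_model(prompt) for _ in range(runs)]
    digests = {hashlib.sha256(normalize(a).encode()).hexdigest() for a in answers}
    consistent = len(digests) == 1
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "runs": runs,
        "distinct_answers": len(digests),
        "consistent": consistent,
    }
    with open(log_path, "a") as log:  # append-only trail to show a regulator later
        log.write(json.dumps(record) + "\n")
    return consistent
```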
Keeping Humans in the Loop
Frequent advice for AI users is to “keep a human in the loop,” but human involvement begins long before reviewing output. A crucial first step is determining how AI programs are trained.
When Financial Finesse, a financial wellness program provider, developed “Aimee,” its proprietary, AI-powered virtual financial coach, developers fed the program thousands of the company’s previously published articles to ensure answers were based on vetted, dependable financial advice.
“We needed [to] tame the AI, to use the flexibility of conversation, but make it so that it’s based on our knowledge … and our knowledge was already captured in the articles that our coaches had been writing,” Jongsma says. “At least we know the answers are from us.”
As developers reviewed Aimee’s outputs, they identified gaps in Financial Finesse’s coverage. Human experts wrote articles to fill those gaps and further inform the AI’s responses, in what Jongsma calls a “very powerful cycle of humans and AI working together.”
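Financial Finesse has not published Aimee’s internals, but the pattern Jongsma describes—constraining a conversational model to a library of vetted articles—resembles retrieval-augmented generation. The sketch below is an illustrative guess at that pattern, not the company’s code: the `ARTICLES` entries and helper names are invented, and the retrieval scores double as a signal of where coverage gaps remain.

```python
import math
from collections import Counter

# Stand-in for a vetted knowledge base; Financial Finesse's real one holds
# thousands of coach-written articles.
ARTICLES = {
    "emergency-funds": "An emergency fund should cover three to six months of essential expenses.",
    "401k-basics": "A 401(k) is an employer-sponsored retirement plan funded through payroll deferrals.",
}

def bag_of_words(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Similarity between two word-count vectors; 0.0 means no overlap."""
    dot = sum(count * b[word] for word, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 2) -> list[tuple[float, str, str]]:
    """Rank vetted articles by similarity to the question; low top scores flag coverage gaps."""
    q = bag_of_words(question)
    scored = sorted(
        ((cosine(q, bag_of_words(text)), name, text) for name, text in ARTICLES.items()),
        reverse=True,
    )
    return scored[:k]

def grounded_prompt(question: str) -> str:
    """Build a prompt that tells the model to answer only from vetted material."""
    context = "\n\n".join(text for _, _, text in retrieve(question))
    return ("Answer using ONLY the vetted articles below. If they do not contain "
            f"the answer, say you do not know.\n\n{context}\n\nQuestion: {question}")
```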
Even well-trained AI has the potential to hallucinate, or present false or misleading information. Jongsma says human perspective is necessary, not just to verify AI output, but to recognize when the program is being asked something “slightly on the fringes or on the outside of the knowledge of the question.”
“AI feels compelled to answer anyways, with confidence,” Jongsma says. “[When] it doesn’t have that information, it interprets other information to be that information. So often, that’s how you see hallucinations occurring: … when AI comes up with something that is just slightly out of context.”
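One generic way to act on that observation—again a sketch, not Financial Finesse’s method—is to refuse or escalate whenever the best retrieval score falls below a floor, treating a weak match as a sign the question sits “slightly out of context” for the vetted knowledge. This reuses the hypothetical `retrieve` and `grounded_prompt` helpers from the earlier sketch.

```python
SIMILARITY_FLOOR = 0.2  # illustrative value; a real system would tune this on test questions

def answer_or_escalate(question: str) -> str:
    """Escalate to a human when the question falls outside the vetted knowledge base."""
    best_score, _, _ = retrieve(question, k=1)[0]
    if best_score < SIMILARITY_FLOOR:
        return "That is outside our vetted material; routing you to a human coach."
    return grounded_prompt(question)  # safe to hand to the model with grounded context
```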
Addressing Concerns
Jongsma says staffers at his company openly discussed the possibility of AI replacing human jobs and reflected on how the technology could expand human creativity rather than simply displace human work.
“We have a culture where people realize that AI is happening and … they have a role in that, because we have the ‘human in the loop,’” Jongsma says. “They are capable, with AI, to be more productive and, therefore, we can help more people change [their] financial lives.”
Ethical concerns are not a one-and-done conversation. Given AI’s rapid progression, Ulloa said, workplaces need to continuously review the technology’s capabilities and the ways in which employees use it.
“Talk to your employees. … There may be cases where you’re not even sure how your employees are using it,” Ulloa said. “Training is No. 1. The idea of doing your compliance training once a year—that’s out the door now, because the technology and its uses are changing so rapidly.”