Moving Toward AI, Both Ethically and Responsibly

As AI is integrated into financial services, the industry must grapple with how to govern its use and maintain oversight.

As use cases for artificial intelligence continue to expand across all fields, the financial services industry is grappling with how to embrace these technological advances ethically and responsibly. The stakes for financial services firms, however, may be higher than for those in many other industries.

“We have an AI application for employees where they can type in benefits questions and get answers,” says Liz Davidson, CEO of Financial Finesse. “Imagine the mistakes if it says that you have a match of 8%, but it’s not right. This is a big deal; there can’t be any tolerance for wrong answers.”

Beyond making sure that AI hallucinations never make it into client-facing content or scenario analyses, sources say several areas have emerged as key ethical considerations for plan advisers, including avoiding bias, maintaining transparency and prioritizing data privacy and security. Plan advisers and their clients who serve as fiduciaries must make sure that any tools they use, including AI, meet fiduciary standards.

“If AI tools are not governed responsibly, then the risk to the clients’ outcomes, privacy and trust can totally outweigh the benefits of using [them],” says Erika Wilson, chief marketing officer at Sound Income Group, whose work focuses on helping advisers optimize their AI efforts.

For now, Wilson says, the best way to use AI responsibly is to make sure a human is involved with all decisions, checking AI’s work and assumptions, and taking fiduciary considerations into account.

Davidson agrees, adding that Financial Finesse’s approach uses a closed system of vetted, unbiased content that is regularly reviewed and fact-checked by humans.

“We completely screen out the broader world, because if you can’t contain that piece of it, you run into a lot of risk,” she says.
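
Davidson’s closed-system approach maps to a familiar engineering pattern: the assistant may only return answers drawn from content humans have already vetted, and anything it cannot ground in that content is escalated to a person rather than improvised. The Python sketch below illustrates that general pattern only; the data, matching logic and names (VETTED_CONTENT, answer_benefits_question) are hypothetical and are not Financial Finesse’s implementation.

```python
# Minimal sketch of a closed-content assistant: it answers only from a
# human-vetted knowledge base and refuses anything it cannot ground there.
# All names and data here are hypothetical illustrations.

from difflib import SequenceMatcher

# Each entry has been written, reviewed and fact-checked by humans before it
# is ever shown to an employee.
VETTED_CONTENT = {
    "what is the 401(k) employer match": "Your plan matches 100% of the first 4% you contribute.",
    "when can i enroll in benefits": "Enrollment opens within 30 days of hire and each November.",
}

def answer_benefits_question(question: str, threshold: float = 0.6) -> str:
    """Return a vetted answer, or escalate to a human instead of guessing."""
    q = question.lower().strip("?! .")
    best_key, best_score = None, 0.0
    for key in VETTED_CONTENT:
        score = SequenceMatcher(None, q, key).ratio()
        if score > best_score:
            best_key, best_score = key, score
    if best_key is None or best_score < threshold:
        # Screen out everything outside the closed system rather than hallucinate.
        return "I can't answer that from approved content; routing you to a benefits specialist."
    return VETTED_CONTENT[best_key]

print(answer_benefits_question("What is the 401k employer match?"))
```

The key design choice is the fallback: when no approved entry matches with enough confidence, the tool refuses and routes to a human rather than generating an answer, which is how a closed system keeps a wrong match percentage from ever reaching an employee.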

A Cross-Functional Approach

Once firms have established the framework to govern their use of AI, they should conduct training to ensure that all employees are working from the same principles when making decisions about how to use AI.

“You have to really train people that it’s a tool, and it’s a tool that’s not 100% accurate, and you are responsible for whatever is sent out, as a result of either using a large language model or some internal AI,” Davidson says. “You need to have the same level of responsibility—maybe even more—that you would have for something that you developed yourself.”

Those decisions and the pre-approved tools and processes will likely look different for various functions at each company.

“I run the marketing in my company,” Wilson says. “The way that I utilize AI tools is totally different than the way that somebody in research or operations might use them.”

Taking a truly cross-functional approach to AI is a challenge for some financial services firms.

“To do it right, you really have to thread it through everything that you do, and big, complex organizations have a hard time with that because they’re usually organized in divisions that collaborate internally but have a harder time seeing the broader organization,” says Azish Filabi, managing director of the Maguire Center for Ethics at the American College of Financial Services, which recently held an industry roundtable, Cross-Functional Operationalization of Responsible AI in Financial Services.

A Guide for CFPs

One starting place for plan advisers looking to understand AI best practices might be the work that trade organizations have shared on the subject. In February, the CFP Board published “Generative AI Ethics Guide: A Checklist for Upholding the Code and Standards,” aimed at helping CFP professionals incorporate AI into their workflow. The guide detailed the ways a certified financial planner might use generative AI, including:

  • Gathering information about a client (e.g., taking and summarizing client meeting notes, aggregating documents);
  • Conducting initial research to assess strategies in the client’s best interest;
  • Improving the clarity/comprehensibility of communications to the client and other constituents;
  • Creating/refining public-facing content; and
  • Generating ideas for building/improving a successful brand.

“Used appropriately, Generative AI can offer significant benefits, enhancing efficiency and allowing CFP professionals to dedicate more time to aspects of their business that Generative AI cannot replace, such as improving relationships and providing personalized value to their clients,” the guide stated.

It includes a 23-point checklist to help CFPs confirm they have conducted due diligence on any AI vendor, verified the accuracy of its output and ensured that privacy protections exist for any sensitive client data or information.

Emphasizing Transparency

Financial professionals must also prioritize transparency when using AI, making sure that clients understand—and are comfortable with—the way advisers are employing AI tools when servicing their business. In addition, plan advisers need to be able to explain how and why AI tools made the recommendations they did.

“The biggest risk is just not doing your homework and not making sure that what you’re utilizing is secure, that the information is correct, and that you can back it up and be transparent if you use AI to generate something,” Wilson says.

AI continues to evolve quickly, so it is important for plan advisers to continuously monitor their own approach to using AI tools, updating or changing them as necessary to reflect technology advances, as well as client needs and market conditions.

“There’s this fear that people have with AI that it’s going to replace humans, but actually what I’m hearing from a lot of companies is that they need to invest in their people,” Filabi says. “They want to really get them to understand this technology in a more advanced and elevated way and evolve their culture to be able to do it right.”

Embracing AI to More Efficiently Process Retirement Plan Documents

Artificial intelligence seems to be everywhere, with seemingly endless applications. However, many in the retirement plan industry are still figuring out how to integrate it into their workflow.

In a conversation with PLANADVISER, ERISA attorney David Levine spoke about how he has used artificial intelligence to develop a tool for retirement plan administration and his predictions for what is likely to come in the future.

PLANADVISER: How do you see AI being integrated into the retirement plan and DC recordkeeping and investing space?

Levine: Where AI is most prominent in the retirement plan market right now is in the area of education and advice. Not a day goes by that I don’t see a new or existing solution promoting its use of “AI.” What I find most exciting is less the “we have AI” message, because, much like prior generations of computer innovation, it is rapidly becoming a core rather than unique function, and more how it is used for increased personalization for stakeholders in the retirement and investment industries.

PLANADVISER: What are the biggest misconceptions about AI in the retirement plan industry?

Levine: The first misconception is that AI “thinks.” It isn’t conscious. It takes skilled programmers and those with subject matter knowledge to maximize its value. The second misconception is that AI will replace everyone. It isn’t true. We’ve all become increasingly “cyborg-like” over the years. For example, when is the last time someone read a map? Most people just use a maps application on their phone. It doesn’t mean we still don’t need maps.

PLANADVISER: How is the current AI potential different from the computer-based models of the past that have been used to develop managed accounts or participant advice platforms?

Levine: Traditional computer models have been more linear in that ‘A’ leads to ‘B’ output. AI and large language model systems allow more granularity and shades of gray that can enhance personalization.

PLANADVISER: Can you tell us a little more about your product and how it leverages AI to solve a problem or inefficiency?

Levine: PlanPort is a web-based solution that is designed to address a key challenge in the retirement industry: reading, understanding and translating plan documents for a variety of purposes, including onboarding, mergers and acquisitions, and participant-level services. To read and understand a plan document can take a significant amount of time, and PlanPort helps cut the time involved to a fraction of historic manual practices so that human subject matter experts can maximize the value of their time.

Although there is a lot of talk about AI, at PlanPort, we use AI as one component of a complex, layered process that integrates detailed knowledge of how retirement plans and plan documents “work” with an AI system that provides a key step in translating these documents.
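
Levine’s description of AI as “one component of a complex, layered process” suggests a pipeline in which deterministic domain rules and human review surround the model step. The sketch below is a generic illustration of that layered pattern, not PlanPort’s code; the section patterns, the stubbed translate_section step and the Provision structure are all hypothetical assumptions.

```python
# Illustrative layered pipeline: domain rules carve a plan document into
# sections, a model-translation step sits in the middle, and a human expert
# reviews the result. Generic example only; all names are hypothetical.

import re
from dataclasses import dataclass

@dataclass
class Provision:
    name: str
    source_text: str
    plain_english: str
    needs_human_review: bool = True  # a subject matter expert signs off last

# Layer 1: domain knowledge of how plan documents are organized (deterministic, no AI).
SECTION_PATTERNS = {
    "eligibility": re.compile(r"ARTICLE\s+\w+\.?\s*ELIGIBILITY(.*?)(?=ARTICLE|\Z)", re.S | re.I),
    "vesting": re.compile(r"ARTICLE\s+\w+\.?\s*VESTING(.*?)(?=ARTICLE|\Z)", re.S | re.I),
}

# Layer 2: the AI step. In a real pipeline this would call a language model;
# here it is stubbed so the sketch stays self-contained and runnable.
def translate_section(section_text: str) -> str:
    return f"Plain-English summary of: {section_text.strip()[:80]}..."

# Layer 3: assemble output that a human expert reviews before anyone relies on it.
def extract_provisions(document: str) -> list[Provision]:
    provisions = []
    for name, pattern in SECTION_PATTERNS.items():
        match = pattern.search(document)
        if match:
            text = match.group(1)
            provisions.append(Provision(name, text, translate_section(text)))
    return provisions

sample = "ARTICLE II. ELIGIBILITY Employees may enter the plan after one year of service. ARTICLE III. VESTING ..."
for p in extract_provisions(sample):
    print(p.name, "->", p.plain_english)
```

In a production system, the translate_section layer is where the language model would do its work, and the needs_human_review flag marks where a subject matter expert’s sign-off fits before anything is relied upon.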

PLANADVISER: How do you train or maintain your AI systems to keep up with potential regulatory changes?

Levine: Because our system is a “universal translator” for plan documents, we focus on maintaining key plan provisions as part of our output. We do not train AI models on client documents, and they are not “the product” that could be sold or leak out between our business partners.

PLANADVISER: What areas of the retirement plan industry can expect the most change due to AI in the next two to five years?

Levine: Aside from the impact of PlanPort, which our team is confident about, education and advice are most likely to be impacted. The biggest area of regulation and restrictions is one of your prior questions—the training of models on client data. I already regularly see negotiations over the restrictions on the use of stakeholder data in training AI models, and I expect that to grow.

PLANADVISER: How will AI be used to augment or replace human work or interaction?

Levine: I see it as an augmentation tool. The debate for decades has been: Does technology replace or augment human work? AI is good at many things and will evolve, but it makes a great companion, not a replacement for human beings with their judgment, knowledge, expertise and social skills.

PLANADVISER: Are there any ethical or privacy issues you can envision in the application of AI in the retirement plan space? What should fiduciaries be aware of when considering tools that incorporate AI?

Levine: As noted above, in the privacy world, there has long been discussion about what happens to individual and corporate data when used by third parties, and legal contracting is already addressing that point. The more confidential the information, such as personally identifiable information, the more beneficial it can be for stakeholders to understand how information will be used, shared and monetized. However, courts have generally focused on privacy as a contract matter and not, despite some attempted claims, as a fiduciary breach matter, so this topic may be yet another part of the contract negotiation process.

David Levine, a lawyer by trade, is the founder of PlanPort and a principal in Groom Law Group, Chartered, the largest employee benefits law firm in the United States. No Groom or Groom client information or resources of any kind are used in PlanPort, and PlanPort does not provide legal advice of any kind.

Any opinions of the author do not necessarily reflect the stance of ISS STOXX or its affiliates.
