The PLANADVISER Interview: Tina Anstett, Senior ERISA Counsel, Smart

A senior counsel who consults on ERISA fiduciary issues and IRS and DOL audits discusses the state of artificial intelligence regulation in the retirement plan business.

Artificial intelligence is moving ahead at a speed unanticipated just a few years ago. Alight Inc. and Voya Financial have developed AI chatbots to respond to plan participants’ queries. RiXtrema Inc. recently launched its 401kAI service to assist advisers with plan research and marketing.

It’s difficult to predict which additional areas of the retirement plan business will see AI adoption. There’s no question, however, that regulators will be watching. In recent commentary, Securities and Exchange Commission Chair Gary Gensler said that AI can create or aggravate conflicts of interest. Given the highly regulated nature of retirement plans, a key question is: How are regulators likely to deal with AI?

Tina Anstett is the Nashville, Tennessee-based senior ERISA counsel for Smart Pension Ltd., a global retirement technology provider. She has 28 years of ERISA retirement plan industry experience, having formerly served in legal and regulatory roles at Equitable, AXA and USI Consulting Group. She consults on ERISA fiduciary issues and plan governance, Internal Revenue Service and Department of Labor audits, and ongoing compliance with federal laws and regulations.

PLANADVISER: How would you characterize the state of regulation in the retirement plan industry regarding AI: Is the technology running ahead of the regulation in areas like ERISA compliance?

ANSTETT: I would characterize the state of AI regulation in the retirement plan industry as “too soon to tell.” For example, there are some reports that the SEC is expected to release conflict-of-interest-in-technology rules in October that could apply to financial professionals’ use of AI. On the other hand, from the qualified plan/ERISA perspective, the current IRS Priority Guidance Plan and DOL Regulatory Agenda do not contain any AI references. What is clear is that this rapidly developing technology is being leveraged across the retirement plan industry to assist with business processes, investment advice and management, as well as participant servicing and compliance with existing regulations and requirements.

Plan sponsors, service providers and financial professionals remain ultimately responsible for compliance with applicable Internal Revenue Code, ERISA and financial service industry regulations, regardless of the extent to which they leverage AI and other technology for increased efficiency and scalability. This reality necessitates careful scrutiny and risk assessment on the part of any retirement plan sponsor, fiduciary or service provider before deciding whether and how to leverage AI for greater plan and/or business benefit.

PLANADVISER: Firms are using AI both externally, as well as internally, for their own operational use. Are there any regulatory concerns emerging over AI in internal business processes, including participant data tracking?

ANSTETT: Use of AI in these areas has the potential to increase efficiency. But without appropriate controls, [AI use] may compromise compliance activities that rely on accurate participant data, such as nondiscrimination testing and reporting. Human oversight in some capacity, to verify operations, data integrity and protection from cyber threats, is a key consideration.

PLANADVISER: Is the use of AI in participant-facing operations likely to receive regulatory attention?

ANSTETT: One area of concern involves AI-generated investment allocation suggestions provided to plan participants with limited or no human interaction. The uncertain ability to verify the accuracy of the information provided, together with the potential for inaccurate information in AI output (“hallucinations”), creates risk that may warrant regulatory attention to protect participants from losses due to resulting misallocation. Likewise, chatbots used in participant enrollment and other servicing may generate incorrect or inaccurate output that causes participants to lose benefits or make inappropriate decisions.

PLANADVISER: What other areas regarding AI might see regulation?

ANSTETT: Cybersecurity vulnerabilities based on client data collected by AI; accuracy of information created by tools like ChatGPT; concerns about the independence of the AI-generated advice and recommendations of advisers; and the risk of implicit and explicit biases of AI creators all create regulatory concern.

PLANADVISER: Do you believe we’ll see increased regulation of AI in the retirement plan business in the near term? If so, how might that regulation develop?

ANSTETT: Before we see any increased regulatory activity, regulators will very likely take a “wait and see” approach, possibly using increased enforcement to uncover areas that may benefit from greater regulation or guidance. Cybersecurity offers a past example: Based on DOL audit activity and private litigation, cybersecurity concerns in connection with retirement plans prompted the DOL in April 2021 to issue its “Cybersecurity Best Practices” information to assist plan sponsors, plan service providers and plan participants. During this time, the DOL also added extensive cybersecurity inquiries to its investigative process. Similar to the detailed cybersecurity due diligence plan sponsors must conduct before engaging service providers, inquiries regarding the use of AI in the provision of plan services may very well become commonplace within service provider due diligence processes.

Plan fiduciaries are ultimately responsible for ensuring plans are operated in accordance with statutory and regulatory requirements for the exclusive benefit of plan participants and beneficiaries. Hiring service providers is part of that fiduciary responsibility and will require fiduciaries to understand the benefits, risks and safeguards when choosing to leverage AI, as well as when selecting service providers who use AI in the delivery of plan and participant services. Failure to do so may result in a breach of ERISA’s duty of prudence.
