Keys to Guarding Retirement Plan Data Against Human Error

Mistakes managing employer-sponsored plan data often expose vulnerabilities that can be exploited by bad actors.

As the digital age evolves, so too do the risks that threaten the security of employer-sponsored retirement plans and their data. Human error within organizations poses a significant risk, as hackers are adept at taking advantage of these vulnerabilities. Understanding and mitigating these risks is therefore crucial for plan sponsors, recordkeepers and participants alike.

Frank Bitzer, national director of ERISA consulting at Marsh McLennan Agency, says that in the current economic climate, with discussions of recession in and out of the news, many companies are starting to cut back on their security budgets.


“They’re trying to watch their nickels and dimes,” he says. “Unfortunately, sometimes they will cut back expenses in IT, in security, software services and monitoring services. That’s where you will see human errors.”

Errors by service providers are not uncommon, and the quality of these providers can vary significantly, according to Bitzer. While some employees excel in their roles, others fall short, potentially failing to deliver on their promises to meet recommended security standards. This inconsistency can lead to vulnerabilities, as individuals may misunderstand their tasks, neglect to ask critical questions or overlook warning signs.

Chris Bellomo, EY’s Americas retirement income leader, notes that every individual handling retirement plan data introduces a potential risk.

“Human error situations often arise when there are lapses in controls during manual processes, insufficient identity access management … a lack of awareness about data risks, and widespread, unmanageable data distribution across the enterprise,” he says.

Bellomo urges plan advisers to be proactive in helping their plan sponsor clients prevent human error-related security incidents. This requires the design and establishment of a robust data governance and controls program with clearly defined access controls across all processes and reports.

Plan fiduciaries should also consider investing more in automating tasks that handle personal data and ensuring that all data is encrypted from end to end. Additionally, training and awareness about the appropriate use of data remain critical.
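For technically minded readers, the sketch below shows what "encrypted from end to end" can look like at the storage layer. It is a hypothetical illustration, using Python's open-source cryptography library and an invented record, not any recordkeeper's actual implementation:

    # A minimal sketch of encrypting participant data at rest, assuming the
    # third-party "cryptography" package (pip install cryptography). The record
    # below is hypothetical; the Fernet scheme stands in for whatever cipher a
    # provider actually uses.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # in practice, keep keys in a managed key vault, never in code
    cipher = Fernet(key)

    record = b"participant_id=1234; balance=800000"  # hypothetical PII record
    token = cipher.encrypt(record)          # ciphertext is safe to store or transmit
    assert cipher.decrypt(token) == record  # only key holders can recover the data

The specific library matters less than the practice it illustrates: personal data should be unreadable at every point where it is stored or in transit, so a single human slip exposes ciphertext rather than account details.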

Bad Actors

“We’ve seen cases of human error where money has been lost,” Bitzer says. “It’s been sent to the wrong account because somebody fat-fingered the account numbers or the transfer numbers in the system, but those can usually be resolved.”

Financial institutions can trace the error, contact the receiving party and work to correct it. Since these errors can happen at any institution, there is generally a spirit of cooperation in resolving them. Sometimes, moreover, it is an employee of the financial institution who made the initial mistake.

“Human error, in and of itself, usually isn’t going to cause somebody to steal your money,” Bitzer says. “That involves an intent by an acting person, and that’s your hacker on the other end. […] These hackers are very, very good. They know how to spot these vulnerabilities. Once they spot them, they will exploit them.”

He recounts a case from late 2023 in which a woman discovered that $800,000 had been drained from her individual retirement account. Arguing with her bank and trustee, she delayed contacting the FBI or IRS. By the time her service providers convinced her to involve the authorities, the money was already gone, and it has yet to be recovered.

Bitzer says part of the issue stemmed from human error, such as the woman’s lax maintenance of her online security and passwords. Additionally, the financial institution that held her IRA experienced a breach of its firewalls. Litigation is likely, if not already underway. The critical mistake, however, was the delay in involving the FBI, which hampered any chance of recovering the funds.

“FBI first, finger-pointing second,” Bitzer says.

Increased Threat

Abhishek Madhok, EY Americas’ insurance cybersecurity leader, notes that “exposure of personal identifiable information data to bad actors presents a heightened risk to the near- and in-retirement participants, as elder fraud has increased significantly due to larger average account balances.”

Fraudsters are now leveraging artificial intelligence to conduct more sophisticated fraud attempts, using PII that has been leaked or sold on the dark web, Madhok says. This technological advancement in cybercrime underscores the need for increased vigilance.

Jay Gepfert, a founding partner of the advisory consultancy Culpepper RFP, says more than 65% of data breaches stem from individuals unknowingly falling prey to hacking schemes. Once information is handed over, hackers can gain broader access to the individual’s data, and possibly to that of the plan sponsor or service provider.

Because most hacking incidents occur at recordkeepers and third-party administrators, which hold sensitive personal information, a breach of an individual account could create the opportunity for an unauthorized distribution, according to Gepfert.

Gepfert says many recordkeepers have guaranteed data security, but those guarantees assume participants log in to their accounts frequently and regularly change their login credentials; if that cadence lapses, the guarantee is voided.

“Plan sponsors, like any organization, have to remain diligent to educate and train the employees and participants on the common methods that bad actors are trying to gain access to retirement accounts and other personal accounts,” Gepfert says. “A piece of the DOL cyber guidance centers around the ongoing education to participants to keep them abreast of the things they can do to protect themselves.”

AI Is Here. Fiduciaries Must Remain Diligent

Experts discuss how the benefits of artificial intelligence for 401(k) plan administration and management also come with risks to be questioned and considered.

Artificial intelligence is playing an increasingly important role in employer-sponsored retirement plans, used by everyone from asset managers to recordkeepers to financial wellness providers. But with that evolution come risks, from bad inputs to cybersecurity concerns.

For plans governed by the Employee Retirement Income Security Act, it is important that the same processes and evaluations be in place for AI as for other plan design and investment decisions, according to Michael Abbott, a partner in Foley & Lardner LLP who works with ERISA plan fiduciary clients.


“We are still in an environment where going through the procedural prudence and process matters,” Abbott says. “Just relying on an AI-generated output is probably not going to get you where you need to be in terms of satisfying ERISA requirements.”

In a blog post concerning the use of AI by 401(k) fiduciary and investment committees, Abbott and colleague Aaron Tantleff, also a partner in Foley & Lardner, laid out a variety of ways AI is being used in financial services.

Those include personalizing messages to plan participants and prospective customers (Vanguard’s use of Persado); assisting financial advisers (Morgan Stanley’s Debrief); and automating investing with digital robo-advisers (Charles Schwab’s Intelligent Portfolios).

Tantleff, who focuses specifically on AI implementation in the financial sector, says it is important to know what data and information an AI system is using, so that any bias or errors in the material it produces can be accounted for.

“Are we using training data, validation data? What am I putting in here, and what is the purpose of it?” he asks. “I, as a human, can create a selection bias in terms of what is being put into the AI. … That is always a risk, so there must be controls to it.”

Tantleff notes that, unlike an algorithm created to run a set process, AI can go off in many different directions, producing results that can be hard to trace back to the source. The inputs, then, must be well understood, and checks and balances must exist on AI-produced or AI-backed material. He also says AI systems may come from a third-party provider; in those cases, it is important to ask questions that bring the provider into the conversation and get it to detail its process.

To that end, Abbott and Tantleff’s blog post lists 14 questions a plan committee can ask about the use of AI, ranging from how much AI is being used in 3(38) investment decisions to whether a recordkeeper gives a company or participant the option to opt out of an AI-driven offering.

Vast Amounts of Data

The post also includes a section warning of the cybersecurity risks AI can introduce, noting: “Vast amounts of sensitive participant data fuel these systems, making them prime targets for malicious actors seeking to exploit weaknesses in security protocols. A data breach or cyberattack could not only compromise the integrity of the retirement plan but also expose fiduciaries to legal and regulatory repercussions.”

Lisa Crossley, executive director and CEO of the National Society of Compliance Professionals, agrees that AI use for investing and financial services has to be as rigorously vetted as any other priority process.

“What happens if investment recommendations based on AI rely on factors that are incorrect?” she asks. “It has to have its own governance structure, its own compliance, its own risk assessment.”

In an annual cybersecurity benchmarking survey, the NSCP and the ACA Group collected responses from asset managers, investment advisers and private markets firms. Crossley says they were interested to find that 38% of respondents do not yet identify AI as a cybersecurity risk, while a larger share (49%) are considering using AI to combat cybersecurity concerns.

She notes that the organization is delving further into the topic of AI use and concerns among its audience and will discuss initial results in October at NSCP’s national conference. For people and organizations interested in compliance issues, Crossley says, considering AI’s uses and the attendant risks will clearly be a burgeoning area of study.

For now, she says, the society representing compliance professionals is advocating for humans to backstop AI-driven processes, with policies and procedures similar to those used to combat cybersecurity threats themselves.

“You have to have the same governance structures and protocols that you do for cybersecurity,” she says. “You can’t just trust the AI.”

Avoiding Biases

Foley & Lardner’s Abbott notes that, when operating as a plan fiduciary, one must be especially diligent in ensuring that an AI process is not introducing bias, both to protect the plan and its participants and to guard against potential lawsuits.

“I’m concerned about this in the ERISA space,” he says. “We have active plaintiffs’ bar [lawyers] who are looking for weak spots. … They may say, ‘How could you totally rely on [an AI process] for an outcome that you were a fiduciary on?’ We can’t just put a rubber stamp on it.”

As with other processes of plan design and management, understanding the starting point and documenting the steps along the way is the best form of protection, he says.

Some of the other questions for committees recommended by Foley & Lardner are:

  • What is the risk of misinformation or a biased output?
  • Who is liable if the AI’s advice leads to poor investment decisions?
  • How does one evaluate the quality and accuracy of the content it produces? If the AI generates investment advice or market analysis for a 401(k) plan, how do fiduciaries ensure the information is reliable and compliant with regulations?
  • Should the committee seek independent professional advice regarding what AI can provide as a resource to satisfy fiduciary obligations under ERISA?

“If I’m on a committee and I’m a plan fiduciary, I need to be asking these professionals that I’m working [with]: ‘How is AI figuring into what you are telling me?’” Abbott says. “I need to know how you came to do what you did and what went into it.”

Correction: This story adjusted a quote for accuracy.
