Gensler: SEC Will Continue Enforcement Against Crypto Companies


Securities and Exchange Commission Chair Gary Gensler testified that cryptocurrencies are a “highly speculative asset class” and that the crypto industry as a whole is “rife with fraud and scams and hucksters” at a hearing hosted by the Financial Services and General Government Subcommittee of the Senate Committee on Appropriations on Wednesday.

In his testimony, Gensler said, “The whole crypto field is built on models we wouldn’t allow in traditional securities markets,” and, “The whole field has hurt more Americans than it should.”


Senator Dick Durbin, D-Illinois, asked Gensler if it is true that one in five Americans invest in crypto. Gensler responded by saying that the SEC has not surveyed this issue, but that he was aware of other surveys to that effect. When Durbin asked if Gensler was aware that about $9 billion was lost in cryptocurrency scams last year, Gensler answered that it “could be larger, sir.”

After Durbin remarked that crypto assets have “no underlying value,” Gensler responded that the real issue for the SEC is that investors “are not getting the proper disclosure to make their investment choices” and that crypto firms “are bundling and commingling services that we would never allow.”

Senator Bill Hagerty, R-Tennessee, expressed concern that the SEC’s approach of regulating through enforcement is less effective than rulemaking, in part because enforcement might push cryptocurrency companies, existing and future, to other countries.

“I would strongly encourage, rather than regulating via enforcement, to think through the ruleset that would create clarity here in the marketplace,” Hagerty said, repeating a common criticism of the SEC’s approach.

Gensler responded by saying that many crypto firms are “built on a business model of catch-us-if-you-can” and “a model of preying upon the investing public’s desire for a better life and future.” In prior remarks, the SEC chair has typically argued that new rules are unnecessary because rules already on the books are sufficient to address fraud and the commingling of functions.

Gensler has repeatedly made public remarks about the importance of bringing the cryptocurrency industry into compliance, but he was particularly forceful in Wednesday’s testimony and has not previously used terms such as “hucksters” to describe cryptocurrency dealers.

Gensler’s remarks came in the same week the SEC announced it has accepted six applications for review of Bitcoin-based ETFs from managers including BlackRock Inc., Fidelity Investments and VanEck. Gensler acknowledged during the hearing that Bitcoin, unlike nearly every other cryptocurrency, is most likely not a security because it is not an investment contract.

The PLANADVISER Interview: Tina Anstett, Senior ERISA Counsel, Smart

A senior counsel who consults on ERISA fiduciary issues and IRS and DOL audits discusses the state of artificial intelligence regulation in the retirement plan business.


Artificial intelligence is moving ahead at a speed unanticipated just a few years ago. Alight Inc. and Voya Financial have developed AI chatbots to respond to plan participants’ queries. RiXtrema Inc. recently launched its 401kAI service to assist advisers with plan research and marketing.

It’s difficult to predict which additional areas of the retirement plan business will see AI adoption. There’s no question, however, that regulators will be watching. In recent commentary, Securities and Exchange Commission Chair Gary Gensler said that AI can create or aggravate conflicts of interest. Given the highly regulated nature of retirement plans, a key question is: How are regulators likely to deal with AI?


Tina Anstett is the Nashville, Tennessee-based senior ERISA counsel for Smart Pension Ltd., a global retirement technology provider. She has 28 years of ERISA retirement plan industry experience, having formerly served in legal and regulatory roles at Equitable, AXA and USI Consulting Group. She consults on ERISA fiduciary issues and plan governance, Internal Revenue Service and Department of Labor audits, and ongoing compliance with federal laws and regulations.

PLANADVISER: How would you characterize the state of regulation in the retirement plan industry regarding AI: Is the technology running ahead of the regulation in areas like ERISA compliance?

ANSTETT: I would characterize the state of AI regulation in the retirement plan industry as “too soon to tell.” For example, there are some reports that the SEC is expected to release conflict-of-interest-in-technology rules in October that could apply to financial professional use of AI. On the other hand, from the qualified plan/ERISA perspective, the current IRS Priority Guidance Plan and DOL Regulatory Agenda do not contain any AI references. What is clear is that this rapidly developing technology is being leveraged across the retirement plan industry to assist with business processes, investment advice and management, as well as participant servicing and compliance with existing regulations and requirements.

Plan sponsors, service providers and financial professionals remain ultimately responsible for compliance with applicable Internal Revenue Code, ERISA and financial service industry regulations, regardless of the extent to which they leverage AI and other technology for increased efficiency and scalability. This reality necessitates careful scrutiny and risk assessment on the part of any retirement plan sponsor, fiduciary or service provider before deciding whether and how to leverage AI for greater plan and/or business benefit.

PLANADVISER: Firms are using AI both externally and internally for their own operations. Are there any regulatory concerns emerging over the use of AI in internal business processes, including participant data tracking?

ANSTETT: Use of AI in these areas has the potential to increase efficiency. But without appropriate controls, [AI use] may compromise compliance activities that rely on accurate participant data (non-discrimination testing, reporting). Human oversight in some capacity, to verify operation and data integrity as well as protection from cyber threats, is a key consideration.

PLANADVISER: Is the use of AI in participant-facing operations likely to receive regulatory attention?

ANSTETT: One area of concern is AI-generated investment allocation suggestions provided to plan participants with limited or no human interaction. The uncertain ability to verify the accuracy of the information provided, together with the potential for inaccurate information in AI output (“hallucinations”), creates risk that may warrant regulatory attention to protect participants from losses due to resulting misallocation. The use of chatbots in participant enrollment and other servicing may generate incorrect or inaccurate AI output that could cause participants to lose benefits or make inappropriate decisions.

PLANADVISER: What other areas regarding AI might see regulation?

ANSTETT: Cybersecurity vulnerabilities based on client data collected by AI; accuracy of information created by tools like ChatGPT; concerns about the independence of the AI-generated advice and recommendations of advisers; and the risk of implicit and explicit biases of AI creators all create regulatory concern.

PLANADVISER: Do you believe we’ll see increased regulation of AI in the retirement plan business in the near term? If so, how might that regulation develop?

ANSTETT: Regulators may very likely take a “wait and see” approach before increasing regulatory activity, possibly using enforcement to uncover areas that may benefit from greater regulation or guidance. To use cybersecurity as a past example, based on DOL audit activity and private litigation, cybersecurity concerns in connection with retirement plans prompted the DOL in April 2021 to issue its “Cybersecurity Best Practices” information to assist plan sponsors, plan service providers and plan participants. During this time, the DOL added extensive cybersecurity inquiries to its investigative process. Similar to the detailed cybersecurity due diligence plan sponsors must conduct before engaging service providers, inquiries regarding the use of AI in the provision of plan services may very well become commonplace within service provider due diligence processes.

Plan fiduciaries are the parties ultimately responsible for ensuring plans are operated in accordance with statutory and regulatory requirements for the exclusive benefit of plan participants and beneficiaries. Hiring service providers is part of that fiduciary responsibility and will require fiduciaries to understand the benefits, risks and safeguards when choosing to leverage AI, as well as when selecting service providers who use AI in the delivery of plan and participant services. Failure to do so may result in a breach of ERISA’s duty of prudence.
