SEC’s Proposed AI Rule: Toward a new baseline against “business bias” and fraud, and a potential missed opportunity on cybersecurity

Cristin Flynn Goodwin, Founder and Managing Partner, and Pamela Rubio, Legal Intern and Student, University of Washington School of Law

“Fraud is fraud, and bad actors have a new tool, AI, to exploit the public. So what happens when you combine AI, finance, and the law of fraud?” – SEC Chair Gary Gensler[1]

Artificial intelligence is being considered, tested, and deployed across the financial services sector, and the Securities and Exchange Commission (SEC) has taken notice. The SEC’s new proposed rule on artificial intelligence reflects its concern that investment firms may use predictive data analytics (PDA) to optimize – intentionally or unintentionally – for their own interests ahead of investors’ interests. The proposed rule, Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (the AI Proposed Rule), acknowledges that some conflicts of interest are inherent to the relationship between firms and investors and attempts to curtail the potentially harmful impact of biased investing,[2] particularly in the wake of meme-stock growth and the gamification of retail investment.[3]

The Proposed Rule, if enacted, would require a firm to eliminate or neutralize conflicts of interest arising from its use of covered technologies in investor interactions. It’s easy to see the SEC’s concern – an unscrupulous firm or actor could prioritize investment recommendations in which the firm has an interest without the customer ever knowing, or could steer individuals into high-risk investments. The SEC focuses heavily on the concept of a “conflict of interest” and on how the firm addresses, minimizes, and discloses those conflicts.

The AI Proposed Rule was released on July 26, 2023 – the same date the SEC released its final Cybersecurity Rule – and its public comment period closed in October 2023. Comments on the Proposed Rule were generally concerned with the broad scope and definitions laid out by the SEC, warning of the risks of overbroad and unduly burdensome regulation and its impact on the market. As the SEC works through the filed comments and prepares the final rule, a few important considerations emerge.

First, it’s clear that the SEC is looking beyond AI to capture PDA-like technologies, including machine learning, deep learning algorithms, neural networks, natural language processing, and large language models, as well as other technologies, in its definition of “covered technology”. While it’s impossible to future-proof a definition, it’s clear that, so far, the SEC is contemplating a broad swath of technologies that could be used against investors’ interests.

Second, the proposal is intentionally broad and technology neutral, as the SEC does not want to limit itself or any future rules as AI and its use cases continue to evolve. Unsurprisingly, that breadth is drawing pushback from trade groups, who warn of a “chilling effect on business technology.”[4] In a letter to the SEC, trade groups stated their concern that the proposal demonstrates a fundamental misunderstanding of how firms rely on technology in a myriad of ways to benefit investors, both directly (e.g., by amplifying reporting speed and capabilities) and indirectly (e.g., by allowing investment advisers and broker-dealers to enjoy efficiencies and thereby reduce costs).[5]

The SEC appears to fear that the technology will be so convincing that investors will not know they are being “steered” or manipulated by a machine that reflects a bias or interest favoring the investment adviser. It would be akin to walking into a casino where every slot machine and game is rigged so that the “House never loses” – except that the investor has no way of knowing. In the SEC’s view, the risk of that power being used against unknowing investors is simply too great.

Third, while it’s not surprising that the SEC is struggling with how to protect investors from harm as the financial sector and the world learn how to use (and abuse) AI, it is surprising that the security of AI is not addressed at this time. In the Proposed Rule, the agency requested comment on all aspects of the proposed definition of conflict of interest, asking, among other things: “should firms be required to consider whether the data is sensitive data that could be subject to cybersecurity or privacy rules?” This suggests that the SEC will continue to watch this space, and that amendments and additional rulemaking should be anticipated as the Commissioners observe how the market adopts AI, how investors seek to leverage it, and how attackers seek to take advantage of both sides for their own gain. The SEC may argue that attacks against AI are covered by its recent Final Rule on cybersecurity. But given the nascent state of AI in the financial markets, encouraging companies to consider how to identify external security issues or abuse targeting AI in financial services – and where to report them – may be important as the SEC begins to consider how these covered technologies can be misused against unsuspecting investors.

Only time will tell.

[1] https://www.sec.gov/news/speech/gensler-ai-021324

[2] SEC, “SEC Proposes New Requirements to Address Risks to Investors From Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers” (press release); Proposed Rule: Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers, sec.gov

[3] “SEC proposes AI crackdown for Wall Street firms,” The Washington Post; “What Are Meme Stocks, and Are They Real Investments?”, investopedia.com

[4] https://www.law360.com/articles/1711467/biz-groups-want-more-time-to-mull-sec-s-chilling-ai-rule

[5] https://assets.law360news.com/1711000/1711467/s71223-245299-541662.pdf