By Mark S. Nelson, J.D.
Speaking at “AI: Balancing Innovation & Regulation,” a livestreamed event hosted by The Messenger, the nascent news venture of JAF Communications, Inc., SEC Chair Gary Gensler took questions from moderator Dawn Kopecki, Deputy Business & Finance Editor at The Messenger, and then from the audience during a brief Q&A session. The discussion focused on what the SEC may be able to do to reduce the impact of bias and fraud that could arise in the marketplace as artificial intelligence (AI) tools become more available to the general public and ownership of the models driving those tools becomes concentrated in only a few hands.
Moderator Kopecki opened by recalling a CNN headline that offered a laundry list of things that can go wrong with AI, such as AI outputs that are racist, sexist, or creepy, and the potential that AI could one day become self-aware or otherwise lead to a financial crisis. Gensler replied that he sees AI risks along two planes: micro-level risks, such as bias and conflicts of interest (which an SEC proposal on predictive analytics seeks to address), and macro-level risks related to the potential concentration of ownership of AI models.
With respect to macro-level risks, Gensler explained that a natural economic progression can lead to monocultures in which the AI data sets or base models that traders and underwriters rely on are held by only a few companies (he cited the FICO score as an example of a financial monoculture). The result could be market-wide dependence on a hard-to-explain AI model, and Gensler suggested that a herding effect around such a model could inadvertently lead markets over an economic cliff. He further explained that networked economies tend toward concentration, and that something akin to the consolidation of cloud storage services in a few large companies could happen in the AI setting.
Kopecki asked whether regulation can head off the bad effects of AI. Gensler responded that he was unsure whether those effects can be headed off, but that basic regulations like the SEC’s proposed rules for predictive analytics can help. Kopecki then asked who prosecutes wrongdoing if AI goes awry. Gensler said that AI, as we know it, still has humans in the loop who set the hyperparameters (according to a Google Cloud document, “Hyperparameters contain the data that govern the training process itself” and can fine-tune a model’s predictive accuracy). Gensler emphasized, as he has in other contexts, that “fraud is fraud,” and he suggested that the SEC is especially concerned about AI’s potential regarding “faking the markets.”
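To make the “humans in the loop” point concrete, the toy Python sketch below (an editorial illustration, not anything presented at the event, with hypothetical names and data) shows the distinction Gensler drew on: hyperparameters are fixed by a human before training begins, while the model’s parameters are learned from data.

```python
# Hyperparameters: chosen by a human *before* training; they govern the
# training process itself rather than being learned from the data.
LEARNING_RATE = 0.01   # how aggressively the model updates itself
EPOCHS = 200           # how many passes to make over the data

# Toy data following y = 3x, which the model must discover on its own.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

# Model parameter: learned from the data, not set by a human.
weight = 0.0

for _ in range(EPOCHS):
    for x, y in data:
        error = weight * x - y
        # Gradient-descent step; its size is dictated by the hyperparameter.
        weight -= LEARNING_RATE * error * x

print(f"learned weight: {weight:.3f}")  # approaches 3.0
```

Changing LEARNING_RATE or EPOCHS changes how (and whether) training converges, which is why those human-made choices remain a natural locus of accountability.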
During the brief Q&A, Gensler was asked a related question: could small companies that hype their AI products and services be teetering on the verge of making misrepresentations? Gensler emphatically said, “don’t do it.” He added that just as companies should not “greenwash,” they also should not “AI wash.” Gensler then repeated a familiar refrain: if someone offers and sells securities, they are subject to the federal securities laws.
Kopecki had ended the moderated portion of the discussion by asking Gensler if a working group was in the offing. Gensler said the Financial Stability Board (FSB) had AI on its 2024 agenda and that the SEC was looking to collaborate with other U.S. and global regulators on AI, but that he did not want to front-run anything that U.S. Treasury Secretary Janet Yellen, as head of the Financial Stability Oversight Council (FSOC), may be working on in the AI space.
Gensler earlier had indicated that the SEC’s next steps on AI would be to sort through the many public comments on its predictive analytics proposal.