Thursday, January 11, 2024

Tech advisory committee begins to mull impact of AI

By Mark S. Nelson, J.D.

The CFTC’s Technology Advisory Committee (TAC) met this week to consider multiple emerging issues, including how commodities regulators should address the impact of artificial intelligence (AI) on commodities markets. TAC sponsor CFTC Commissioner Christy Goldsmith Romero called for greater scrutiny of AI in commodities markets and suggested the need for new CFTC enforcement authorities akin to those recently proposed in legislation that would enhance SEC penalties for securities violations based on deep fakes. Although the TAC conducted other business at the meeting, the several segments on AI regulation took center stage.

In opening remarks to the TAC, Goldsmith Romero said there must be accountability for humans and organizations that design and deploy AI tools. She also said transparency is key to detecting potential negative outcomes before they can result in harm.

Goldsmith Romero also asked whether the TAC should recommend to the Commission that entities regulated by the CFTC be subject to a set of best practices, perhaps similar to those contained in the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework.

But Goldsmith Romero further hinted at the need for regulators like the CFTC to consider how existing authorities may address AI. “The potential impact of generative AI on financial markets cannot be fully known,” said Goldsmith Romero. “But that does not mean that regulators cannot start to consider guardrails to ensure that AI innovation is responsible.”

To this end, Goldsmith Romero suggested that the CFTC may need additional authorities akin to those proposed in recent legislation for the SEC. In a bill introduced at the end of 2023, Sens. Mark Warner (D-Va.) and John Kennedy (R-La.) proposed to give the SEC authority to impose treble penalties on individuals and entities whose violations of the federal securities laws are predicated on deep fakes. The legislation also would establish a presumed mental state for these violations (e.g., scienter or negligence) that a deployer of AI tools could avoid only by maintaining written policies and procedures reasonably designed to prevent securities violations.

In the day’s first AI presentation, Elizabeth Kelly, Special Assistant to the President for Economic Policy, White House National Economic Council, spoke to the TAC about the outlines of the Biden Administration’s Executive Order on AI. According to Kelly, the overarching goal of the EO is to foster the benefits of AI while mitigating its risks. Kelly said AI can offer benefits related to drug discovery, climate change, loan underwriting, auto safety, and medical records. On the other side of the ledger, AI can make existing risks worse, especially regarding discrimination. Other risks include lulling people into a false sense of security or complacency, the use of personal data without consent, threats to democratic forms of government, bio-terror threats, and the potential for surveillance that undermines privacy. Kelly suggested that bias and fraud detection are key AI topics for financial markets.

During questioning from TAC members, Kelly addressed several issues, including data access and who within the federal government should regulate AI. On the first topic, Goldsmith Romero asked how one ensures access to data. Kelly replied that the topic divides into two parts. The first is consumers’ right to protect their data; by extension, Kelly said, this exemplifies why Congress needs to enact the data privacy legislation that President Biden called for when he issued the Administration’s EO on AI. The second is ensuring that AI startups have access to data, because building large language models is costly and the Administration does not want a scenario in which a few companies crowd out others.

With respect to who should regulate AI, a TAC member asked about the prospect of a single federal AI or technology regulator. Kelly replied that this is a question for Congress and that the EO is focused on the federal government’s use of its existing tools.

In a second presentation, Michael Wellman, the Lynn A. Conway Professor of Computer Science & Engineering at the University of Michigan, discussed why the financial sector potentially holds the key to AI regulation, as well as the likely impact of the latest forms of AI.

According to Wellman, finance may hold the keys to initial attempts to regulate AI because of several characteristics of financial markets: finance is a crucial economic sector; financial markets can be fragile (i.e., they are interdependent and depend greatly on information, beliefs, and expectations); AI already has “infiltrated” financial markets; and financial markets have an existing regulatory infrastructure that may allow financial regulators to lead in AI regulation.

Wellman also noted that the latest versions of AI may allow for greater scope of action and autonomy than previous AI models. Later in his presentation, Wellman added that the current direction of AI development may lead to a scenario in which the ownership of information becomes concentrated (Goldsmith Romero had asked about this in a slightly different way during Kelly’s presentation, discussed above). Wellman noted that those who have the most information likely will have the best AI.

Wellman further discussed the role that intent plays in the regulation of financial markets, especially regarding market manipulation and spoofing, both of which have legal intent elements. He said intent is difficult to establish when humans are the accused manipulators and spoofers, but he raised the question of how intent would be established when a computer makes decisions about trading. Wellman suggested a future state in which regulators use detector technologies to monitor for and ferret out manipulation, but in which those detectors are opposed by evader technologies that seek to deceive them (i.e., “adversarial learning”). Wellman said this could beget an AI “arms race” that raises the possibility of “super manipulators.”
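For readers who want a concrete picture of that dynamic, the following is a minimal Python sketch of the detector-versus-evader loop Wellman described. It is purely illustrative: the order-cancel-ratio heuristic, the thresholds, and the adaptation rules are assumptions invented for this example, not any regulator’s or researcher’s actual surveillance method. In each round, the evader learns to stay just under the detector’s threshold, and the detector tightens in response.

    # Hypothetical toy model of the detector-versus-evader "arms race";
    # all heuristics and numbers are invented for illustration only.

    def detector_flags(cancel_ratio: float, threshold: float) -> bool:
        # Detector: flag a trader whose order-cancel ratio exceeds a threshold.
        return cancel_ratio > threshold

    def evader_adapt(cancel_ratio: float, threshold: float) -> float:
        # Evader: learn to cancel just under whatever the detector tolerates.
        return min(cancel_ratio, threshold * 0.95)

    threshold = 0.90       # detector's initial tolerance for cancelled orders
    spoofer_ratio = 0.98   # a naive spoofer cancels nearly every order

    for round_num in range(5):
        flagged = detector_flags(spoofer_ratio, threshold)
        print(f"round {round_num}: ratio={spoofer_ratio:.3f} "
              f"threshold={threshold:.3f} flagged={flagged}")
        if flagged:
            spoofer_ratio = evader_adapt(spoofer_ratio, threshold)
        else:
            threshold *= 0.97  # detector tightens; the arms race continues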

Near the end of his presentation, Wellman noted that the Warner-Kennedy bill, which proposes to give the SEC the ability to impose treble penalties on those who would use AI to violate federal securities laws, could help with the intent element. As mentioned above, the bill also would specify a presumed mental state for such securities violations; that provision, at least in the federal securities law context, could reduce the possibility that manipulative AI-generated trading would fall into a legal loophole.