By Mark S. Nelson, J.D.

The Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law held its third public hearing on the potentially far-reaching implications of artificial intelligence (AI) for American society and the world, especially those arising from large language models (LLMs) such as OpenAI’s ChatGPT, a form of generative AI. European regulators have sprinted ahead of the U.S. in proposing rules for AI, but U.S. lawmakers may be seeking a somewhat different path to AI regulation than their global counterparts.
At present, members of Congress have proposed two frameworks, one comprehensive regulatory bill, and a smattering of more focused bills addressing a range of topics from elections to watermarks to national security. In addition to the
NIST AI Risk Management Framework, the White House has published a
Blueprint for an AI Bill of Rights, and has issued
voluntary AI guidelines, to which an additional eight AI firms
expressed their commitment shortly before the latest Senate subcommittee hearing on AI. The Senate subcommittee also is expected to hold private sessions with AI firms this week.
Legislative options to date. To put the Senate Judiciary Committee’s subcommittee’s work in context, it is helpful to review what U.S. lawmakers have proposed thus far regarding regulation of AI. According to the Subcommittee on Privacy, Technology, and the Law’s Chair, Sen. Richard Blumenthal (D-Conn), the subcommittee’s work is “complementary” to other work streams currently in progress, including that of a bipartisan working group led by Senate Majority Leader Chuck Schumer (D-NY).
Given that the Schumer group and the Subcommittee on Privacy, Technology, and the Law have both offered bipartisan AI regulatory frameworks, it makes sense to begin with a comparison of those frameworks. The
SAFE Innovation Framework, published by the Schumer group, seeks to ensure America’s national security, promote responsible development of AI systems, and preserve democracy in the face of the potential use of AI to manipulate democratic processes. The Schumer framework is short on details and does not appear on its surface to contemplate a singular AI regulatory agency.
By contrast, the
Bipartisan Framework for U.S. AI Act, published by Sen. Blumenthal and the Subcommittee on Privacy, Technology, and the Law’s Ranking Member Josh Hawley (R-Mo), does contemplate an independent oversight body that would administer a registration and licensing regime for the most powerful AI products. The Blumenthal-Hawley framework would provide for the oversight body to bring enforcement actions for violations of the law and would allow for private lawsuits. The Blumenthal-Hawley framework also would address national security, international competition, transparency, and the protection of children and consumers.
Moreover, the Blumenthal-Hawley framework would clarify that Section 230 of the Communications Decency Act, the statute that immunizes Internet and social media platforms from most lawsuits over third-party posts, would not apply in the context of AI. That has been a growing concern now that the Supreme Court essentially dodged the question by
remanding a recent case to the lower courts while expressing doubts about whether the plaintiffs could leverage Section 230 regarding video posts they said promoted terrorism, resulting in an attack that killed their loved ones (See also the
companion case). During oral argument in that case, Justice Gorsuch questioned whether Section 230 would apply to AI: “I mean, artificial intelligence generates poetry, it generates polemics today. That -- that would be content that goes beyond picking, choosing, analyzing, or digesting content. And that is not protected,” said Justice Gorsuch (See
oral argument transcript at p. 49). Another recent development, in which the Fifth Circuit
held that individuals and two states had standing to sue the federal government over the government’s attempts to police social media, could cast a pall over some aspects of AI regulation if that decision were to be upheld by the Supreme Court, assuming that the government decides to appeal. Senator Hawley’s separate bill, the No Section 230 Immunity for AI Act (
S. 1993), would deny Section 230 immunity for interactive computer services’ use of generative AI.
Other legislative options exist, including the
reintroduced Digital Platform Commission Act of 2023 (
S. 1671), sponsored by Sens. Michael Bennet (D-Colo) and Peter Welch (D-Vt), which would establish a commission to broadly regulate digital platforms and AI. Short of creating a new federal agency, several bills would target specific AI problems.
The Protect Elections from Deceptive AI Act (S. 2770), sponsored by Sen. Amy Klobuchar (D-Minn) and co-sponsored by Sens. Hawley, Chris Coons (D-Del), and Susan Collins (R-Maine), would bar the use of materially deceptive AI in federal elections and would provide a means for such content to be taken down and for affected candidates to seek damages in federal court.
During the Q&A with the subcommittee’s witnesses, Sen. John Kennedy (R-La) suggested yet another plausible response by lawmakers. According to Senator Kennedy, most senators think AI can make lives better if it does not make them worse first. He predicted that Congress was more likely to take baby steps toward regulation of AI rather than make a grand legislative bargain.
Act with “dispatch”—setting the stage for debate. The opening statements from lawmakers and the three witnesses, William Dally (Chief Scientist and Senior Vice President of Research, NVIDIA Corporation), Brad Smith (Vice Chair and President, Microsoft Corporation), and Woodrow Hartzog (Professor of Law at Boston University School of Law and a Fellow at the Cordell Institute for Policy in Medicine & Law at Washington University) suggest where each is coming from in their views of AI.
Senator Blumenthal noted the need to balance encouragement of new technologies with safeguards around trust and confidence in order to address the technology industry’s “deep appetite” for guardrails and its desire to use AI. Senator Blumenthal also signaled the pace at which he believes Congress must act to address the risks of AI, stating that lawmakers must act with “dispatch” that is “more than just deliberate speed.” Senator Blumenthal also struck a theme common among lawmakers from both major parties: “…If we let this horse get out of the barn, it will be even more difficult to contain than social media.”
Ranking Member Hawley said AI is both “exhilarating” and “horrifying” and suggested that lawmakers not make the same regulatory mistakes that were made during the early years of the Internet and social media.
Dally’s company makes a widely used, massively parallel graphics processing unit, or GPU (as compared to the serial central processing unit, or CPU), that was initially used in video gaming but has come to have other uses, ranging from building virtual currency mining rigs just a few years ago (miners raided game consoles for GPUs) to AI applications requiring ever-increasing amounts of processing power. Dally
testified that while AI has escaped its figurative bottle, he is confident that humans will always determine how much discretion to grant to AI applications. Dally also said that “frontier models,” those next-generation models that are much bigger than anything available today and which may enable artificial general intelligence that rivals or exceeds human capabilities, remain “science fiction.” Dally also said that no company or nation state currently has the ability to impose a “chokepoint” on AI development. According to Dally, AI regulation must be the product of a multilateral and multi-stakeholder approach.
Smith, whose company is a major investor in OpenAI, the creator of ChatGPT, characterized the Blumenthal-Hawley framework in both his
prepared remarks and in his oral testimony as “a strong and positive step towards effectively regulating AI.” For Smith, effective AI regulation would have multiple components: (1) making safety and security a priority; (2) creating an agency with oversight responsibilities to administer a licensing regime for high-risk use cases; (3) implementing a set of controls to ensure safety such as those announced by the White House and which a number of AI firms have voluntarily committed to follow; (4) prioritizing national security; and (5) taking steps to address how AI affects children and other legal rights. Smith also suggested that the federal government follow the lead of California and several other states that seek to study how AI can make government more efficient and more responsive to citizens (e.g., this month California Governor Gavin Newsom
signed an AI
executive order calling on state agencies to assess the state’s use of AI in government).
By way of further background, federal lawmakers have mulled legislation similar to the California executive order. For example, the AI Leadership To Enable Accountable Deployment (AI LEAD) Act (
S. 2293), sponsored by Sen. Gary Peters (D-Mich), would require the Director of the Office of Management and Budget to establish the Chief Artificial Intelligence Officers Council for purposes of coordinating federal agencies’ best practices for the use of AI.
As the one academic on the panel, Hartzog sought to debunk what he viewed as myths about AI regulation. Central to his
thesis is the notion that regulatory half measures may be necessary but are not sufficient to adequately control the deployment of AI. Hartzog was especially critical of regulators’ reliance on transparency (insufficient accountability), bias mitigation (“doomed to failure” because fair is not the same as safe and the powerful will still be able to dominate and discriminate), and the adoption of ethical principles (ultimately, there is no incentive for industry to leave money on the table for the good of society). For Hartzog, it will be critical for lawmakers to incorporate the design aspects of AI into any laws and regulations and not to be beguiled by the notion that the rise of AI is inevitable when tools can be adopted to ensure human control of AI.
Deep fakes. The bulk of the questioning from senators focused on how AI might be used in election ads and disinformation campaigns. Questions from Sen. Klobuchar emphasized the need for watermarks or other indicia that something was generated by AI, especially in the context of elections. Smith replied to this line of questioning by stating that provenance indicators and watermarks could label legitimate content as a means of addressing the problem of deep fakes. According to panelist Hartzog, bright-line rules are needed because procedural rules give only the veneer of protection; Hartzog would add abusive practices to the list of unfair or deceptive practices that should be the subject of AI regulation. Dally clarified that regulators would need both provenance indicators and watermarks to prevent deep fakes because provenance is the flip side of watermarks (i.e., one identifies the source of content, the other identifies content as having been created by AI).
Senator Klobuchar has
introduced the Require the Exposure of AI-Led Political Advertisements (REAL Political Advertisements) Act (
S. 1516), which would require that political ads state in a clear and conspicuous manner whether the ads include any images or video footage generated by AI. The House version of the bill (
H.R. 3044) is sponsored by Rep. Yvette Clarke (D-NY).
In related questioning, Sen. Mazie Hirono (D-Hawaii) asked what could be done about political influence campaigns and other efforts to spread misinformation and disinformation via AI. She gave the example of alleged foreign influence campaigns in the aftermath of the Lahaina wildfire disaster on the Hawaiian island of Maui. According to the senator, online information falsely told residents not to sign up for FEMA assistance.
Smith agreed with the senator that half measures in this sphere are not enough. According to Smith, AI should be used to detect such disinformation campaigns and Americans must stand up as a country to set red lines regardless of how much else we disagree about.
In still more questions about the right to know whether something was created by AI, Sen. Kennedy sought to break the question into two: one about the AI origins of a piece of content, and the other about its source. With respect to knowing that something was produced by AI, Dally agreed that the information should be disclosed. Smith answered in a somewhat more nuanced manner by distinguishing between a first draft of a document (no disclosure) and the final output with the human author’s own finishing touches (disclosure). Hartzog’s answer was even more qualified; he suggested that if there is a vulnerability to AI, then disclosure should be required.
On the question of whether the source of an AI-generated text should be disclosed, there was a seeming consensus among the three panelists that there may be good reasons to protect some anonymous speech. Dally said this was a harder question. Smith said he believed that generally one should say if AI created something and who owns it, but he also asked rhetorically whether disclosure would be appropriate in this context for something like the Federalist Papers, the combined unofficial explanation and marketing pamphlet known to have been authored by James Madison, John Jay, and Alexander Hamilton under the pen name Publius that sought to convince skeptics to support the 1787 Constitution under which Americans still live. Hartzog suggested that there may be times to protect anonymous speech and other times when such speech might not be protected.
Sen. Marsha Blackburn (R-Tenn) asked about Chinese influence and the potential for adapting the Society for Worldwide Interbank Financial Telecommunication (SWIFT) network, which provides secure financial messaging for banks and other financial institutions, to the AI context. Smith replied that the U.S. can use export controls to ensure that products and services are not used by foreign governments. Smith also suggested in both his oral testimony and in his prepared remarks, where he offers greater detail, that AI regulations could include the AI equivalent of “know-your-customer” regulations familiar to banks along with some AI-specific rules akin to a “know-your-cloud” requirement. That would require the developers of the most powerful AI systems to know who operates the cloud computing platform on which they deploy their AI systems and to use only licensed cloud infrastructure.
Both before and after Smith’s exchange with Sen. Blackburn, Senator Hawley pressed Smith and Microsoft on issues related to children and China. First, Sen. Hawley observed that the age to use Microsoft’s Bing Chat was only 13 years old. The senator asked Smith to commit to raising the age and adopting stronger verification procedures. Smith answered that the first rule was not to make any news without first talking to stakeholders at Microsoft. Smith agreed that children should use AI in a safe way and that other countries, such as South Korea, sought to create AI apps to teach math, coding, and English. Although he never gave any specific commitments to Sen. Hawley, Smith agreed to raise the issue of minimum age requirements at Microsoft.
Senator Hawley also raised questions about Microsoft’s business ties to China, including Microsoft serving as an “alma mater” for Chinese AI scientists. The senator suggested that Microsoft could decouple from China in the interests of American national security. Smith noted that Microsoft is a large company with many years in business and that it had trained persons who now work around the world. Smith also said he would prefer that an American company doing business in China would be able to use Microsoft tools rather than tools developed in a foreign country.
What to regulate? A common question at the first AI hearing held by the Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law was what regulatory threshold to employ to determine whether an AI firm, product, or service should be regulated. OpenAI’s Sam Altman suggested the possibility of using the amount of computing power as a threshold. At the third, and most recent, Senate AI hearing, a similar approach emerged based on whether an AI system is highly capable or less capable.
Senator Jon Ossoff (D-Ga) said that lawmakers must define in legislative text what would trigger regulation. In the AI context, the senator said that trigger could be the scope of the technology, a product, or a service.
For Smith, regulations would need to address three layers: (1) the model layer (e.g., frontier or foundational models); (2) the application layer where people directly interact with AI (enforce existing laws equally against those who deploy AI); and (3) the cloud layer, which is more powerful than the ubiquitous data center (licensing would be key).
Senator Ossoff also asked whether there is a level of power at which one should draw the line for deployment. Dally suggested balancing risks against innovation and then regulating things that pose a high risk if the AI system were to go awry.
All three panelists answered Sen. Ossoff’s question about how important international law will be to regulating AI. According to Hartzog, the U.S. could create AI regulation that is compatible with similar E.U. regulations. Smith said the U.S. will need international law but cautioned that such solutions were likely to come from like-minded governments rather than global regulations. Dally observed that AI models can be carried on large USB drives, suggesting that AI models cannot be contained within any one country.
That last observation about the portability of AI models was immediately preceded by a colloquy between Sen. Ossoff and Smith about the global proliferation of AI. The senator had asked what should be done about a less powerful AI model that nevertheless is capable of having a big impact. Smith acknowledged that this is a critical question and likened the problem to an AI system that can only do a few things well versus ChatGPT, which can do many things well. Smith said licensing of deployment would be needed, something he analogized to an aircraft: the building phase is not as regulated as the phase at which the aircraft is going to be flown. Smith said licensing depends on a combination of industry standards, national regulations, and international coordination.
Senator Blumenthal picked up on the proliferation issue in the closing moments of the hearing. The senator suggested the atomic energy and civil aviation models as worthy of consideration in the AI context. Both models depend on governments cooperating: in the one case by limiting access to nuclear materials, and in the other by adopting common protocols for routing aircraft across national borders.
Smith noted in this context that the U.S. leads in GPUs, the cloud, and foundation models and that export controls may be needed. Senator Blumenthal noted that the nuclear proliferation model uses a combination of export controls, sanctions, and safeguards to achieve safety and that the Biden Administration has used existing laws to ban the sale of some high-performance chips to China. Dally then suggested that because there is not a real chokepoint for AI, companies around the world can get AI chips (i.e., if they cannot get chips in the U.S., they will get them elsewhere). Dally also observed that software may be more important than chips.