Showing posts with label fintech.

Wednesday, May 22, 2024

CFTC keeps a wary eye on AI in financial markets

By Lene Powell, J.D.

While artificial intelligence offers dazzling new capabilities for financial market participants, AI also raises concerns about opaque “black box” technology and potential market volatility, fraud, and manipulation.

In a new Vital Briefing, Wolters Kluwer Senior Legal Analyst Lene Powell details CFTC actions to build an AI governance framework as it looks to address risks while avoiding heavy-handed regulation.

Read the full briefing and other securities news from Wolters Kluwer at VitalLaw.com.

Thursday, May 02, 2024

Groups press CFTC on risks of AI in financial markets, caution on overregulation

By Lene Powell, J.D.

Accelerated use of “black box” artificial intelligence in financial markets could cause oversight challenges and market volatility, consumer groups warned in a comment letter to the CFTC. AI could also increase “herding” behavior and cybersecurity risk due to market dominance by a small number of AI technology companies. Better Markets exhorted the CFTC to address AI risks with “strong, targeted rules, aggressive enforcement, and ample expertise and resources.”

But industry groups FIA, SIFMA, and the U.S. Chamber of Commerce urged the CFTC to exercise regulatory restraint, saying that financial markets have been using AI for some time and existing regulatory frameworks have successfully adapted to technological change in the past. The associations asked the CFTC to take a principles-based approach and engage further with market participants.

The feedback was in response to a CFTC request for comment on AI use and risks in CFTC-regulated markets. The CFTC is collecting information as part of a broader staff effort to monitor the adoption of AI in CFTC-regulated markets, including machine learning and other uses of automation.

Read the rest of the story and other securities news from Wolters Kluwer at VitalLaw.com.

Tuesday, April 30, 2024

CFTC Commissioner Johnson urges vigilance in addressing AI integration risks at FIA L&C Panel

By Elena Eyber, J.D.

CFTC Commissioner Kristin N. Johnson opened her remarks at the FIA L&C Panel by emphasizing the significance of ongoing dialogue about the impact of emerging technologies like generative AI on financial markets. While these innovations hold promise for advancements in various sectors, Johnson cautioned that integrating them without proper oversight carries risks that should not be overlooked.

Johnson highlighted three key interventions she has advocated for as a Commissioner. First, Johnson emphasized the need for heightened penalties for fraudulent or manipulative use of AI in financial markets, stressing the importance of adapting surveillance technologies to keep pace with market evolution. Second, Johnson called for a principles-based regulatory framework to address the increasing prevalence of AI, citing the CFTC's initiatives to understand AI's uses and potential risks. Third, Johnson proposed the establishment of an inter-agency task force to harmonize safeguards and ensure market stability and integrity.

Read the rest of the story and other securities news from Wolters Kluwer at VitalLaw.com.

Monday, April 08, 2024

Better Markets takes on detractors of SEC’s AI-focused predictive analytics proposals

By John Filar Atwood

Better Markets has written to the SEC to offer a defense of the agency’s predictive analytics proposals against critics who claim the proposals are unnecessary, flawed, and overbroad. The group said that the proposals are necessary to ensure the securities laws keep pace with innovations, especially the growing use of AI in the securities industry.

The rules proposed last July would require firms to identify and eliminate any conflicts of interest arising from the use of covered technologies, and to adopt appropriate policies, procedures, and recordkeeping measures. The foundation of the proposals is the increasing use of AI-based predictive analytics to direct individual investor behavior, and digital engagement practices like behavioral prompts and game-like features to engage retail investors when using a firm’s digital platforms for trading, advice, and financial education.

The proposals have generated some pushback from stakeholders, including a divided SEC Investor Advisory Committee which recommended in March that the Commission scale back the proposals by narrowing some proposed definitions and increasing the focus on disclosing conflicts of interest.

Better Markets sees three primary objections to the proposal: 1) the proposal is unnecessary because existing rules address conflicts of interest; 2) the proposal is flawed because it goes beyond requiring the disclosure of conflicts of interest; and 3) the proposal is overbroad because it covers even mundane uses of technology. The group believes these criticisms lack merit.

Unnecessary. According to Better Markets, the problem with the argument that the proposal is unnecessary because existing rules address conflicts of interest is that those rules cover only recommendations. Regulation Best Interest, for example, requires broker-dealers making recommendations to have a reasonable basis for believing that a series of recommended transactions is not excessive. Absent a recommendation, Reg. BI’s duties do not apply, the group noted. The proposed rules are therefore needed to eliminate the conflicts that arise when brokers use predictive analytics in ways that produce de facto recommendations and induce investors to engage in a series of transactions that are not in their own interest, the group stated.

Better Markets also argued that AI-based practices in the securities industry and elsewhere use behavioral psychology to entice users into frequent usage. The group cited lawsuits in other industries claiming that hidden algorithms are manipulating users to keep them hooked on the app they are using. Accordingly, the SEC’s proposed rules are needed to prevent broker-dealers from using predictive data analytics, digital engagement practices, and gamification to similarly turn retail investors into investing addicts.

Flawed. To the claim that the proposals are flawed because they should require only that conflicts of interest from the use of predictive data analytics be disclosed, Better Markets said there are many reasons why a disclosure-based regime is ill-suited to protect retail investors. To begin with, the group said, retail investors tend not to read disclosures and have less time, fewer resources, and less capacity to understand and use any disclosures than their sophisticated counterparts. Incremental increases in disclosure will not necessarily lead to better decision making, the group stated.

The group cited academic studies finding that simple disclosure obligations may not be sufficient for machine learning algorithms, and that when conflicts arise from firms’ use of technology in their interactions with investors, disclosure alone will not be an effective solution.

This is why Better Markets disagrees with the recommendation of the SEC’s Investor Advisory Committee to allow firms to disclose the existence of conflicts of interest with respect to their use of some technologies rather than eliminate those conflicts. The group agrees with another recent study that suggested this approach would leave investors exposed to predatory behavior on the part of firms who might draft or time their disclosures in a way that would cause investors to overlook or misunderstand them, particularly when the underlying product or service is complex.

Better Markets urged the SEC to retain the proposal’s requirement that firms eliminate or neutralize the conflicts of interest arising from their use of certain technology in their interactions with investors because disclosure alone is an insufficient tool to address them.

Overbroad. The group also urged the Commission not to be persuaded by claims that the proposal is overbroad because it applies to innumerable functions that are necessary to support day-to-day operations of broker-dealers and investment advisers. The group noted that the proposal clearly states that it applies only to “an analytical, technological, or computational function, algorithm, model, correlation matrix, or similar method or process that optimizes for, predicts, guides, forecasts, or directs investment-related behaviors or outcomes in an investor interaction.”

The proposal applies to technologies that have the potential to lead to conflicts of interest in investor interactions, according to Better Markets, and does not apply to a firm’s use of mundane technologies such as spreadsheets. The proposal’s concern is with the conflicts arising from the use of the specified technology, and there is nothing overly broad about requiring that firms use technological advancements in a way that does not prioritize their own interests over the interests of the investors, the group concluded.

Thursday, March 28, 2024

Rule amendments address investment advisers operating exclusively over the Internet

By R. Jason Howard, J.D.

The SEC has voted to modernize a 22-year-old rule, adopting amendments that govern when investment advisers who provide advisory services exclusively over the internet can register with the SEC.

In July 2023, the Commission voted 5-0 to issue the internet advisers proposal. At that time, SEC Chair Gary Gensler explained that in 2002 the SEC granted a narrow exception allowing internet-based advisers to register with the Commission instead of with the states. But much has changed in the intervening 21 years, and by 2023 the 2002 exemption had created gaps. The changes, according to a recent statement by the Chair, better reflect what it means in 2024 to provide an exclusively internet-based service and will “better align registration requirements with modern technology and help the Commission in the efficient and effective oversight of registered investment advisers.”

According to the SEC press release, the final rule will require an investment adviser who relies on the internet adviser exemption “to have at all times an operational interactive website through which the adviser provides digital investment advisory services on an ongoing basis to more than one client.” In addition, the amendments will eliminate the current rule’s “de minimis exception for non-internet clients, thus requiring an internet investment adviser to provide advice to all of its clients exclusively through an operational interactive website.”

In connection with the adoption of the final rule, the SEC has released a Fact Sheet which, among other things, explains that compliance with the rule, including the requirement to amend Form ADV to include a representation that the adviser is eligible to register with the Commission under the internet adviser exemption, must be done by March 31, 2025.

Advisers that are no longer eligible to rely on the amended exemption and that do not otherwise have a basis for registration with the Commission must register in one or more states and withdraw registration with the Commission by filing Form ADV-W by June 29, 2025.

Tuesday, March 12, 2024

Investor Advisory Committee recommends scaling down SEC’s predictive analytics proposal

By Lene Powell, J.D.

The SEC Investor Advisory Committee (IAC) adopted a recommendation for the SEC to scale back proposed rules on digital engagement practices. A majority of the committee recommended that the SEC narrow some proposed definitions and increase focus on disclosing conflicts of interest.

Some members voted against the IAC’s recommendation, believing that it unduly shifts the focus to disclosing rather than neutralizing conflicts of interest.

Proposed rules. The SEC’s predictive data analytics proposal, released last July, would require firms to identify and eliminate any conflicts of interest arising from the use of covered technologies, as well as implement policies and procedures and keep certain records.

According to the proposal, firms are increasingly using a type of artificial intelligence called predictive data analytics (PDA) to understand and direct individual investor behavior. Firms may also use digital engagement practices (DEPs) like behavioral prompts, differential marketing, game-like features, and other design features to engage retail investors when using a firm’s digital platforms for trading, roboadvice, and financial education.

These technologies can create conflicts of interest that place a firm’s interests ahead of investors’ interests, the SEC says. For example, firms could use PDA-like technologies to encourage investors to engage in activities like excessive or risky trading that are profitable for the firm but may increase investors’ costs, undermine performance, or expose investors to unnecessary risks.

The proposal has met intensely polarized feedback. Better Markets called the proposed rules “essential to protect investors,” while NASAA supported the proposal with some recommended changes. In contrast, major industry groups strongly criticized the proposal, including the U.S. Chamber of Commerce, Investment Company Institute, SIFMA, and Investment Adviser Association.

IAC recommendation. At a meeting on March 7, the IAC adopted a recommendation on the SEC’s proposal. Committee member Paul Roye, former SVP and senior counsel at Capital Research and Management Company, said the recommendation suggests these changes:
  • Narrow the scope of the definition of covered technologies to target the unique risk of predictive data analytics and artificial intelligence technologies;
  • Narrow the definition of investor interaction to include technologies that interact directly with investors or that aid in that interaction with investors;
  • Use the current definition of conflict of interest;
  • Use the existing framework to mitigate or eliminate conflicts of interest involving predictive data analytics and artificial intelligence technologies when disclosures are inadequate; and
  • Clarify the definition of what constitutes a recommendation under Regulation Best Interest.
Roye suggested the recommended changes would avoid unintended consequences and adverse effects on investors and not impede the adoption of new beneficial technologies.

Dissent. Two committee members said they could not support the IAC’s recommended changes.

SEC Investor Advocate Cristina Martin Firvida said if covered technologies were to be redefined to focus on the use of exceptionally complex and opaque technologies, then in her view firms should not be permitted to address conflicts of interest through disclosure alone. Rather, she supports requiring firms to eliminate conflicts or their effects when the conflicts are the result of covered technologies. This would build upon existing regulations and does not represent a dramatic departure from firms’ existing regulatory obligations, she said.

Leslie Van Buskirk, administrator of the Division of Securities, State of Wisconsin Department of Financial Institutions, said she “firmly opposes” elements of the recommendation. She outlined concerns including that the recommendation would undermine the primary benefit of the SEC's approach—that it would transition industry practices to addressing associated conflicts at the earliest opportunity.

Commissioner statements. SEC Chair Gary Gensler said that while the use of AI can promote greater financial inclusion and enhanced user experience, it can also raise conflicts of interest, which the proposed rules address. Gensler has previously said that under current rules, brokers and advisers cannot address conflicts of interest through disclosure alone.

Commissioner Hester Peirce questioned why the recommendation was changed from an earlier draft, particularly regarding the earlier version’s emphasis on disclosure versus mitigation of conflicts.

Thursday, February 29, 2024

As AI usage soars, academics, legal experts look for regulation blueprints

By Suzanne Cosgrove

In As AI Usage Soars, Academics, Legal Experts Look for Regulation Blueprints, Wolters Kluwer legal analyst Suzanne Cosgrove looks at the current state of U.S. regulation—or the lack of it—that addresses the benefits and perils of artificial intelligence applications. While major industries race to implement AI and worry about being left behind, a parallel race is taking place among academics and legal experts, who are drawing up proposals that promote basic standards and principles to guard against the technology’s misuse. Legislators and regulators seem to agree that the need for an innovative set of AI rules is critical—and urgent.

To read the entire article, click here.

Monday, January 22, 2024

After three-year fight, Robinhood agrees to pay $7.5M fine, change gamification and cyber practices

By John Filar Atwood

Robinhood Financial ended its three-year fight with Massachusetts over its gamification and cyber practices by agreeing to pay a $7.5 million fine and to change its digital engagement and cyber practices. While it neither admitted nor denied the gamification-related violations, Robinhood did admit to the facts surrounding its data breach and agreed to undergo an independent review of its cybersecurity policies.

The consent order settles charges brought in 2020 by the Massachusetts Secretary of the Commonwealth that Robinhood used gamification strategies to attract and influence customers. The practices to which the Massachusetts Secretary objected included the use of confetti animation, digital scratch tickets, free stock rewards, and other game-like features to pressure customers to interact with the Robinhood app. The broker also used push notifications and most-popular lists to encourage frequent trades, according to the Massachusetts Secretary.

Impact on inexperienced investors. The Massachusetts Secretary found that based on these practices over 200 Robinhood customers with no self-reported investment experience averaged at least five trades per day on Robinhood’s platform, and at least 25 customers with no self-reported investment experience made at least 15 trades per day. Some inexperienced customers averaged 58 to 92 trades per day, according to the consent order.

Robinhood discontinued many of its digital engagement practices after Massachusetts filed its complaint. The broker ceased use of the digital confetti feature, the digital scratch-off ticket to reveal free stock rewards, and the use of the waitlist tapping feature for its cash management product. Robinhood also stopped using certain push notifications, including those with links to the Top Movers list and 100 Most Popular list.

Fiduciary rule upheld. At the same time, however, Robinhood sued to block the action against it. In the Suffolk Superior Court and later on appeal to the Massachusetts Supreme Judicial Court, the Massachusetts Secretary’s authority to enforce the Massachusetts Fiduciary Rule was upheld. Rather than appeal that ruling, Robinhood agreed to settle.

Better Markets hailed the settlement as a major victory for investors because it holds Robinhood accountable for luring inexperienced investors into harmful trading activity. The group added that the settlement vindicates the power of the Massachusetts Fiduciary Rule to require brokers always to act in the best interest of investors, without regard to the brokers’ own financial gain.

Cyber issues. Along with the gamification practices, the consent order relates to cybersecurity issues identified by the Massachusetts Secretary after a November 2021 data security breach. An unauthorized third party accessed Robinhood’s customer information through a voice phishing scam that convinced an agent to run remote access software on a Robinhood-issued laptop. According to the consent order, Robinhood devices did not block the installation of the unauthorized software, and the broker had no procedures in place to enable the agent to quickly report the breach.

Robinhood agreed to review and report on the sufficiency of user access controls, the sufficiency of controls on users’ ability to download third-party software, and the sufficiency of controls on users’ ability to access and download bulk-files. It also agreed to review the process for employees to report data breaches and other similar events.

Thursday, January 11, 2024

Tech advisory committee begins to mull impact of AI

By Mark S. Nelson, J.D.

The CFTC’s Technology Advisory Committee (TAC) met this week to consider multiple emerging issues, including how commodities regulators should address the impact of artificial intelligence (AI) on commodities markets. TAC sponsor CFTC Commissioner Christy Goldsmith Romero called for greater scrutiny of AI in commodities markets and suggested the need for new CFTC enforcement authorities akin to those recently proposed in legislation that would enhance SEC penalties for securities violations based on deep fakes. Although the TAC conducted other business at the meeting, its several segments on AI regulation took center stage.

In opening remarks to the TAC, Goldsmith Romero said there must be accountability for humans and organizations that design and deploy AI tools. She also said transparency is key to detecting potential negative outcomes before they can result in harm.

Goldsmith Romero also posed the question whether the TAC should recommend to the Commission that entities regulated by the CFTC be subject to a set of best practices, perhaps similar to those contained in the National Institute of Standards and Technology's (NIST) AI Risk Management Framework.
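The NIST framework organizes AI risk management around four core functions: Govern, Map, Measure, and Manage. As a rough illustration of what firm-level best practices keyed to those functions might look like, consider the sketch below; the specific control items are hypothetical examples, not language from NIST or the CFTC:

    # Hypothetical checklist keyed to the NIST AI RMF's four core
    # functions; the control items are invented for illustration.
    RMF_CHECKLIST = {
        "govern": [
            "designate an accountable owner for each deployed model",
            "document each model's purpose and approved use cases",
        ],
        "map": [
            "inventory AI systems that touch order flow or customer data",
        ],
        "measure": [
            "track error rates and drift against a validation benchmark",
        ],
        "manage": [
            "define a kill switch and rollback procedure for live models",
        ],
    }

    def open_items(completed: set[str]) -> list[str]:
        """Return the controls that have not yet been marked complete."""
        return [
            item
            for controls in RMF_CHECKLIST.values()
            for item in controls
            if item not in completed
        ]

    print(open_items(completed={"document each model's purpose and approved use cases"}))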

But Goldsmith Romero further hinted at the need for regulators like the CFTC to consider how existing authorities may address AI. “The potential impact of generative AI on financial markets cannot be fully known,” said Goldsmith Romero. “But that does not mean that regulators cannot start to consider guardrails to ensure that AI innovation is responsible.”

To this end, Goldsmith Romero suggested that the CFTC may need some additional authorities akin to those proposed in recent legislation for the SEC. In a bill introduced at the end of 2023 and sponsored by Sens. Mark Warner (D-Va) and John Kennedy (R-La), the lawmakers proposed to give the SEC authority to impose treble penalties on individuals and entities that engage in violations of federal securities laws that are predicated on deep fakes. The legislation also would establish a presumed mental state for these violations (e.g., scienter or negligence) that could only be avoided if a deployer of AI tools had maintained written policies and procedures reasonably designed to prevent securities violations.

In the day’s first AI presentation, Elizabeth Kelly, Special Assistant to the President for Economic Policy, White House National Economic Council, spoke to the TAC about the outlines of the Biden Administration’s Executive Order on AI. According to Kelly, the overarching goal of the EO is to foster the benefits of AI while mitigating the associated risks of AI. Kelly said AI can offer benefits related to drug discovery, climate change, the underwriting of loans, auto safety, and medical records. But on the other side, AI risks include the use of AI to make existing risks worse, especially regarding discrimination. Other risks involve lulling people into a false sense of security or complacency, the use of personal data without consent, threats to democratic forms of government, bio-terror threats, and the potential for surveillance that undermines privacy. Kelly suggested that bias and fraud detection are key AI topics for financial markets.

During questioning from TAC members, Kelly addressed several issues, including data access and who within the federal government should regulate AI. On the first topic, Goldsmith Romero asked about how one ensures access to data. Kelly replied that the topic divides into two parts, the first being consumers’ right to protect their data (by extension, Kelly said this exemplifies why Congress needs to enact the data privacy legislation that President Biden called for when he issued the Administration’s EO on AI). The second part of the answer, Kelly said, is ensuring that AI startups have access to data because it is costly to build large language models and the Administration does not want a scenario where a few companies crowd out others.

With respect to who should regulate AI, a question from a TAC member pondered the prospect of a single federal AI or technology regulator. Kelly replied that that is a question for Congress and that the EO is focused on the federal government using its existing tools.

In a second presentation, Michael Wellman, the Lynn A. Conway Professor of Computer Science & Engineering at the University of Michigan, discussed why the financial sector is potentially the key to regulation of AI as well as the likely impact of the latest forms of AI.

According to Wellman, finance may hold the keys to initial attempts to regulate AI because of several characteristics of financial markets, including that finance is a crucial economic sector, financial markets can be fragile (i.e., there are interdependencies and a great dependence on information, beliefs, and expectations), AI already has “infiltrated” financial markets, and financial markets already have an existing regulatory infrastructure that may allow financial regulators to lead in AI regulation.

Wellman also noted that the latest versions of AI may allow for increased scope of action and autonomy over previous AI models. Later in his presentation, Wellman would add that the current direction of AI development may lead to a scenario in which the ownership of information becomes concentrated (Goldsmith Romero had asked about this in a slightly different way during Kelly’s presentation discussed above). Wellman noted that those who have the most information likely will have the best AI.

Wellman further discussed the role that intent plays in the regulation of financial markets, especially regarding market manipulation and spoofing, both of which have legal intent elements. He said intent is difficult to establish when humans are the accused manipulators and spoofers, but he raised the question of how intent would be established when a computer makes decisions about trading. Wellman suggested a future state in which regulators use detector technologies to monitor for and ferret out manipulation but in which the detectors are opposed by evader technologies that seek to deceive the detectors (i.e., “adversarial learning”). Wellman said this could beget an AI “arms race” in which the possibility would exist for the development of “super manipulators.”
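A toy sketch can make Wellman’s detector-versus-evader dynamic concrete. Below, a fixed logistic “detector” scores two invented order-flow features, and a hill-climbing “evader” nudges its behavior against the score gradient until it slips under the detection threshold; every feature, weight, and threshold here is an illustrative assumption, not a real surveillance model:

    # Toy detector/evader loop illustrating adversarial learning.
    import numpy as np

    # Detector: a fixed logistic model over two invented features of
    # trading behavior (say, order-cancel rate and order-book imbalance).
    weights = np.array([3.0, 2.0])
    bias = -2.5

    def detector_score(x: np.ndarray) -> float:
        """Probability the detector assigns to 'manipulative'."""
        return float(1.0 / (1.0 + np.exp(-(x @ weights + bias))))

    # Evader: starts from an easily detected pattern and perturbs its
    # features against the score gradient, staying close to the original
    # behavior (a stand-in for "the manipulation still works").
    x = np.array([0.9, 0.8])
    original = x.copy()
    step, budget = 0.05, 0.7

    for i in range(60):
        s = detector_score(x)
        grad = s * (1 - s) * weights     # gradient of the score in x
        candidate = x - step * grad      # move to lower the score
        if np.linalg.norm(candidate - original) <= budget:
            x = candidate
        if detector_score(x) < 0.5:
            print(f"round {i}: evader slips under the threshold "
                  f"(score={detector_score(x):.3f})")
            break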

Near the end of his presentation, Wellman noted that the Warner-Kennedy bill proposing to give the SEC the ability to impose treble penalties on those who would use AI to violate federal securities laws could help with the intent element. As mentioned above, the bill also would specify a presumed mental state for such securities violations and that provision, at least in the federal securities law context, could reduce the possibility that AI generated trading that is manipulative would fall into a legal loophole.

Tuesday, December 12, 2023

Commissioners disagree over emphasis in IAC’s draft recommendation on digital engagement practices

By Mark S. Nelson, J.D.

The SEC’s Investor Advisory Committee (IAC) recently met to consider a draft recommendation on digital engagement practices (DEPs). The committee recommended that the Commission seek to narrow a predictive data analytics (PDA) proposal that would address DEPs and PDAs, some of which incorporate artificial intelligence (AI), because the proposal’s key definitions may sweep too far and thus could limit investor choices if the PDA regulation were adopted. Three commissioners who attended the IAC meeting differed in public statements over whether the IAC’s draft recommendation adequately considered retail investors’ concerns.

IAC’s draft recommendation. The IAC surveyed the landscape for DEPs and PDAs over the past several years and reached some conclusions about whether the Commission should pursue the PDA proposal as drafted and whether existing regulations, such as Regulation Best Interest (BI), could help alleviate some of the problems posed by DEPs.

With respect to the Commission’s PDA proposal, the IAC noted that the terms “covered technology” and “investor interaction” sweep broadly and could include virtually all technologies and far more interactions with customers than necessary to address the specific problems. A regulation that is overly prescriptive, said the IAC draft recommendation, could diminish the products and services available to investors.

The IAC also focused on the PDA proposal’s conflicts resolution approach, which requires the elimination or neutralization of the effect of conflicts, subject to some comparatively narrow exceptions. The IAC also observed that the PDA proposal does not limit its conception of a conflict to only those that are material. The IAC instead suggested that Regulation BI’s full-and-fair-disclosure approach, coupled with that regulation’s mitigate-or-eliminate requirement, would better serve investors in the DEP/PDA context.

As a result, the IAC draft recommended that the SEC staff narrow the PDA proposal to address PDAs and AI technologies that interact directly with investors.

The second portion of the IAC’s draft recommendation dealt with how best to leverage existing SEC regulations that can apply to DEPs and PDAs. Here, the IAC suggested that the modification of some existing rules might suffice. For example, the SEC could clarify “covered technologies” in the PDA proposal and further clarify what counts as a “recommendation” under Regulation BI. However, the IAC made clear that certain behaviors of broker-dealers and investment advisers should be deemed to be recommendations, such as DEPs that encourage trading.

Commissioner statements. Only three of the SEC’s five commissioners commented publicly about the IAC’s draft views on the PDA proposal. Thus far, industry comments on the PDA proposal have largely mirrored those of the IAC. The three commissioners who spoke were divided between the industry/IAC critique and the view that newer technologies must be more closely regulated.

Commissioner Jaime Lizárraga, for example, agreed that the IAC aptly described the problems that newer technologies pose to retail investors, but he disagreed with the IAC’s focus on the firm perspective when, in his view, the IAC should have given greater weight to the views of retail investors.

“That said,” Lizarraga stated, “the Subcommittee then pivots towards an extensive focus on the perspective of firms and takes a substantial leap to assert, without much substantive basis, that the Commission’s rule is ‘far-reaching’ with ‘potential adverse impact on investors if adopted.’” Lizarraga added: “I question these assertions. To me, the Commission’s proposal addresses the challenges highlighted by the IAC head on.”

Speaking to the conflict resolution portion of the PDA proposal, SEC Chair Gary Gensler noted a key similarity between the PDA proposal and Regulation BI—that firms not place their own interests ahead of those of retail investors. “I would note that under those current rules and interpretation, brokers and advisers cannot address these conflicts of interest through disclosure alone,” added Gensler. “There still is an obligation to act in an investor’s best interest, and not to place the broker or adviser’s interests ahead of the investor’s interests, even after the conflict has been disclosed.”

Commissioner Hester Peirce, however, echoed some of the themes raised by the IAC. Specifically, Peirce suggested an approach that is more grounded in the SEC’s existing rulebook. Said Peirce: “Should we put the rule on hold until we can conduct investor outreach? Would guidance suffice to clarify the application of existing rules to certain digital engagement practices? What would an appropriately narrowed rule look like?”

Thursday, December 07, 2023

SEC’s Gensler talks about ‘AI washing,’ related financial risks

By Mark S. Nelson, J.D.

Speaking at “AI: Balancing Innovation & Regulation,” a livestreamed event hosted by The Messenger, the nascent news venture of JAF Communications, Inc., SEC Chair Gary Gensler took questions from moderator Dawn Kopecki, Deputy Business & Finance Editor at The Messenger, and then from the audience during a brief Q&A session. The discussion focused on what the SEC may be able to do to reduce the impact of bias and fraud in the marketplace as artificial intelligence (AI) tools become more available to the general public and ownership of the models driving these tools becomes more concentrated in only a few hands.

Moderator Kopecki opened by recalling a CNN headline that offered a laundry list of things that can go wrong with AI, such as AI outputs being racist, sexist, or creepy, and the potential that AI could one day become self-aware or lead to a financial crisis. Gensler replied that he sees AI risks along two planes: micro-level risks such as bias and conflicts of interest (which an SEC proposal on predictive analytics seeks to address) and macro-level risks related to the potential concentration of the ownership of AI models.

Gensler explained that with respect to macro level risks, the problem is that a natural economic progression can lead to the evolution of monocultures in which AI data sets or base models that traders and underwriters rely on are held by only a few companies (Gensler cited the FICO score as an example of a financial monoculture). The result could be that a hard-to-explain AI model could become problematic. Moreover, Gensler suggested that a herding effect potentially could lead markets over an inadvertent economic cliff. Gensler further explained that in the context of networked economies, there is a tendency for concentration to develop and that something akin to the concentration of cloud storage services in a few large companies could happen in the AI setting.

Kopecki then asked who prosecutes wrongdoing if AI goes awry. Gensler said that AI, as we know it, still has humans in the loop who set the hyperparameters (according to a Google Cloud document, “Hyperparameters contain the data that govern the training process itself” and can fine-tune a model’s predictive accuracy). Kopecki’s immediately preceding question had asked whether regulation can head off the bad effects of AI. Gensler responded that he was unsure whether we can head off those effects, but that basic regulations like the SEC’s proposed rules for predictive analytics can help. Gensler emphasized what he has said in other contexts: “fraud is fraud.” He did suggest that the SEC is especially concerned about AI’s potential regarding “faking the markets.”
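The human/machine division of labor Gensler describes is easy to see in code. A minimal sketch, assuming scikit-learn (the library and the specific values below are illustrative choices, not anything the SEC or Google referenced):

    # Hyperparameters are set by a person before training; the model's
    # coefficients (its parameters) are then learned from the data.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=200, n_features=5, random_state=0)

    # Human-chosen hyperparameters governing the training process itself.
    model = LogisticRegression(
        C=0.5,         # regularization strength
        max_iter=200,  # training budget
        solver="lbfgs",
    )

    model.fit(X, y)  # parameters are learned here, not set by a human
    print("learned coefficients:", model.coef_)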

During the brief Q&A, Gensler was asked a related question about whether small companies that hype their AI products and services could be teetering on the verge of making misrepresentations. Gensler emphatically said, “don’t do it.” He added that companies should not “greenwash,” but they also should not “AI wash.” Gensler then repeated a familiar refrain that if someone offers and sells securities, then they are subject to the federal securities laws.

Kopecki had ended the moderated portion of the discussion by asking Gensler if a working group was in the offing. Gensler said the Financial Stability Board (FSB) had AI on its 2024 agenda and that the SEC was looking to collaborate with other U.S. and global regulators on AI, but that he did not want to front run anything that U.S. Treasury Secretary Janet Yellen, as the head of the Financial Stability Oversight Council (FSOC), may be working on in the AI space.

Gensler earlier had indicated that the SEC’s next steps on AI would be to sort through the many public comments on its predictive analytics proposal.

Thursday, November 16, 2023

Article explores Executive Order and other legislative options for regulating artificial intelligence in the U.S.

By Mark S. Nelson, J.D.

The release of OpenAI’s ChatGPT nearly one year ago not only captivated the public imagination but, by summer 2023, had also captured the attention of lawmakers who fear a regulatory repeat of perceived errors made when Congress took a more hands-off approach to the Internet and social media platforms. Much of the legislative effort on artificial intelligence (AI) thus far has centered on national security, the military, and elections. The Biden Administration’s executive order on AI, by contrast, includes eight substantive sections addressing a range of issues, including national security, but also AI safety, innovation and competition, job displacement, equity and civil rights, consumer protection, privacy, and the government’s own use of AI to deliver services to the public.

A new Vital Briefings article reviews key components of the AI executive order and outlines the legislative streams taking shape in Congress to bring government oversight to AI. The article notes, however, that existing guidance and frameworks should not be overlooked, nor should practitioners ignore developments in the European Union and in individual U.S. states, both of which could implement AI rules of the road that may apply to a significant number of U.S. entities.

The Vital Briefings article, “AI regulation in the U.S.: what it means for corporate and financial services practitioners,” is available here.

Thursday, September 14, 2023

Senate subcommittee takes yet another look at AI regulation

By Mark S. Nelson, J.D.

The Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law held its third public hearing on the potentially far-reaching implications of artificial intelligence (AI) for American society and the world, especially from large language models (LLMs) such as OpenAI’s ChatGPT, a form of generative AI. European regulators have sprinted ahead of the U.S. in proposing rules for AI, but U.S. lawmakers may be seeking a somewhat different path to AI regulation than their global counterparts.

At present, Congress has proposed two frameworks, one comprehensive regulatory bill, and a smattering of more focused bills addressing a range of topics from elections to watermarks to national security. In addition to the NIST AI Risk Management Framework, the White House has published a Blueprint for an AI Bill of Rights and has issued voluntary AI guidelines, to which an additional eight AI firms expressed their commitment shortly before the latest Senate subcommittee hearing on AI. The Senate subcommittee also is expected to hold private sessions with AI firms this week.

Legislative options to date. To put the Senate Judiciary Committee’s subcommittee’s work in context, it is helpful to review what U.S. lawmakers have proposed thus far regarding regulation of AI. According to the Subcommittee on Privacy, Technology, and the Law’s Chair, Sen. Richard Blumenthal (D-Conn), the subcommittee’s work is “complementary” to other work streams currently in progress, including that of a bipartisan working group led by Senate Majority Leader Chuck Schumer (D-NY).

Given that the Schumer group and the Subcommittee on Privacy, Technology, and the Law have both offered bipartisan AI regulatory frameworks, it makes sense to begin with a comparison of those frameworks. The SAFE Innovation Framework, published by the Schumer group, seeks to ensure America’s national security, promote responsible development of AI systems, and preserve democracy in the face of the potential use of AI to manipulate democratic processes. The Schumer framework is short on details and does not appear on its surface to contemplate a singular AI regulatory agency.

By contrast, the Bipartisan Framework for U.S. AI Act, published by Sen. Blumenthal and the Subcommittee on Privacy, Technology, and the Law’s Ranking Member Josh Hawley (R-Mo), does contemplate an independent oversight body that would administer a registration and licensing regime for the most powerful AI products. The Blumenthal-Hawley framework would provide for the oversight body to bring enforcement actions for violations of the law and would allow for private lawsuits. The Blumenthal-Hawley framework also would address national security, international competition, transparency, and the protection of children and consumers.

Moreover, the Blumenthal-Hawley framework would clarify that Section 230 of the Communications Decency Act, the statute that immunizes Internet and social media platforms from most lawsuits over third-party posts, would not apply in the context of AI. That question has grown in importance since the Supreme Court essentially dodged it by remanding a recent case to the lower courts while expressing doubts about whether the plaintiffs could leverage Section 230 regarding video posts they said promoted terrorism, resulting in an attack that killed their loved ones (see also the companion case). During oral argument in that case, Justice Gorsuch questioned whether Section 230 would apply to AI: “I mean, artificial intelligence generates poetry, it generates polemics today. That -- that would be content that goes beyond picking, choosing, analyzing, or digesting content. And that is not protected,” said Justice Gorsuch (see oral argument transcript at p. 49). A separate development, in which the Fifth Circuit held that individuals and two states had standing to sue the federal government over its attempts to police social media, could cast a pall over some aspects of AI regulation if that decision were upheld by the Supreme Court, assuming the government decides to appeal. Senator Hawley’s separate bill, the No Section 230 Immunity for AI Act (S. 1993), would deny Section 230 immunity for interactive computer services’ use of generative AI.

Other legislative options exist, including the reintroduced Digital Platform Commission Act of 2023 (S. 1671), sponsored by Sens. Michael Bennet (D-Colo) and Peter Welch (D-Vt), which would establish a commission to broadly regulate digital platforms and AI. Short of creating a new federal agency, several bills would target specific AI problems. The Protect Elections from Deceptive AI Act (S. 2770), sponsored by Sen. Amy Klobuchar (D-Minn) and co-sponsored by Sens. Hawley, Chris Coons (D-Del), and Susan Collins (R-Maine), would bar the use of materially deceptive AI in federal elections and would provide a mode for such content to be taken down and for affected candidates to seek damages in federal courts.

During the Q&A with the subcommittee’s witnesses, Sen. John Kennedy (R-La) suggested yet another plausible response by lawmakers. According to Senator Kennedy, most senators think AI can make lives better if it does not make them worse first. He predicted that Congress was more likely to take baby steps toward regulation of AI rather than make a grand legislative bargain.

Act with “dispatch”—setting the stage for debate. The opening statements from lawmakers and the three witnesses, William Dally (Chief Scientist and Senior Vice President of Research, NVIDIA Corporation), Brad Smith (Vice Chair and President, Microsoft Corporation), and Woodrow Hartzog (Professor of Law at Boston University School of Law and a Fellow at the Cordell Institute for Policy in Medicine & Law at Washington University), suggested where each stands on AI.

Senator Blumenthal noted the need to balance encouragement of new technologies with safeguards around trust and confidence in order to address the technology industry’s “deep appetite” for guardrails and its desire to use AI. Senator Blumenthal also suggested the pace at which he believes Congress must act to address the risks of AI by stating lawmakers must act with “dispatch” that is “more than just deliberate speed.” Senator Blumenthal also struck a theme common among lawmakers from both major parties: “…If we let this horse get out of the barn, it will be even more difficult to contain than social media.”

Ranking member Hawley said AI is both “exhilarating” and “horrifying” and suggested that lawmakers not make the same regulatory mistakes that were made during the early years of the Internet and social media.

Dally’s company makes a widely used, massively parallel graphics processing unit, or GPU (as compared to the serial central processing unit, or CPU), that was initially used in video gaming but has come to have other uses, ranging from virtual currency mining rigs just a few years ago (when miners snapped up gaming GPUs) to AI applications requiring ever-increasing amounts of processing power. Dally testified that while AI has escaped its figurative bottle, he is confident that humans will always determine how much discretion to grant to AI applications. Dally also said that “frontier models,” those next-generation models that are much bigger than anything available today and that may enable artificial general intelligence rivaling or exceeding human capabilities, remain “science fiction.” Dally also said that no company or nation state currently has the ability to impose a “chokepoint” on AI development. According to Dally, AI regulation must be the product of a multilateral and multi-stakeholder approach.

Smith, whose company is a major investor in OpenAI, the creator of ChatGPT, characterized the Blumenthal-Hawley framework in both his prepared remarks and in his oral testimony as “a strong and positive step towards effectively regulating AI.” For Smith, effective AI regulation would have multiple components: (1) making safety and security a priority; (2) creating an agency with oversight responsibilities to administer a licensing regime for high-risk use cases; (3) implementing a set of controls to ensure safety, such as those announced by the White House, which a number of AI firms have voluntarily committed to follow; (4) prioritizing national security; and (5) taking steps to address how AI affects children and other legal rights. Smith also suggested that the federal government follow the lead of California and several other states that seek to study how AI can make government more efficient and more responsive to citizens (e.g., this month California Governor Gavin Newsom signed an AI executive order calling on state agencies to assess the state’s use of AI in government).

By way of further background, federal lawmakers have mulled legislation similar to the California executive order. For example, the AI Leadership To Enable Accountable Deployment (AI LEAD) Act (S. 2293), sponsored by Sen. Gary Peters (D-Mich), would require the Director of the Office of Management and Budget to establish the Chief Artificial Intelligence Officers Council for purposes of coordinating federal agencies’ best practices for the use of AI.

As the one academic on the panel, Hartzog sought to debunk what he viewed as myths about AI regulation. Central to his thesis is the notion that regulatory half measures may be necessary but are not sufficient to adequately control the deployment of AI. Hartzog was especially critical of regulators’ reliance on transparency (insufficient accountability), bias mitigation (“doomed to failure” because fair is not the same as safe and the powerful will still be able to dominate and discriminate), and the adoption of ethical principles (ultimately, there is no incentive for industry to leave money on the table for the good of society). For Hartzog, it will be critical for lawmakers to incorporate the design aspects of AI into any laws and regulations and for them to not be beguiled by the concept that the rise of AI is inevitable when tools can be adopted to ensure human control of AI.

Deep fakes. The bulk of the questioning from senators emphasized how AI might be used in election ads and disinformation campaigns. Questions from Sen. Klobuchar emphasized the need for watermarks or other indicia that something was generated by AI, especially in the context of elections. Smith replied to this line of questioning by stating that indicators of provenance and watermarks could label legitimate content as a means of addressing the problem of deep fakes. According to panelist Hartzog, bright-line rules are needed because procedural rules give only the veneer of protection; Hartzog would add abusive practices to the list of unfair or deceptive practices that should be the subject of AI regulation. Dally clarified that regulators would need indicators of both provenance and watermarks to prevent deep fakes because provenance is the flipside of watermarks (i.e., one identifies the source, the other identifies something as having been created by AI).
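To illustrate the distinction Dally draws, here is a minimal sketch of a content tag carrying both a provenance field (who produced the content) and a watermark-style flag (whether it was AI-generated). The HMAC scheme, key, and field names are illustrative assumptions; real systems rely on standardized signed manifests rather than a shared secret:

    # A toy content tag: "source" is provenance, "ai_generated" is the
    # watermark-style flag; the signature binds both to the content.
    import hashlib
    import hmac
    import json

    SECRET = b"publisher-signing-key"  # hypothetical key, illustration only

    def tag_content(text: str, source: str, ai_generated: bool) -> dict:
        """Attach provenance and an AI-generation flag to content."""
        record = {"source": source, "ai_generated": ai_generated}
        payload = (text + json.dumps(record, sort_keys=True)).encode()
        record["sig"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        return record

    def verify(text: str, record: dict) -> bool:
        """Check that the tag matches the content and was not altered."""
        claimed = {k: record[k] for k in ("source", "ai_generated")}
        payload = (text + json.dumps(claimed, sort_keys=True)).encode()
        expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, record["sig"])

    tag = tag_content("Example ad copy.", source="Campaign HQ", ai_generated=True)
    print(verify("Example ad copy.", tag))   # True: tag matches content
    print(verify("Tampered ad copy.", tag))  # False: content was altered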

Senator Klobuchar has introduced the Require the Exposure of AI–Led Political Advertisements (REAL Political Advertisements) Act (S. 1516), which would require that political ads state in a clear and conspicuous manner whether the ads include any images or video footage generated by AI. The House version of the bill (H.R. 3044) is sponsored by Rep. Yvette Clarke (D-NY).

In related questioning, Sen. Mazie Hirono (D-Hawaii) asked what could be done about political influence campaigns and other efforts to spread misinformation and disinformation via AI. She gave the example of alleged foreign influence campaigns in the aftermath of the Lahaina wildfire disaster on the Hawaiian island of Maui. According to the senator, online information falsely told residents not to sign up for FEMA assistance.

Smith agreed with the senator that half measures in this sphere are not enough. According to Smith, AI should be used to detect such disinformation campaigns and Americans must stand up as a country to set red lines regardless of how much else we disagree about.

In still more questions about the right to know if something was created by AI, Sen. Kennedy tried to break this question into two: one about the AI origins of something, and another question about the source. With respect to knowing something was produced by AI, Dally agreed that information should be disclosed. Smith answered in a somewhat more nuanced manner by distinguishing between a first draft of a document (no disclosure) and the final output with the human author’s own finishing touches (disclosure). Hartzog’s answer was even more qualified; he suggested that if there is a vulnerability to AI, then disclosure should be required.

On the question of whether the source of an AI-generated text should be disclosed, there was a seeming consensus among the three panelists that there may be good reasons to protect some anonymous speech. Dally said this was a harder question. Smith said he believed that generally one should say if AI created something and who owns it, but he also asked rhetorically whether disclosure would have been appropriate for something like the Federalist Papers, the essays authored by James Madison, John Jay, and Alexander Hamilton under the pen name Publius that sought to convince skeptics to support the 1787 Constitution under which Americans still live. Hartzog suggested that there may be times to protect anonymous speech and other times when such speech might not be protected.

Questions from Sen. Marsha Blackburn (R-Tenn) asked about Chinese influence and the potential for adapting the Society for Worldwide Interbank Financial Telecommunication (SWIFT) network, which provides secure financial messaging for banks and other financial institutions, in the AI context. Smith replied that the U.S. can use export controls to ensure that products and services are not used by foreign governments. Smith also suggested in both his oral testimony and in his prepared remarks, where he offers greater detail, that AI regulations could include the AI equivalent of “know-your-customer” regulations familiar to banks along with some AI-specific rules akin to a “know-your-cloud” requirement. That would require the developers of the most powerful AI systems to know who operates the cloud computing platform on which they deploy their AI systems and to only use licensed cloud infrastructure.

Both before and after Smith’s exchange with Sen. Blackburn, Senator Hawley pressed Smith and Microsoft on issues related to children and China. First, Sen. Hawley observed that the minimum age to use Microsoft’s Bing Chat was only 13. The senator asked Smith to commit to raising the age and adopting stronger verification procedures. Smith answered that the first rule was not to make any news without first talking to stakeholders at Microsoft. Smith agreed that children should use AI in a safe way and that other countries, such as South Korea, sought to create AI apps to teach math, coding, and English. Although he never gave any specific commitments to Sen. Hawley, Smith agreed to raise the issue of minimum age requirements at Microsoft.

Senator Hawley also raised questions about Microsoft’s business ties to China, including Microsoft serving as an “alma mater” for Chinese AI scientists. The senator suggested that Microsoft could decouple from China in the interests of American national security. Smith noted that Microsoft is a large company with many years in business and that it had trained persons who now work around the world. Smith also said he would prefer that an American company doing business in China would be able to use Microsoft tools rather than tools developed in a foreign country.

What to regulate? A common question at the first AI hearing held by the Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law was what regulatory threshold to employ in determining whether an AI firm, product, or service should be regulated. OpenAI’s Sam Altman suggested the possibility of using computing power as a threshold. At the third, and most recent, Senate AI hearing, a similar theory emerged based on whether the AI system is highly capable or less capable.

Senator Jon Ossoff (D-Ga) said lawmakers must define in legislative text what would trigger regulation. In the AI context, the senator said, that trigger could be the scope of the technology, a product, or a service.

For Smith, regulations would need to address three layers: (1) the model layer (e.g., frontier or foundational models); (2) the application layer where people directly interact with AI (enforce existing laws equally against those who deploy AI); and (3) the cloud layer, which is more powerful than the ubiquitous data center (licensing would be key).

Senator Ossoff also asked whether there is a level of power at which one should draw the line for deployment. Dally suggested balancing risks against innovation and then regulating things that pose high risk if the AI system were to go awry.

All three panelists answered Sen. Ossoff’s question about how important international law will be to regulating AI. According to Hartzog, the U.S. could create an AI regulation that is compatible with similar E.U. regulations. Smith said the U.S. will need international law but cautioned that such solutions were likely to come from like-minded governments rather than global regulations. Dally observed that AI models are portable on large USB devices, thus suggesting that AI models cannot be contained within any one country.

That last observation about the portability of AI models was immediately preceded by a colloquy between Sen. Ossoff and Smith about the global proliferation of AI. The senator had asked what should be done about a less powerful AI model that nevertheless is capable of having a big impact. Smith acknowledged that this is a critical question and likened the problem to an AI system that can do only a few things well versus ChatGPT, which can do many things well. Smith said licensing of deployment would be needed, something he analogized to aircraft: the building phase is not as regulated as the phase at which the aircraft is going to be flown. Smith said licensing depends on a combination of industry standards, national regulations, and international coordination.

Senator Blumenthal picked up on the proliferation issue in the closing moments of the hearing. The senator suggested the atomic energy and civil aviation models as worthy of consideration in the AI context. Both models depend on governments to cooperate, in the one case, by limiting access to nuclear materials and in the other to have common protocols for routing aircraft across national borders.

Smith noted in this context that the U.S. leads in GPUs, the cloud, and foundation models and that export controls may be needed. Senator Blumenthal noted that the nuclear proliferation model uses a combination of export controls, sanctions, and safeguards to achieve safety and that the Biden Administration has used existing laws to ban the sale of some high-performance chips to China. Dally then suggested that because there is not a real chokepoint for AI, companies around the world can get AI chips (i.e., if they cannot get chips in the U.S., they will get them elsewhere). Dally also observed that software may be more important than chips.

Thursday, July 27, 2023

SEC proposes rules on predictive analytics, internet advisers

By Lene Powell, J.D.

The SEC issued two rulemaking proposals at an open meeting on Wednesday, July 26. One proposal would impose new requirements on the use of predictive analytics and similar technologies by broker-dealers and investment advisers. The other would impose new conditions on the “internet adviser” exemption from the prohibition on SEC registration for smaller investment advisers (Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker Dealers and Investment Advisers, Release No. 34-97990; Exemption for Certain Investment Advisers Operating Through the Internet, Release No. IA-6354, July 26, 2023).

The Commission voted 3-2 to issue the predictive analytics proposal and 5-0 to issue the internet advisers proposal.

Both proposals will be open for comment for 60 days following publication in the Federal Register.

Predictive analytics. According to SEC Chair Gary Gensler, modern predictive data analytics models give firms an increasing ability to make predictions about each of us as individuals, making differential communications to individuals possible at scale.

While such “narrowcasting” can enhance user experience and promote financial inclusion, it is also potentially a problem. Advisers or brokers may use these technologies to optimize in a manner that places their interests ahead of investor interests, creating conflicts of interest and harming investors. And, such conflicts could manifest efficiently at scale, said Gensler.

“How might we respond to individualized communications or nudges?” asked Gensler. “How might we respond to individualized product offerings? How might we respond to individualized pricing? This includes means to optimize for, predict, guide, forecast, or direct investors’ investment decisions.”

Gensler cited flashing buttons and push notifications as examples of predictive data analytics: “the colors, the sounds, the well-engineered subtleties of modern digital life.”

As a fact sheet explains, the proposal would:
  • Require a firm to analyze and identify, then eliminate or neutralize the effect of conflicts of interest associated with the firm’s use of covered technologies in investor interactions that place the firm’s or its associated person’s interest ahead of investors’ interests;
  • Require a firm that has any investor interaction using covered technology to have written policies and procedures reasonably designed to prevent violations of (in the case of investment advisers) or achieve compliance with (in the case of broker-dealers) the proposed rules; and
  • Impose recordkeeping requirements related to the proposed conflicts rules.

Commissioner reactions. Gensler and Commissioners Caroline Crenshaw and Jaime Lizárraga voted to issue the proposal, while Commissioners Hester Peirce and Mark Uyeda voted against it.

Based on the Commission’s analysis and extensive public input, the current framework on conflicts of interest needs to be modernized in light of new technology, said Lizárraga.

Crenshaw pointed to recent developments in national policy on artificial intelligence.

“Acknowledging the considerable promise and great risk offered by artificial intelligence in particular, on Friday the White House announced that seven leading artificial intelligence companies in the United States have agreed to voluntary safeguards on the technology’s development, pledging to manage the risks posed by these new tools,” said Crenshaw. “It is similarly important for the SEC to build in safeguards to protect investors from the potential for harm posed by firms using new technologies that are optimized in a way that could place their interests ahead of investor interests.”

Peirce said the proposal amounted to “banning technologies we don’t like,” unduly rejected disclosure as an option, and inexplicably disregarded operational feasibility. Peirce asked 10 questions about costs, the compliance period, and other areas.

“The rule appears to assume that AI is so complex it needs special rules. Aren’t humans even more complex?” asked Peirce.

Internet advisers. As Gensler explained, in 2002 the SEC granted a narrow exception allowing internet-based advisers to register with the Commission instead of with the states. But a lot has changed in 21 years, and the 2002 exemption creates gaps in 2023, he said.

According to a fact sheet, the proposal would:
  • Require an investment adviser relying on the exemption in 203A-2(e) under the Investment Advisers Act of 1940 to at all times have an operational interactive website through which the adviser provides investment advisory services on an ongoing basis to more than one client;
  • Eliminate the current rule’s de minimis exception for non-internet clients. This would have the effect of requiring an internet investment adviser to provide advice to all of its clients exclusively through an operational interactive website.

Commissioner reactions. The Commission voted unanimously to issue the proposal.

Lizárraga said SEC staff has observed numerous instances in which advisers did not operate nationally and failed to qualify for SEC registration. In 2021, Examinations staff found that nearly half of firms relying on the exemption were ineligible to do so. He noted that the SEC has taken enforcement actions against several firms for these violations.

This is Release No. 34-97990 and Release No. IA-6354.

Tuesday, July 18, 2023

Gensler says AI has the potential to open pathway to greater financial inclusion

By Suzanne Cosgrove

SEC Chairman Gary Gensler was enthusiastic about the promise of AI in remarks Monday at the National Press Club luncheon, stating that AI could open up opportunities from healthcare to science to finance. As machines take on pattern recognition, AI can create greater efficiencies across the economy, he said.

“Text prediction in our mobile devices and emails has been commonplace for years,” he noted. “It’s being used for natural language processing, translation software, recommender systems, radiology, robotics, and your virtual assistant. … In finance, AI already is being used for call centers, account openings, compliance programs, trading algorithms, sentiment analysis, and more. It’s also fueled a rapid change in the field of robo-advisers and brokerage apps.”

Focused on outcomes, not tech. However, the SEC itself is technology neutral, Gensler said. The regulator is focused on outcomes rather than on the tool itself. Within its current authorities, the Commission is focused on protecting against certain of AI’s challenges, he said.

Those concerns are twofold, with the first related to “narrowcasting,” or the micro-level ability of AI-based models to make predictions about individuals. That ability raises issues, such as bias, that are not necessarily new to AI but are magnified by it, Gensler said.

“The challenges of explainability may mask underlying systemic racism and bias in AI predictive models,” he cautioned. “The ability of these predictive models to predict doesn’t mean they are always accurate or robust.”

On guard against fraud. “As advisers and brokers incorporate these technologies in their services, the advice and recommendations they offer—whether or not based on AI—must be in the best interests of the clients and retail customers and not place their interests ahead of investors’ interests,” Gensler added.

“Since antiquity, bad actors have found new ways to deceive the public. With AI, fraudsters have a new tool to exploit,” he said. “They may try to do it in a narrowcasting way, zeroing in on our personal vulnerabilities.”

Social impact looms large. AI challenges also go beyond the scope of narrowcasting to take on a broader social significance, Gensler said. “Just as with historic transformative times of moving to more automation of the farm, factory, and services, there will be macro challenges for society in general,” he said.

“Given that today’s AI relies on an insatiable demand for data and computational power, there can be economies of scale and data network effects at play,” Gensler said. “We’ve already seen companies, both incumbents and startups, relying on base or foundation AI models and building applications on top of them. … Once again, this raises a host of issues that are not new to AI but may be accentuated by it,” he said, in this case involving privacy and intellectual property concerns.

The SEC’s role. For the SEC, the challenge here is to promote competitive, efficient markets in the face of what could be dominant base layers at the center of the capital markets, Gensler said. “I believe we closely have to assess this so that we can continue to promote competition, transparency, and fair access to markets.”

Thursday, March 30, 2023

CFTC’s Mersinger says blockchain too often conflated with trading of cryptoassets

By Suzanne Cosgrove

CFTC Commissioner Summer K. Mersinger last week addressed the absence of women in the high-tech arena and laid out what she sees as the CFTC’s role in the development and regulation of blockchain and the derivatives markets that the agency regulates.

In her speech, delivered to the International Women of Blockchain 2023 Web3 and Metaverse Conference, Mersinger said the CFTC is a “technology neutral regulator.” And in terms of the still-evident gender gap in male-dominated high-tech fields, including blockchain, “Clearly, there is an opportunity to do better,” she told the group.

Mersinger said the CFTC’s job is to ensure existing and emerging technologies can compete on a level playing field. “Our governing statute, the Commodity Exchange Act, specifically identifies one of its purposes as being to promote responsible innovation and fair competition.”

Innovation a focus. “In light of the opportunities that an innovative and groundbreaking technology like blockchain presents for the derivatives markets we regulate, my focus is on assuring that we at the CFTC take that mission seriously,” Mersinger said.

“Whether you embrace or dismiss the utility of digital assets, it is hard to argue against the benefits of blockchain technology,” she said. “These benefits go far beyond cryptoassets, and regardless of whether or not you become a crypto adopter, I believe that the underlying technology could have a positive impact on society.”

What’s next. “We need to understand and appreciate the difference between the technology and the assets,” Mersinger said. “Blockchain can be independent of cryptoassets.” In addition, “we need to work together and create a framework for how we will consider digital assets and the underlying blockchain technology under our current federal financial regulatory regime.

“Without some sort of legal framework and clarity, I fear we will be left with regulation through enforcement—which is what we are seeing at this time,” the commissioner said.

Global significance. “We cannot forget that blockchain technology has the potential for global benefits,” she added. “The derivatives markets that we regulate at the CFTC are similarly global markets, and the CFTC historically has been a leader in helping these markets thrive through the development of regulations that are harmonized between countries.”

Mersinger delivered her remarks to the blockchain group last week, before Monday’s CFTC action charging Changpeng Zhao, the co-founder and CEO of Binance, and three entities that operate the Binance platform, with violations of the Commodity Exchange Act and CFTC regulations.

Thursday, May 19, 2022

Dangling digital gummy bears: Gensler considers behavior nudges and investor temptations

By John M. Jascob, J.D., LL.M.

The use of predictive data analytics and other digital engagement practices is rapidly transforming the way brokers and advisers engage with investors, but these electronic “behavioral nudges” raise important issues about the nature of investment advice and fairness in the financial markets, according to SEC Chairman Gary Gensler. While the industry’s use of artificial intelligence and machine learning to create individually tailored behavioral prompts can be powerful and profitable, Gensler questioned whether these tools are optimizing the digital experience for the benefit of investors or simply prioritizing platforms’ revenue and performance. Gensler delivered his remarks virtually to an audience of state securities regulators and industry professionals at the 2022 NASAA Public Policy Symposium in Washington, D.C.

Gensler drew a comparison with the way grocery stores tap into behavioral psychology to activate shoppers’ impulses to purchase items they may not need by locating fruits and vegetables in the outer aisles but placing gummy bears and other candy and snack foods near the cash register. Gensler cited research showing that impulsive purchases account for 62 percent of supermarket sales. Although grocery stores serve the public, many consumers recognize that stores also have a profit incentive and thus may tempt shoppers with products that serve the stores’ interests rather than the shoppers’, employing the latest technology and research to do so.

The expectation changes, however, when consumers seek advice from their investment professionals. The world of finance is different, Gensler observed, because investment professionals are dealing with other people’s money. Accordingly, brokers and advisers have legal duties to comply with standards on care, loyalty, best interest, and best execution. “You can’t dangle gummy bears over an investor’s shopping cart, so to speak—even if the latest technologies might make it all the more easy, subtle, and profitable to do so,” Gensler said.

Gensler noted, however, that robo-advisers, brokerage apps, and wealth management apps are increasingly using digital engagement practices to narrowly target each investor with specific marketing, pricing, and nudges. Returning to his grocery store analogy, Gensler compared this to a store being able to rearrange its inventory, shelving, and pricing for each shopper every time they visited that store, down to the placement of impulse items by the register. Although this scenario might not be fully realized yet, it raises questions about what constitutes investment advice or recommendations in the new digital world of finance. For example, does a behavioral nudge like a flashing “options trading” button when opening a brokerage account constitute a recommendation? Moreover, Gensler asked, when do behavioral nudges take on attributes similar enough to advice or recommendations such that related investor protections are needed?

The securities industry’s use of digital engagement practices also presents related issues of bias and fair access and pricing in the financial markets, Gensler said. The SEC chairman suggested that the underlying data used in the analytic models could reflect historical biases, along with underlying features that may be proxies for characteristics like race and gender. As investment platforms rely on increasingly sophisticated data analytics, Gensler said that it will be appropriate to safeguard against fortifying such biases algorithmically. He added that the new forms of predictive data analytics raise issues for financial stability through herding, interconnectedness, and possible greater concentration in the capital markets. Accordingly, Gensler has asked the SEC staff to examine how to improve efficiency and competition throughout the financial markets.

Following the SEC chairman’s remarks, NASAA President and Maryland Securities Director Melanie Senter Lubin asked Gensler what can be done to prevent firms’ use of digital engagement practices from leading to overly aggressive and risky trading. Gensler responded that the answer truly lies in firms placing the interests of investors first. Gensler also said that regulators should be technologically neutral but not technologically naive. Noting the dramatic growth in robo-advisers, Gensler said that machine learning and artificial intelligence are rapidly changing the landscape. Accordingly, regulators need to be asking what is behind the algorithms. Does optimization steer outcomes for the benefit of investors or merely toward increases in firm revenue?

Monday, February 14, 2022

FINRA annual examinations report discusses Reg. BI, consolidated audit trail, and SPACs

By John Filar Atwood

FINRA published the 2022 report on its examinations and risk monitoring program in which it discusses its efforts on newer SEC rules such as Regulation Best Interest and Form CRS, as well as cybersecurity threats and the proliferation of securities trading through mobile apps. The report, which is intended to serve as a resource for firms on emerging and ongoing issues, also covers five areas not included in previous reports: firm short positions and fails-to-receive in municipal securities, trusted contact persons, funding portals and crowdfunding offerings, disclosure of routing information, and portfolio margin and intraday trading.

The report covers 21 topics in total and identifies the applicable rules for member firm compliance programs, summarizes noteworthy findings from recent examinations, and outlines effective practices that FINRA observed. This year’s report highlights new material in sections that have appeared in previous versions of the report in addition to findings that are particularly relevant for firms in their first year of operation.

Reg. BI and Form CRS. The report states that during Reg. BI’s and Form CRS’ first full calendar year of implementation in 2021, FINRA conducted a more comprehensive review of firms’ processes, practices, and conduct in areas such as establishing and enforcing adequate written supervisory procedures (WSPs), and filing, delivering and tracking accurate Forms CRS.

In its examinations, FINRA found that some firms’ WSPs were not reasonably designed to achieve compliance with Reg. BI and Form CRS. Specifically, they did not identify the individuals responsible for supervising compliance with Reg. BI, and failed to detail how the firm planned to comply with the new rule requirements.

FINRA also identified instances where the firm failed to modify existing procedures to reflect Reg. BI’s requirements by, among other things, not addressing how costs and reasonably available alternatives should be considered when making recommendations and not addressing conflicts that create an incentive for associated persons to place their interest ahead of those of their customers. Other deficiencies mentioned in the report include inadequate staff training, failure to comply with Reg. BI’s care obligation and its conflict of interest obligation, and insufficient disclosures under Reg. BI.

With respect to Form CRS, FINRA found deficient filings that omitted material facts such as the limitations of a firm’s investment services, or inaccurately represented the disciplinary histories of the firm’s personnel. Some firms failed to describe types of compensation and compensation-related conflicts, or incorrectly stated that the firm does not provide recommendations, according to FINRA.

Consolidated audit trail. In its examinations of consolidated audit trail (CAT) reporting, FINRA found that some firms submitted to the Central Repository information that was inaccurate or incomplete. Some firms failed to resolve or repair CAT errors in a timely manner, FINRA said, while others did not establish and maintain WSPs or supervisory controls regarding both CAT reporting and clock synchronization that are performed by third-party vendors.

Mobile apps. FINRA acknowledged that mobile apps can serve to increase market participation, expand the types of products available to investors, and educate them on financial concepts. However, FINRA noted that the apps raise novel questions and potential concerns, such as whether they encourage retail investors to engage in trading activities and strategies that may not be consistent with their investment goals or risk tolerance.

FINRA identified significant problems with some mobile apps’ communications with customers and firms’ supervision of activity on those apps, particularly controls around account openings. It also observed mobile apps making use of social media to acquire customers, so it recently initiated a targeted exam to assess firms’ practices in this area. The exam is considering, among other things, how firms manage their obligations related to information collected from customers acquired through social media, as well as from other individuals who may provide data to firms. FINRA intends to make the findings of the review public after its completion.

SPACs. On the topic of special purpose acquisition companies (SPACs), FINRA said that it recognizes how SPACs can provide companies with access to diverse funding mechanisms and allow investors to access new investment opportunities. As SPAC activity increased, so too did FINRA’s focus on broker-dealers’ compliance with their regulatory obligations in executing SPAC transactions. In October 2021, FINRA launched a targeted exam to explore a range of issues, including how firms manage potential conflicts of interest in SPACs, whether firms are performing adequate due diligence on merger targets, and if firms are providing adequate disclosures to customers. FINRA said that it plans to share its findings from this review at a later date.

For comprehensive coverage of today’s news, go to the Securities Legal Research & News section of VitalLaw.

Thursday, January 20, 2022

IOSCO seeks feedback on digitalization risks, publishes best practices for global cooperation

By Anne Sherry, J.D.

IOSCO is requesting feedback on a consultation report that addresses growth in digitalization and the use of social media to market and distribute financial services and products. The report analyzes these developments and proposes tools relating to firm-level rules, responsibilities, surveillance and supervision, staff qualification, compliance, and clarity around internet domains. Comments are due March 17. Additionally, IOSCO published a set of good practices related to the use of global supervisory colleges in securities markets.

Consultation report. According to the consultation report, digitalization and cross-border offerings, while providing new opportunities for both firms and investors, also carry risks for investors and challenges for regulators. In retail over-the-counter leveraged products, firms have used social media, online marketing, and internet-based trading platforms to reach customers. But while improving investors’ access, digitalization can enable bad actors to hide their identities, target potential victims, and exploit jurisdictional differences. IOSCO is also concerned about the use of gamification to influence investors’ trading behavior.

To help member regulators address these concerns, the report includes two “toolkits.” The proposed policy toolkit encourages members to institute firm-level rules for online marketing and distribution, along with rules for online onboarding. Members should require, subject to applicable laws and regulations, that management assume responsibility for the accuracy of information provided to potential investors on behalf of the firm. IOSCO members themselves should consider whether they have the capacity to surveil and supervise online activities, including on social media. Finally, members should consider requiring firms to assess qualifications for digital marketing staff; conduct due diligence on third-country regulations in the case of cross-border activity; and adopt policies and procedures for disclosure of the underlying entity offering the product.

The proposed enforcement toolkit includes recommendations on proactive, technology-based detection and investigatory techniques for illegal digital conduct. IOSCO members could consider seeking additional powers to curb online misconduct and increasing cooperation with international counterparts and with criminal authorities. Finally, the toolkit suggests initiatives to foster collaboration with the electronic intermediaries themselves, as well as to address supervisory and regulatory arbitrage.

Good practices. IOSCO also published a set of good practices generated from its report Lessons Learned from the Use of Global Supervisory Colleges. The good practices encourage the use of supervisory colleges to share information and solutions during a crisis. They cover matters such as general purpose, membership, governance, multilateral confidentiality arrangements, and the cross-border operations of supervisory colleges.

The report also calls for the use of “core-extended” structures that would allow all relevant authorities, including in emerging jurisdictions, to exchange information about a supervised entity. It highlights market sectors where the use of supervisory colleges could be expanded, taking into account interconnectedness across jurisdictions and emerging areas where supervisory knowledge is not yet fully developed. IOSCO members have suggested using supervisory colleges for market intermediaries, financial benchmarks administrators, crypto-asset platforms, and asset management.

Friday, October 22, 2021

Public policy at forefront as Gensler considers stablecoins, gamification

By Anne Sherry, J.D.

SEC Chair Gary Gensler discussed the challenges the SEC faces in adapting its tripartite mission to emerging technologies such as stablecoins, DeFi, and direct trading platforms. Speaking to Georgetown Law Professor Chris Brummer at DC Fintech Week, Gensler stressed the need for the SEC to protect investors and promote fairness as the financial markets evolve. Gamified investing apps, predictive data analytics, and the concentration of assets into a handful of stablecoins all present issues for regulators.

In some respects, Gensler’s comments suggested that the task of keeping up with new technology is not a new challenge for the SEC. Gensler cited the SEC’s creation of a regulatory framework for alternative trading systems, under the chairmanship of Arthur Levitt, in response to the proliferation of trading on online bulletin boards as the Internet became more widely adopted. In Gensler’s view, predictive data analytics could transform finance as significantly as the Internet did in the 1990s. The central question, Gensler said, is how the SEC can continue to achieve its core public policy goals when new technologies come along and change the face of finance.

Specifically in the areas of artificial intelligence and predictive data analytics, Gensler expressed concerns about the further entrenchment of societal inequities that may be embedded in the underlying data. For example, aggregating consumer data from Fitbits and other devices and using that information to determine who is a good credit risk may embed society’s existing biases. Gensler believes there is not enough public debate around this issue. Under the Fair Credit Reporting Act, people have a right to know why they are denied credit, and he suggested regulators should consider policies to guard against bias and allow for explanations of these decisions.

The chair also sees a risk of conflicts of interest in some of these innovations. Are digital analytics platforms solely optimizing for the benefit of investors, or are they also optimizing for their own revenues? He noted that thirteen years after the bitcoin white paper, no one knows who Satoshi Nakamoto is. There is still an “off-the-grid, cryptographers’ approach” embedded in some of this innovation, he said.

Brummer observed that the rise of retail investing has brought new entrants to the capital markets, particularly people of color and younger investors, and asked Gensler how he balances the risks and opportunities of these platforms. Gensler responded that, while investors are empowered to choose their risks, some of the digital engagement practices may steer users towards certain options via behavioral prompts or gamification. These apps are still subject to concepts of best interest, best execution, and the fiduciary duties of care and loyalty, he added. Two of the SEC’s three core objectives are investor protection and ensuring fair markets, and under either objective, financial inclusion is part of the agency’s remit.

Another issue for regulators as a whole is financial stability and systemic risk. While seeming to evade Brummer’s question about what role the SEC will have in regulating stablecoins, particularly after the CFTC’s large fine against Tether, Gensler highlighted some issues about the tokens more broadly. Stablecoins are a $130 billion asset category that is primarily concentrated on three platforms, raising questions of systemic risk. Gensler said that stablecoins serve not just to facilitate crypto trading but also to avoid fiat banking systems and the regulations that those entail, including anti-money-laundering and tax laws. All financial regulators are assessing stablecoins from the perspective of whether they are going to be like banks, what form their reserves take, and whether they fall within the AML rules, the chair said.