Michael Frank
It is uncontroversial that the extinction of humanity is worth taking seriously. Perhaps that is why hundreds of artificial intelligence (AI) researchers and thought leaders signed on to the following statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The Statement on AI Risk and the collective gravitas of its signatories have focused the attention of leaders around the world on regulating AI, in particular generative AI systems like OpenAI’s ChatGPT. The most advanced regulatory effort is the European Union’s, whose parliament recently passed its version of the Artificial Intelligence Act (AI Act). The AI Act’s proponents have suggested that rather than extinction, discrimination is the greater threat. To that end, the AI Act is primarily an exercise in risk classification, through which European policymakers are judging applications of AI as high-, limited-, or minimal-risk, while also banning certain applications they deem unacceptable, such as cognitive behavioral manipulation; social scoring based on behavior, socioeconomic status, or personal characteristics; and real-time biometric identification by law enforcement. The AI Act also imposes regulatory oversight on “high-risk” applications such as private-sector biometric identification, management of critical infrastructure, and education and vocational training. It is a comprehensive package, which is also its main weakness: classifying risk through cross-sectoral legislation will do little to address existential risk or AI catastrophes while limiting the ability to harness the benefits of AI, which have the potential to be equally astonishing. What is needed is an alternative regulatory approach that addresses the big risks without sacrificing those benefits.
Given the rapidly changing state of the technology and the nascent but extremely promising AI opportunity, policymakers should embrace a regulatory structure that balances innovation and opportunity with risk. While the European Union does not neglect innovation entirely, the risk-focused approach of the AI Act is incomplete. By contrast, the U.S. Congress appears headed toward such a balance. On June 21, Senate Majority Leader Chuck Schumer gave a speech at CSIS in which he announced his SAFE Innovation Framework for AI. In introducing the framework, he stated that “innovation must be our North Star,” indicating that while new AI regulation is almost certainly coming, Schumer and his bipartisan group of senators are committed to preserving innovation. In announcing the SAFE Innovation Framework, he identified four goals (paraphrased below) that forthcoming AI legislation should achieve:

Security: instilling guardrails to protect the United States against bad actors’ use of AI, while also preserving American economic security by preparing for, managing, and mitigating workforce disruption.
Accountability: promoting ethical practices that protect children, vulnerable populations, and intellectual property owners.
Democratic Foundations: programming algorithms that align with the values of human liberty, civil rights, and justice.
Explainability: transcending the black box problem by developing systems that explain how AI models make decisions and reach conclusions.
Congress has an important role to play in addressing AI’s risks and empowering federal agencies to issue new rules and apply existing regulations where appropriate. Sending a message to the public—and to the world—that the U.S. government is focused on preventing AI catastrophes will inspire the confidence and trust necessary for further technological advancement.
AI is evolving rapidly. Regulators need to develop a framework that addresses risks as they evolve, too, while also fostering potentially transformative benefits. This does not mean policymakers should embrace unregulated AI; there should undoubtedly be guardrails. As Schumer and his colleagues pursue their four goals, they should design regulation with four principles in mind: (1) preventing the establishment of anticompetitive regulatory moats for established companies; (2) focusing on resolving obvious gaps in existing law in ways that assuage concerns about existential risk from AI; (3) ensuring society can reap the benefits of AI; and (4) advancing “quick wins” in sector-specific regulation.
Preventing the establishment of anti-competitive regulatory moats for established companies.
Regulatory solutions should not preclude the development of a competitive AI ecosystem with many players. DeepMind and OpenAI, two of the leading AI companies, are 12 and 7 years old, respectively. They have an edge over the competition today because of the quality of their work. If they retain that competitive position 20 years from now, it should be because of their superior ability to deliver safe and transformative AI, not because regulations have created entrenched monopolies. Entrepreneurship remains at the heart of innovation. Many of the most transformative AI companies in this new era may not yet exist. Today’s technology titans like Facebook, Google, and Netflix were founded decades after ARPANET, the predecessor of the modern internet, first came online, and years after CERN released the World Wide Web software into the public domain in 1993. The Federal Trade Commission (FTC) could clarify guidance on what would constitute anti-competitive mergers and acquisitions of AI companies. An overtly pro-competitive stance from the FTC would help to encourage broad innovation and economic growth.
Focusing on resolving obvious gaps in existing law in ways that assuage concerns about existential risk from AI.
It may seem counterintuitive, but starting with the biggest questions around existential risk is the right way to support the development of trustworthy AI. There are three reasons. First, the issue of existential risk has captured the attention of policymakers worldwide and on both sides of the U.S. political aisle. Schumer’s working group includes Republican senators Todd Young and Mike Rounds along with Democratic senator Martin Heinrich. After the May 16 hearing before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, Darrell M. West of the Brookings Institution noted, “More surprising were the oftentimes bipartisan calls for tougher regulation and greater disclosure of AI utilization. . . . most of the lawmakers agreed there need to be stronger guardrails that safeguard basic human values.” This is a rare political issue that can attract bipartisan support, but that support will erode if policymakers start with issues that lack similar consensus. Second, existential risk, even if small, deserves attention. For the same reason, NASA has spent $150 million per year in recent years to locate and track large asteroids that could threaten Earth; nobody wants humanity to go the way of the dinosaurs. Third, this approach helps prevent overregulation by addressing the most egregious harms first and tailoring further rules where needed. The law is just one aspect of regulation. Norms, markets, and architecture also constrain behavior. As bottom-up complements to legislation and executive rulemaking, they can be equally effective at regulating the development and deployment of AI.
There is a dearth of mature ideas for limiting extinction risk, but that is not a reason to neglect the issue. Majority Leader Schumer’s proposed AI Insight fora, a series of conversations between Congress and thought leaders in the administration, civil society, and the private sector, should focus on developing specific technical safeguards and management process safeguards to mitigate existential risk. For example, Microsoft’s decision to limit the number of inquiries in one conversation with Bing (a decision taken in response to evidence that Bing’s oddest outputs generally came toward the end of long conversations) could inform the development of a similar technical requirement for all companies. Another starting point is the National Institute of Standards and Technology’s AI Risk Management Framework, a nonbinding policy document that, among other best practices, recommends organizational structures to reinforce accountability. Congress could require AI companies above a certain size or risk threshold to appoint a chief AI safety officer. Personnel involved in managing cutting-edge artificial general intelligence (AGI) research at major corporate or government labs could receive certifications for AI safety, similar to the AS9100 quality management standard for aerospace manufacturers or Biosafety Level (BSL) designations for research with infectious pathogens.
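To illustrate how simple such a conversation-length guardrail can be, the sketch below shows a generic per-session turn limit. It is purely illustrative: the class name, default limit, and refusal message are hypothetical and do not reflect Microsoft’s actual implementation or any specific regulatory proposal.

```python
# Illustrative sketch of a per-conversation turn limit of the kind described above.
# All names and values here are hypothetical, not drawn from any vendor's system.

class ConversationTurnLimiter:
    """Caps the number of user turns allowed in a single chat session."""

    def __init__(self, max_turns: int = 10):
        self.max_turns = max_turns
        self.turns = 0

    def allow_next_turn(self) -> bool:
        """Return True if another user turn may be processed in this session."""
        if self.turns >= self.max_turns:
            return False
        self.turns += 1
        return True

    def reset(self) -> None:
        """Start a fresh session, clearing the turn counter."""
        self.turns = 0


if __name__ == "__main__":
    limiter = ConversationTurnLimiter(max_turns=3)
    for prompt in ["hello", "tell me more", "and then?", "one more question"]:
        if limiter.allow_next_turn():
            print(f"model may respond to: {prompt}")
        else:
            print("turn limit reached; please start a new conversation")
```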
Ensuring society can reap the benefits of AI.
While policymakers should take existential risk seriously, they should not lose sight of ensuring that society can reap AI’s enormous potential benefits. It is a delicate balance. An obsession with risk at the expense of innovation is just as important to avoid as a failure to regulate. It is easy to forget, amid the cacophony of doomsayers, that AI can transform government, the economy, and society for the better. From education to healthcare to scientific discovery to advancing the interests of the free world, there is virtually no aspect of modern life that AI cannot improve in some way. AI is already helping to advance research that could yield breakthroughs in nuclear fusion or identify new drugs to treat rare and deadly diseases. The cost of implementing the wrong regulation is sacrificing some of those benefits before we have had a chance to fully appreciate their potential.
Advancing “quick wins” in sector-specific regulation.
The choice is not between banning AI and doing nothing. There is a reasonable alternative that builds upon good work taking place in U.S. federal agencies. While Schumer and his colleagues consider Congress’ role in establishing new guardrails over the next several months, the administration can simultaneously move ahead to exercise existing regulatory authorities in the context of AI. Laws that protect against injury (accidental or intentional, from humans or technology), like the Children’s Online Privacy Protection Act, the Securities Exchange Act, and the Federal Aviation Act, do not evaporate just because an AI algorithm is involved.
Federal agencies should develop sector-specific AI regulation and improve their guidance on the ways in which existing regulation applies. In fact, they should have already done so. In February 2019, President Trump issued Executive Order 13859, titled Maintaining American Leadership in Artificial Intelligence. The order directed federal agencies to develop AI regulatory plans. That work is incomplete. As of December 2022, only 5 of the 41 major agencies had fulfilled that order. Only one, the Department of Health and Human Services (HHS), put forth a thorough plan, which Alex Engler at Brookings describes as “extensively document[ing] the agency’s authority over AI systems. . . . The thoroughness of the HHS’s regulatory plan shows how valuable this endeavor could be for federal agency planning and informing the public if other agencies were to follow in HHS’s footsteps.” Congress should compel agencies to complete this process, with clear timelines and broad goals each agency should seek to address. If the problem is that federal agencies lack a pool of experts who have a clear understanding of how to regulate AI in their subject matter domain, Congress should take measures to strengthen the workforce and expand regulatory capacity as appropriate.
For its part, the executive branch could achieve some “quick wins” on AI regulation. For example:

The Federal Election Commission could issue an advisory opinion to require disclosures of generative AI content in campaign advertising.
The Consumer Product Safety Commission could compel providers of LLMs of a certain size (based on user reach, financial backing, training data scale, model sophistication, or all of the above) to issue disclosures that better inform consumers of the risks of interacting with an LLM and of the model’s limitations. (The EU AI Act sensibly requires generative AI models to disclose that they are not human.)
The Department of Commerce could expand existing AI software export controls to explicitly bar entities from countries such as China, Russia, Iran, and North Korea from accessing U.S.-based LLMs or open-source AI models and data hosted in the United States.
Finally, policymakers should remember that they need not produce the kind of exhaustive, proactive risk remedies that constitute the EU AI Act, as the U.S. judiciary will have a role to play in resolving complex or marginal cases that result from regulatory contradictions or gaps. For example, it is reasonable to assume that if an algorithm hacks into a computer, a legal person—be it the algorithm developer, the natural person instructing the algorithm, or both—would be liable under the Computer Fraud and Abuse Act. Legislators do not have to decide now who specifically should bear liability. Instead, the judiciary can establish a legal precedent through discovery, trial, and appeal.
Ultimately, policymakers have a responsibility to balance their dual roles as guarantors of trust through regulation and guardians of the innovation environment for responsible AI. While there can be tension between those roles, they are not impossible to align. The payoff is an AI ecosystem that captures U.S. strengths in researching and capitalizing on emergent technology without significant sacrifices on safety. Putting up obvious guardrails will help communicate to the public that the government is paying attention to risk while also avoiding regulatory capture or stifling innovation. The U.S. government should resist the false choice between “doing everything” and “doing nothing” and instead seek to define a world-leading framework that balances risks and rewards.
Michael Frank is a senior fellow in the Wadhwani Center for AI and Advanced Technologies at the Center for Strategic and International Studies in Washington, D.C.