Hadrien Pouget
European parliamentarians are considering additions to the draft Artificial Intelligence (AI) Act's lists of prohibited and high-risk applications.
The AI Act’s main thrust is to require developers of high-risk applications to document and test their systems and take other safety measures. Parliament is poised to rope in more applications than initially proposed, including broad categories such as systems “likely to influence democratic processes like elections” and “General Purpose AIs” such as OpenAI’s ChatGPT, which can be integrated into many different applications.
Big Tech is predictably resistant, and the US government has voiced concern. An increase in scope could heighten broader transatlantic tensions surrounding tech regulation. So far, Washington has remained relatively quiet as Europe clamps down on tech: through the Trade and Technology Council, Washington and Brussels have focused on collaboration and expressed a desire to harmonize their regulatory approaches to AI.
A broadening of the AI Act could test this cooperative spirit. US companies and regulators were already worried that the AI Act was trying to tackle too many applications and would become ineffective and burdensome. US actors are also sensitive because the EU’s recent Digital Services Act and Digital Markets Act were perceived as unjustly targeting US companies. In this environment, a broader AI Act risks losing the relatively neutral appeal it has enjoyed until now.
A looming collision can and must be avoided. US critics should recognize that the AI Act is not the “one-size-fits-all” solution it is often imagined to be. While the Act attaches a single set of generic requirements to high-risk AI systems, those requirements could, and should, be adapted to different applications: “appropriate levels” of accuracy or robustness would be customized to each high-risk context. This allows flexibility in enforcement and potential alignment with international standards, an area the US and EU have already identified for cooperation. Recent AI Act proposals also offer opt-out mechanisms for companies that do not believe their AI systems pose any risk.
US attitudes are shifting in favor of regulation. Senate Majority Leader Chuck Schumer has announced an ambition to put together an AI policy framework, as ChatGPT’s popularity has supercharged the discussion. While details remain scarce, Schumer’s statement suggests some unification of requirements for AI systems around four guardrails: “who,” “where,” “how,” and “protect.” At the same time, the Department of Commerce has launched a request for comment on the development of AI audits and assessments, and the Federal Trade Commission (FTC) has begun issuing pointed warnings about AI systems.
In this light, Washington and Brussels have an opportunity to work together. The US and EU can partner to elaborate the technical standards and other concrete implementation details that will underpin both of their regulatory approaches. The EU, which is ahead of the US in some of these discussions, should invite and encourage this collaboration. Conversely, US lawmakers and federal agencies should take note of the EU’s approach as they develop their own requirements.
The European Parliament still must approve a text, and negotiations are stretching out. EU lawmakers reached a political agreement on April 27, a key committee vote is scheduled for May 11, and a full parliamentary vote is expected in mid-June. Even then, the Parliament’s position will need to be reconciled with that of the Council of the European Union (composed of member states’ relevant ministers). Substantial changes remain on the table.
It is understandable that the EU’s broadening of the AI Act’s scope, or the threat of it, makes Americans nervous. As its domestic conversation about AI risk matures, the US should come to the EU with targeted suggestions. A productive dialogue over AI regulation remains both possible and necessary.
Hadrien Pouget is a visiting research analyst in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. He takes a particular interest in the technical and political challenges faced by those setting AI technical standards, which are set to underpin regulation. Previously, he worked as a research assistant in the computer science department at the University of Oxford.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.