Last week, the EU took steps to further flesh out its emerging digital strategy with the release of two white papers, one on AI regulation and the other on a European data strategy. Together, the documents show the tension at the heart of European digital policymaking. The EU is alarmed by the potential harms of technologies like AI and believes that the precautionary principle is the only way to manage those risks. But at the same time, the EU is anxious to capitalize on the opportunities presented by the data economy, and it fears becoming little more than a pawn in the digital geopolitics playing out between the U.S. and China. Whether and how the EU will be able to resolve this tension remains to be seen, though these documents offer our clearest picture yet of how it plans to try.
The first white paper, on promoting “excellence and trust” in artificial intelligence, has so far been notable largely for what it does not include, namely the five-year ban on the use of facial recognition in public spaces that had been part of an earlier draft. In place of a moratorium, the final white paper calls only for a “broad European debate” on the issue, frustrating many digital rights activists who had hoped for a stronger statement against the technology. The public focus on the debate over facial recognition moratoriums, however, risks distracting from the broader issues raised by the document’s framing of AI’s regulatory challenges.
The most important of these issues is the EU’s creation of two distinct regulatory buckets for high- and low-risk AI applications. Under the rules set out in the white paper, high-risk AI systems would face heightened regulatory requirements around the training data they use, how they retain records and data, what information they provide to users, the accuracy and robustness of the systems, and the level of human oversight baked into the process. Most of these requirements would have to be verified through testing before the systems could be deployed. In contrast, low-risk AI systems are exempt from these requirements and would merely be encouraged to participate in a voluntary labeling scheme that awards developers a quality label for meeting certain EU-wide standards and benchmarks.
This bifurcation of AI applications into high- and low-risk may seem appealing to policymakers seeking a straightforward approach to AI regulation, but it fails to capture the sliding scale of AI risk. The only options are “no regulation” or “heavy regulation,” and it is virtually guaranteed that a large number of moderately risky AI systems will end up falling into the high-risk bucket and being subjected to onerous and disproportionate requirements by regulators. If the developers of moderate-risk systems end up struggling to innovate under heavy legal requirements, or eschewing the EU market altogether, it would not only deny European citizens the benefits of those AI technologies, but also set a damaging global precedent for how technology governance can be accomplished. And that would be unfortunate, because the EU’s white paper otherwise serves as a promising template for thinking about ways to address AI risk through regulatory action.
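To make the sliding-scale objection concrete, consider a toy sketch of what a more graduated approach might look like. The factor names, weights, and tier cut-offs below are entirely hypothetical and are not drawn from the white paper; the point is only that risk can be scored along a spectrum and matched to proportionate obligations rather than forced into two buckets.

```python
# Purely hypothetical sketch, not taken from the white paper: one way a graded,
# multi-tier scheme could score AI applications instead of sorting them into
# just two buckets. Factor names, weights, and thresholds are invented to
# illustrate the "sliding scale" argument above.

RISK_FACTORS = {
    "safety_critical_sector": 3,    # e.g., healthcare, transport, policing
    "affects_legal_rights": 3,      # hiring, credit, or benefits decisions
    "fully_automated_decision": 2,  # no human review before the decision takes effect
    "uses_sensitive_data": 1,
}

TIERS = [  # (minimum score, obligations attached to that tier)
    (6, "prior conformity assessment plus ongoing audits"),
    (3, "documentation, data-quality, and human-oversight requirements"),
    (1, "transparency notice and voluntary labeling"),
    (0, "no additional requirements"),
]

def risk_tier(traits):
    """Map an application's traits to a score and a proportionate set of obligations."""
    score = sum(weight for factor, weight in RISK_FACTORS.items() if traits.get(factor))
    for threshold, obligations in TIERS:
        if score >= threshold:
            return score, obligations

# A moderately risky system lands in a middle tier rather than facing either
# no regulation at all or the heaviest possible requirements.
print(risk_tier({"affects_legal_rights": True, "uses_sensitive_data": True}))
# -> (4, 'documentation, data-quality, and human-oversight requirements')
```

The specific numbers are beside the point; what matters is that obligations scale with assessed risk instead of jumping from nothing to everything.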
Many AI deployments will be risky. AI systems are already making decisions about what treatments we receive in hospitals, whether we are hired for a new job, whether the police target us as a suspect in a crime, and whether it’s safe to take that next left at the intersection. These systems will need more than voluntary standards to manage the risks of something going wrong. The EU is not alone in realizing this, and soon countries around the world will be looking for guidance on how they, too, can take steps to address concerns about AI. If the EU’s approach is the only game in town when that realization hits, then we will be left facing the same result as with the GDPR: the global uptake of a regulatory regime due to the lack of any viable alternative.
The U.S. was never able to provide any compelling alternative to the GDPR for privacy protection aside from laissez-faire. The appetite for that approach outside the U.S. was exactly zero, and the U.S. now finds itself preparing to make the same mistake on AI. The Trump Administration’s recent memo on regulating AI proposes only vague principles for regulation and deference to industry preferences. Unless the U.S. can articulate a more powerful vision for AI risk management than this, we may once again find ourselves responding to the global adoption of European rules rather than setting an affirmative vision of our own for technology governance.
In particular, the U.S. has an opportunity to describe a model for risk-based regulation that better captures the spectrum of potential AI benefits and harms. The U.S. can also use the formulation of a more robust regulatory strategy as an opportunity to emphasize the importance of sector-specific work rather than the politically appealing but technically nonsensical urge to regulate AI as a standalone technology. The types of legal requirements the EU’s white paper proposes could serve as a template for a U.S. strategy, but policymakers should view their work as an opportunity to pre-empt potentially misguided interpretations. For instance, legal requirements surrounding accuracy and robustness should not lead to a regulatory regime requiring source code disclosure, a practice that is far less useful for AI risk management than policymakers tend to think and one that could complicate regulatory efforts given pushback from firms concerned about protecting their trade secrets. Instead, the U.S. can emphasize alternative requirements like contextual algorithmic audits and training data review, which do a better job of addressing risks while also keeping tech firms on board.
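As a rough illustration of why audits can substitute for source code disclosure, the sketch below checks a hiring model’s decisions for disparate impact using only its outputs and applicant demographics. The function names, sample data, and the 0.8 “four-fifths” heuristic are assumptions made for the example, not anything the white paper prescribes.

```python
# Illustrative sketch only: a black-box audit of a hiring model's decisions,
# assuming access to its outputs and applicant demographics but not its source
# code. Names, sample data, and the 0.8 heuristic are hypothetical choices.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Share of positive decisions (1 = advanced) for each demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    """Ratio of the lowest to the highest group selection rate.

    Ratios well below 1.0 (for example, under the common 0.8 "four-fifths"
    heuristic) would flag the system for closer review.
    """
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Audit a batch of decisions logged from a deployed screening model.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]               # model outputs
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]  # applicant groups
print(disparate_impact_ratio(decisions, groups))   # 0.25 / 0.75 ≈ 0.33
```

Nothing in a check like this requires the vendor to hand over model internals, which is the practical advantage of audits over source code disclosure.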
The EU’s emerging regulatory approach to AI is a promising start, but could set a concerning global precedent if allowed to move forward unchallenged. The world needs a strong example of what technology governance should look like in the age of big data and artificial intelligence, but that governance structure must be capable of dynamically balancing benefits and risks as new technologies emerge. Algorithms used to screen job applicants may be sexist today, but in the future, improved algorithms will free us from misogynistic hiring managers. That is what regulation should be aiming for: an approach that can manage risks today while supporting the work that will eliminate those risks tomorrow. The U.S. has an opportunity to build towards this future, but it will first have to learn from the EU that shaping the future requires action.
Click here to see part two of this series, on the European data strategy [forthcoming]