Luke Hogg
Last November, the research nonprofit OpenAI unleashed ChatGPT, its chatbot powered by artificial intelligence (AI), on the world. Mere months before, conversations about AI were relegated to academic conferences and science fiction conventions. But as ChatGPT exploded to become the fastest-growing consumer application in history, AI rapidly became a kitchen table issue. Now, policymakers are shining a spotlight on the industry and asking the question: how much regulation is necessary to mitigate potential risks without stifling innovation?
From government reports to briefings and hearings to legislation, AI is the topic du jour on Capitol Hill as lawmakers attempt to answer this question. While legislative proposals regarding AI vary widely, the ethos behind such proposals can generally be grouped into two categories. The first consists of proposals aimed primarily at mitigating potential risks of AI, which typically take a more heavy-handed approach to regulation in the name of consumer protection. The second takes a broader view of the AI ecosystem, attempting to foster innovation and global competitiveness with a more light-touch regulatory regime.
While both approaches are well-intentioned, the latter, with its focus on innovation and competitiveness, holds greater promise. After all, the United States is not the only country developing AI systems, and amid the Great Tech Rivalry it is essential that we remain globally competitive in cutting-edge technologies. If Washington is too heavy-handed in regulating AI, it risks becoming an innovation desert, like Europe.
The Heavy Hand…
The heavy-handed approach is typified by Representative Ted Lieu (D-CA). As one of the very few members of Congress holding a degree in computer science, Rep. Lieu has been one of the most vocal lawmakers on the question of AI regulation. Just before introducing the first piece of federal legislation written largely by an AI, Rep. Lieu opined in the New York Times:
The rapid advancements in A.I. technology have made it clear that the time to act is now to ensure that A.I. is used in ways that are safe, ethical and beneficial for society. Failure to do so could lead to a future where the risks of A.I. far outweigh its benefits…. What we need is a dedicated agency to regulate A.I.
Though Rep. Lieu admits that his proposal has little chance of actually passing through Congress this session—and concedes that the first step toward an AI regulator is a “study and report” approach—Lieu and many of his colleagues are hyperfocused on heading off consumer harm that largely remains theoretical. Such an approach seeks to create a regulatory regime based on what these technologies “could” or “might” do in the future.
This prospective framework is antithetical to rapid innovation. For evidence, we need only look to Europe.
Brussels has a long tradition of onerously regulating technologies in the name of mitigating risks to consumers. Take, for instance, the European Union’s comprehensive data privacy framework, the General Data Protection Regulation (GDPR). The GDPR has three primary objectives: protecting consumers with regard to the processing of personal data, protecting the fundamental right to the protection of personal data, and ensuring the free movement of personal data within the Union. To differing degrees, the GDPR arguably succeeded at the first two of these goals; the legislation created strong consumer protections around the collection and processing of personal data.
However, the GDPR has mostly failed to achieve its third goal of ensuring the free movement of data. Data flows seamlessly across physical borders, but it does not cross regulatory borders nearly as easily. Tech platforms and applications have had a difficult time complying with the GDPR, which in turn has restricted the voluntary, free flow of personal information rather than ensured it.
According to one study that examined over 4 million software applications, the implementation of the GDPR “induced the exit of about a third of available apps.” Perhaps even worse, the GDPR has led to a dearth of technological innovation throughout Europe: the same study found that the market entry of new applications halved following the regulation’s implementation.
The European Parliament is now developing legislation intended to be “the world’s first comprehensive AI law.” While this proposed EU AI Act is not a one-size-fits-all policy akin to the GDPR and other European tech regulations, it would create strict rules for any system utilizing AI technology. Such strict rules around new applications of AI systems, imposed regardless of concrete, provable harms, are likely to strangle the little commercial innovation around AI that remains in Europe.
…versus the Light Touch
The United States cannot afford to follow in Europe’s footsteps and implement heavy-handed regulations that hamper innovation for the sake of mitigating unproven harms. With China leading the way in both AI innovation and regulation, we must be deliberate in our own approach to both. AI systems certainly present novel and unique risks in practically every aspect of human life. But these new technologies also present novel and unique opportunities that should not be handicapped by a heavy-handed approach driven by moral panic.
As two of my colleagues recently wrote in American Affairs, getting AI regulation right “requires a commonsense approach that can account for both the mind-bending dynamics of cutting edge AI systems, while also right-sizing the risks of AI regulations and AI gone wrong.” While Rep. Lieu and his colleagues in the “sky is falling” camp go too far in the direction of onerous, European-style tech regulation, there is another camp that recognizes the importance of a light-touch approach to supporting domestic innovation and global competitiveness.
A prime example of this is recently introduced legislation from Senators Michael Bennet (D-CO), Mark Warner (D-VA), and Todd Young (R-IN). Based on the American Technology Leadership Act from the last Congress, this revised proposal would establish a new Office of Global Competition Analysis. The purpose of this new office would be to assess America’s global competitiveness in strategic technologies and provide policy recommendations on ways to protect and improve that competitiveness. As Sen. Bennet put it, the United States cannot afford to “lose our competitive edge in strategic technologies like semiconductors, quantum computing, and artificial intelligence to competitors like China.”
This second camp, typified by Sen. Bennet and his colleagues, is less reactive and more constructive: it takes global competition seriously and recognizes that heavy-handed regulations would hinder innovation and hamper the nation’s ability to keep pace with AI advancements. To be clear, these lawmakers are not ignoring the real risks presented by AI systems. Rather, they are putting such risks into a global perspective and making a better-informed calculus about the proper level of regulation.
Maintaining American Innovation
By fostering an environment that encourages both domestic and global competition around AI technologies, and by providing a regulatory framework that promotes responsible AI use, the United States can maintain its leadership in this crucial field. Light-touch regulation focused on global competitiveness would encourage investment, attract top AI talent, and enable American companies to lead in AI development. With room for experimentation and adaptability, the United States can remain at the forefront of AI innovation, reaping economic and societal benefits while maintaining its competitive edge on the global stage.
Luke Hogg is the director of outreach at the Foundation for American Innovation where his work focuses on the intersection of emerging technologies and public policy. He is also an innovation fellow at Young Voices. You can follow him on Twitter at @LEHogg.