21 June 2024

Taking Further Agency Action on AI

Will Dobbs-Allsopp, Reed Shaw, Anna Rodriguez, Todd Phillips, Rachael Klarman, Adam Conner, Nicole Alvarez & Ben Olinsky

In response to the surge of attention, excitement, and fear surrounding AI developments since the release of OpenAI’s ChatGPT in November 2022,1 governments worldwide2 have rushed to address the risks and opportunities of AI.3 In the United States, policymakers have sharply disagreed about the necessity and scope of potential new AI legislation.4 By contrast, stakeholders ranging from government officials and advocates to academics and companies seem to agree that it is essential for policymakers to use existing laws to address those risks and opportunities where possible, especially in the absence of congressional action.5

What this means in practice, however, remains murky. What are the statutory authorities and policy levers available to the federal government in the context of AI? And how should policymakers use them? To date, there has been no comprehensive survey to map the federal government’s existing ability to impose guardrails on the use of AI across the economy. In 2019, the Trump administration issued Executive Order 13859,6 which directed agencies to “review their [regulatory] authorities relevant to applications of AI.”7 Subsequent 2020 OMB guidance further required: “The agency plan must identify any statutory authorities specifically governing agency regulation of AI applications, as well as collections of AI-related information from regulated entities.”8 Unfortunately, it appears the U.S. Department of Health and Human Services (HHS) was the only agency to respond in detail.9
