KHARI JOHNSON
ChatGPT has stoked new hopes about the potential of artificial intelligence—but also new fears. Today the White House joined the chorus of concern, announcing it will support a mass hacking exercise at the Defcon security conference this summer to probe generative AI systems from companies including Google.
The White House Office of Science and Technology Policy also said that $140 million will be diverted toward launching seven new National AI Research Institutes focused on developing ethical, transformative AI for the public good, bringing the total number to 25 nationwide.
The announcement came hours before a meeting on the opportunities and risks presented by AI between US vice president Kamala Harris and executives from Google and Microsoft, as well as the startups Anthropic and OpenAI, which created ChatGPT.
The White House AI intervention comes as appetite for regulating the technology is growing around the world, fueled by the hype and investment sparked by ChatGPT. In the parliament of the European Union, lawmakers are negotiating final updates to a sweeping AI Act that will restrict and even ban some uses of AI, including adding coverage of generative AI. Brazilian lawmakers are also considering regulation geared toward protecting human rights in the age of AI. Draft generative AI regulation was announced by China’s government last month.
In Washington, DC, last week, Democratic senator Michael Bennet introduced a bill that would create an AI task force focused on protecting citizens' privacy and civil rights. Also last week, four US regulatory agencies including the Federal Trade Commission and Department of Justice jointly pledged to use existing laws to protect the rights of American citizens in the age of AI. This week, the office of Democratic senator Ron Wyden confirmed plans to try again to pass a law called the Algorithmic Accountability Act, which would require companies to assess their algorithms and disclose when an automated system is in use.
Arati Prabhakar, director of the White House Office of Science and Technology Policy, said in March at an event hosted by Axios that government scrutiny of AI was necessary if the technology was to be beneficial. “If we are going to seize these opportunities we have to start by wrestling with the risks,” Prabhakar said.
The White House–supported hacking exercise designed to expose weaknesses in generative AI systems will take place this summer at the Defcon security conference. Thousands of participants, including hackers and policy experts, will be asked to explore how generative models from companies including Google, Nvidia, and Stability AI align with the Biden administration’s AI Bill of Rights announced in 2022 and a National Institute of Standards and Technology risk management framework released earlier this year.
Points will be awarded under a capture-the-flag format to encourage participants to test for a wide range of bugs or unsavory behavior from the AI systems. The event will be carried out in consultation with Microsoft, nonprofit SeedAI, the AI Vulnerability Database, and Humane Intelligence, a nonprofit created by data and social scientist Rumman Chowdhury. She previously led a group at Twitter working on ethics and machine learning, and hosted a bias bounty that uncovered bias in the social network's automatic photo-cropping algorithm.
The AI Now Institute, a nonprofit that has advised lawmakers and federal agencies on AI regulation, argued in a report released last month that because systems like ChatGPT can be fine-tuned for a range of uses, they deserve more regulatory scrutiny than previous forms of AI.
Sarah Myers West, managing director of the AI Now Institute and a coauthor of that report, says the renewed interest in AI by federal regulators is welcome. But she says it remains to be seen how meaningful their actions will be. “We just can’t afford to confuse the right noises for enforceable regulation right now,” West says.
She is also wary of how tech companies seeking profits with AI appear to be closely involved with the White House’s new attention to the technology. “We would be remiss to take an approach that leaves it to them to lead the conversation on what constitutes trustworthy and responsible innovation,” she says. “It’s for regulators and the broader public to define what responsible development of technology looks like.”
At a briefing yesterday, a White House official said that companies developing AI should be partners in ensuring the technology is used responsibly, adding that businesses also have a responsibility to make sure products are safe before they’re deployed in public.
Beyond companies developing AI for profit, federal agencies have some work to do on their own use of AI. A December 2022 study from Stanford University found that virtually no federal agencies responded to a Trump-era executive order to provide AI plans to the public and only around half have shared an inventory of how they use AI. The White House Office of Management and Budget will release new guidelines for federal agency use of AI in the coming months.