WILL KNIGHT
SIX MONTHS AGO this week, many prominent AI researchers, engineers, and entrepreneurs signed an open letter calling for a six-month pause on development of AI systems more capable than OpenAI’s latest GPT-4 language generator. It argued that AI is advancing so quickly and unpredictably that it could eliminate countless jobs, flood us with disinformation, and—as a wave of panicky headlines reported—destroy humanity.
This is an edition of WIRED's Fast Forward newsletter, a weekly dispatch from the future by Will Knight, exploring AI advances and other technology set to change our lives.
As you may have noticed, the letter did not result in a pause in AI development, or even a slowdown to a more measured pace. Companies have instead accelerated their efforts to build more advanced AI.
Elon Musk, one of the most prominent signatories, didn’t wait long to ignore his own call for a slowdown. In July he announced xAI, a new company he said would seek to go beyond existing AI and compete with OpenAI, Google, and Microsoft. And many Google employees who also signed the open letter have stuck with their company as it prepares to release an AI model called Gemini, which boasts broader capabilities than OpenAI’s GPT-4.
WIRED reached out to more than a dozen signatories of the letter to ask what effect they think it had and whether their alarm about AI has deepened or faded in the past six months. None who responded seemed to have expected AI research to really grind to a halt.
“I never thought that companies were voluntarily going to pause,” says Max Tegmark, an astrophysicist at MIT who leads the Future of Life Institute, the organization behind the letter—an admission that some might argue makes the whole project look cynical. Tegmark says his main goal was not to pause AI but to legitimize conversation about the dangers of the technology, up to and including the fact that it might turn on humanity. The result “exceeded my expectations,” he says.
The responses to my follow-up also show the huge diversity of concerns experts have about AI—and that many signers aren’t actually obsessed with existential risk.
Lars Kotthoff, an associate professor at the University of Wyoming, says he wouldn’t sign the same letter today because many who called for a pause are still working to advance AI. “I’m open to signing letters that go in a similar direction, but not exactly like this one,” Kotthoff says. He adds that what concerns him most today is the prospect of a “societal backlash against AI developments, which might precipitate another AI winter” by quashing research funding and making people spurn AI products and tools.
“In the age of the internet and Trump, I can more easily see how AI can lead to destruction of human civilization by distorting information and corrupting knowledge,” says Richard Kiehl, a professor working on microelectronics at Arizona State University.
“Are we going to get Skynet that’s going to hack into all these military servers and launch nukes all over the planet? I really don’t think so,” says Stephen Mander, a PhD student working on AI at Lancaster University in the UK. He does see widespread job displacement looming, however, and calls it an “existential risk” to social stability. But he also worries that the letter may have spurred more people to experiment with AI and acknowledges that he didn’t act on the letter’s call to slow down. “Having signed the letter, what have I done for the last year or so? I’ve been doing AI research,” he says.
Despite the letter’s failure to trigger a widespread pause, it did help propel the idea that AI could snuff out humanity into a mainstream topic of discussion. It was followed by a public statement signed by the leaders of OpenAI and Google’s DeepMind AI division that compared the existential risk posed by AI to that of nuclear weapons and pandemics. Next month, the British government will host an international “AI safety” conference, where leaders from numerous countries will discuss possible harms AI could cause, including existential threats.
Perhaps AI doomers hijacked the narrative with the pause letter, but the unease around the recent, rapid progress in AI is real enough—and understandable. A few weeks before the letter was written, OpenAI had released GPT-4, a large language model that gave ChatGPT new power to answer questions and caught AI researchers by surprise. As the potential of GPT-4 and other language models has become more apparent, surveys suggest that the public is becoming more worried than excited about AI technology. The obvious ways these tools could be misused are spurring regulators around the world into action.
The letter’s demand for a six-month moratorium on AI development may have created the impression that its signatories expected bad things to happen soon. But for many of them, a key theme seems to be uncertainty—around how capable AI actually is, how rapidly things may change, and how the technology is being developed.
“Many AI skeptics want to hear a concrete doom scenario, but to me, the fact that it is difficult to imagine detailed, concrete scenarios is kind of the point—it shows how hard it is for even world-class AI experts to predict the future of AI and how it will impact a complex world,” says Scott Niekum, a professor at the University of Massachusetts Amherst who works on AI risk and signed the letter. “And when you combine that prediction difficulty with lagging progress in safety, interpretability, and regulation, I think that should raise some alarms.”
Uncertainty is hardly proof that humanity is in danger. But the fact that so many people working in AI still seem unsettled may be reason enough for the companies developing AI to take a more thoughtful—or slower—approach.
“Many people who would be in a great position to take advantage of further progress would now instead prefer to see a pause,” says signee Vincent Conitzer, a professor who works on AI at CMU. “If nothing else, that should be a signal that something very unusual is up.”