Hal Brands
J. Robert Oppenheimer was once a household name in America, thanks to his work on the atomic weapons that helped save humanity in World War II and have terrified it ever since. His reputation as a brilliant scientist who was tormented by the dilemmas of the nuclear age is now enjoying a Hollywood-inspired renaissance — and his story has lessons for how democratic societies should deal with another transformative technology: artificial intelligence.
Unfortunately, we’re at risk of getting those lessons wrong.
Oppenheimer and his colleagues built the atomic bomb because almost nothing could have been worse than the Nazis winning World War II. By 1950, however, Oppenheimer opposed building the hydrogen bomb — which was orders of magnitude more powerful than the earliest atomic bombs — because he believed the tools of the Cold War had become more dangerous than those of America’s enemy. “If one is honest,” he predicted, “the most probable view of the future is that of war, exploding atomic bombs, death, and the end of most freedom.”
Oppenheimer lost the H-bomb debate, which eventually led to his loyalty being questioned and his security clearance being revoked. That coda aside, the parallels are obvious today.
Rapid-fire innovation in AI is ushering in another technological revolution. Once again, leading scientists, engineers and innovators argue it is simply too dangerous to unleash this technology on a rivalrous world. In March, prominent researchers and technologists called for a moratorium on AI development. In May, hundreds of experts wrote that AI poses a “risk of extinction” comparable to that of nuclear war. Geoffrey Hinton, a man as prominent in AI as Oppenheimer was in theoretical physics, resigned from his post at Google to warn of the “existential risk” ahead.
The sentiment is understandable. Leaving aside the prospect of killer robots, AI — like most technologies — will change the world for good (better health care) and for ill (more disinformation, helping terrorists build chemical weapons). Yet the solutions that Oppenheimer offered in his day, and that some of his successors offer today, aren’t really solutions at all.
Oppenheimer was right that thermonuclear arms were awful, civilization-shattering weapons. He was wrong that the answer was simply not to build them. We now know that Stalin’s Soviet Union had already decided to create its own hydrogen bomb at the time Washington was debating the issue in 1950. Had the US offered to forgo the development of that weapon, Soviet scientist Andrei Sakharov later acknowledged, Stalin would have moved to “exploit the adversary’s folly at the earliest opportunity.”
A world in which the Soviets had the most advanced thermonuclear weapons would not have been better or safer. Moscow would have possessed powerful leverage for geopolitical blackmail, which is just what Stalin’s successor, Nikita Khrushchev, attempted in the late 1950s, when it seemed that the Soviets had surged ahead in long-range missiles.
The US government did eventually take Oppenheimer’s advice, in a limited way: It negotiated arms control agreements that restricted the number and types of nuclear weapons the superpowers possessed, and the ways in which countries could test them. Yet the US was most successful in securing mutual restraint once it had shown it would deny the Soviet Union a unilateral advantage.
Now the US is at the beginning of another long debate in which issues of national advantage are mingled with concern for the common good. It is entirely possible the world will ultimately need some multilateral regime to control AI’s underlying technology or most dangerous applications. US officials are even quietly hopeful that Moscow and Beijing will be willing to regulate technologies that could disrupt their societies as profoundly as they test the democracies.
Between now and then, though, the US surely does not want to find itself in a position of weakness because the People’s Liberation Army has mastered the next revolution in military affairs, or because China and Russia are making the most of AI — to better control their populations and more effectively diffuse their influence globally, perhaps — and the democracies aren’t.
“AI technologies will be a source of enormous power for the companies and countries that harness them,” reads a report issued in 2021 by a panel led by former Google CEO Eric Schmidt and former Deputy Secretary of Defense Robert Work. As during the nuclear age, democracies must first address the danger that their rivals will asymmetrically exploit new technologies before they address the common dangers those technologies pose.
The president who decided to build the hydrogen bomb seven decades ago understood as much. “No one wants to use it,” Harry Truman remarked. “But … we have got to have it if only for bargaining purposes with the Russians.” In the current era of technological dynamism and intense global rivalry, America needs new Oppenheimers, but it probably needs new Trumans more.
Elsewhere in Bloomberg Opinion:
- The Oppenheimer Debate Is Different in Japan: Gearoid Reidy
- ‘Oppenheimer’ and the New Age of Nuclear Terror: Max Hastings
- What the AI ‘Extinction’ Warning Gets Wrong: Tyler Cowen