
9 May 2023

The Next Fear on A.I.: Hollywood’s Killer Robots Become the Military’s Tools

David E. Sanger

U.S. national security officials are warning about the potential for the new technology to upend war, cyber conflict and — in the most extreme case — the use of nuclear weapons.

In his 40 years at The Times, David Sanger has written extensively on the intersection of geopolitics, nuclear weapons and arms control. He reports from Washington.

When President Biden announced sharp restrictions in October on selling the most advanced computer chips to China, he sold it in part as a way of giving American industry a chance to restore its competitiveness.

But at the Pentagon and the National Security Council, there was a second agenda: arms control.

If the Chinese military cannot get the chips, the theory goes, it may slow its effort to develop weapons driven by artificial intelligence. That would give the White House, and the world, time to figure out some rules for the use of artificial intelligence in sensors, missiles and cyberweapons, and ultimately to guard against some of the nightmares conjured by Hollywood — autonomous killer robots and computers that lock out their human creators.

Now, the fog of fear surrounding the popular ChatGPT chatbot and other generative A.I. software has made the limiting of chips to Beijing look like just a temporary fix. When Mr. Biden dropped by a White House meeting on Thursday with technology executives who are struggling to limit the risks of the technology, his first comment was: “What you are doing has enormous potential and enormous danger.”

It was a reflection, his national security aides say, of recent classified briefings about the potential for the new technology to upend war, cyber conflict and — in the most extreme case — decision-making on employing nuclear weapons.

But even as Mr. Biden was issuing his warning, Pentagon officials, speaking at technology forums, said they thought a six-month pause in developing the next generations of ChatGPT and similar software would be a mistake: The Chinese won’t wait, and neither will the Russians.

“If we stop, guess who’s not going to stop: potential adversaries overseas,” the Pentagon’s chief information officer, John Sherman, said on Wednesday. “We’ve got to keep moving.”

His blunt statement underlined the tension felt throughout the defense community today. No one really knows what these new technologies are capable of when it comes to developing and controlling weapons, and no one knows what kind of arms control regime, if any, might work.

The foreboding is vague, but deeply worrisome. Could ChatGPT empower bad actors who previously wouldn’t have easy access to destructive technology? Could it speed up confrontations between superpowers, leaving little time for diplomacy and negotiation?

“The industry isn’t stupid here, and you are already seeing efforts to self-regulate,” said Eric Schmidt, the former Google chairman who served as the inaugural chairman of the advisory Defense Innovation Board from 2016 to 2020.

“So there’s a series of informal conversations now taking place in the industry — all informal — about what would the rules of A.I. safety look like,” said Mr. Schmidt, who has written, with former Secretary of State Henry Kissinger, a series of articles and books about the potential of artificial intelligence to upend geopolitics.


The preliminary effort to put guardrails into the system is clear to anyone who has tested ChatGPT’s initial iterations. The bots will not answer questions about how to harm someone with a brew of drugs, for example, or how to blow up a dam or cripple nuclear centrifuges, all operations the United States and other nations have engaged in without the benefit of artificial intelligence tools.

But those blacklists of actions will only slow misuse of these systems; few think they can completely stop such efforts. There is always a hack to get around safety limits, as anyone who has tried to turn off the urgent beeps on an automobile’s seatbelt warning system can attest.

Though the new software has popularized the issue, it is hardly a new one for the Pentagon. The first rules on developing autonomous weapons were published a decade ago. The Pentagon’s Joint Artificial Intelligence Center was established five years ago to explore the use of artificial intelligence in combat.

Some weapons already operate on autopilot. Patriot missiles, which shoot down missiles or planes entering a protected airspace, have long had an “automatic” mode. It enables them to fire without human intervention when overwhelmed with incoming targets faster than a human could react. But they are supposed to be supervised by humans who can abort attacks if necessary.


The assassination of Mohsen Fakhrizadeh, Iran’s top nuclear scientist, was conducted by Israel’s Mossad using an autonomous machine gun that was assisted by artificial intelligence, though there appears to have been a high degree of remote control. Russia said recently it has begun to manufacture — but has not yet deployed — its undersea Poseidon nuclear torpedo. If it lives up to the Russian hype, the weapon would be able to travel across an ocean autonomously, evading existing missile defenses, to deliver a nuclear weapon days after it is launched.

So far there are no treaties or international agreements that deal with such autonomous weapons. In an era when arms control agreements are being abandoned faster than they are being negotiated, there is little prospect of such an accord. But the kind of challenges raised by ChatGPT and its ilk are different, and in some ways more complicated.

In the military, A.I.-infused systems can speed up the tempo of battlefield decisions to such a degree that they create entirely new risks of accidental strikes, or decisions made on misleading or deliberately false alerts of incoming attacks.

“A core problem with A.I. in the military and in national security is how do you defend against attacks that are faster than human decision-making, and I think that issue is unresolved,” Mr. Schmidt said. “In other words, the missile is coming in so fast that there has to be an automatic response. What happens if it’s a false signal?”

The Cold War was littered with stories of false warnings — once because a training tape, meant to be used for practicing nuclear response, was somehow put into the wrong system and set off an alert of a massive incoming Soviet attack. (Good judgment led to everyone standing down.) Paul Scharre, of the Center for a New American Security, noted in his 2018 book “Army of None” that there were “at least 13 near use nuclear incidents from 1962 to 2002,” which “lends credence to the view that near miss incidents are normal, if terrifying, conditions of nuclear weapons.”

For that reason, when tensions between the superpowers were a lot lower than they are today, a series of presidents tried to negotiate building more time into nuclear decision making on all sides, so that no one rushed into conflict. But generative A.I. threatens to push countries in the other direction, toward faster decision-making.

The good news is that the major powers are likely to be careful — because they know what the response from an adversary would look like. But so far there are no agreed-upon rules.

Anja Manuel, a former State Department official and now a principal in the consulting group Rice, Hadley, Gates and Manuel, wrote recently that even if China and Russia are not ready for arms control talks about A.I., meetings on the topic would result in discussions of what uses of A.I. are seen as “beyond the pale.”

Of course, the Pentagon will also worry about agreeing to many limits.

“I fought very hard to get a policy that if you have autonomous elements of weapons, you need a way of turning them off,” said Danny Hillis, a computer scientist who was a pioneer in parallel computers that were used for artificial intelligence. Mr. Hillis, who also served on the Defense Innovation Board, said that Pentagon officials pushed back, saying, “If we can turn them off, the enemy can turn them off, too.”

The bigger risks may come from individual actors, terrorists, ransomware groups or smaller nations with advanced cyber skills — like North Korea — that learn how to clone a smaller, less restricted version of ChatGPT. And they may find that the generative A.I. software is perfect for speeding up cyberattacks and targeting disinformation.

Tom Burt, who leads trust and safety operations at Microsoft, which is speeding ahead with using the new technology to revamp its search engines, said at a recent forum at George Washington University that he thought A.I. systems would help defenders detect anomalous behavior faster than they would help attackers. Other experts disagree. But he said he feared artificial intelligence could “supercharge” the spread of targeted disinformation.

All of this portends a new era of arms control.

Some experts say that since it would be impossible to stop the spread of ChatGPT and similar software, the best hope is to limit the specialty chips and other computing power needed to advance the technology. That will doubtless be one of many different arms control plans put forward in the next few years, at a time when the major nuclear powers, at least, seem uninterested in negotiating over old weapons, much less new ones.
