1 May 2021

Artificial Intelligence, Lawyers And Laws Of War

By SYDNEY J. FREEDBERG JR.

WASHINGTON: “My entire career, I’ve stood around a mapboard and the lawyer’s always right there,” said Gen. Mike Murray, who commanded troops in Afghanistan and Iraq. But in a highly automated future war of long-range missiles, swarming robots, and sensor jamming, warned the head of Army Futures Command, “you’re not going to have 30 seconds to stand around a mapboard and make those decisions.”

“Back when I was a brigade commander, even when I was commander of the Third Infantry Division in Afghanistan,” Murray recalled, “life and death decisions were being made just about every day, and it usually was around either [a] mapboard or some sort of digital display.” Along with the staff officers for intelligence, operations and fire support, he said, one of a handful of “key people standing around that mapboard” was the command’s lawyer, its Staff Judge Advocate.

“The lawyer always got a say,” the general went on. “Is this a viable course of action, given the law of armed conflict?… Is this a legal response? And usually those discussions would take some time. I think in the future the opportunities to get people around the mapboard and have a detailed discussion — to include discussions about the legality of the actions you’re contemplating as a commander — will be few and far between.”

How will the Army solve this problem? Gen. Murray raised the question while addressing a West Point-Army Futures Command conference on the law of future war, but he didn’t provide an answer. The speakers who followed didn’t seem to have an answer, either.

That’s a growing problem for the Army, which prides itself on meticulous adherence to federal law, Pentagon regulation, and professional military ethics, even in the most extreme conditions. And it’s particularly tricky when artificial intelligence is involved, because AI often operates on timescales much faster than an individual human brain can follow, let alone the speed at which a formal staff process moves.

In its Project Convergence wargames last fall, Murray noted, the Army already used AI to detect potential targets in satellite images, then move that targeting data to artillery batteries on the ground in “tens of seconds,” as opposed to the “tens of minutes” the traditional call-for-fires process takes. But there was always a human in the loop, he emphasized, approving the computer-generated order before it could be transmitted to the guns.
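None of the Army’s actual software is public, so the sketch below is purely illustrative (every name and field in it is hypothetical). But the human-in-the-loop pattern Murray describes amounts to holding each machine-generated fire mission until a soldier explicitly approves it before anything is transmitted to the guns:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class FireMission:
    """A machine-generated call for fire awaiting human review (hypothetical fields)."""
    target_id: str
    grid: str          # reported target location
    confidence: float  # detector's confidence in the target classification
    source: str        # e.g. "satellite-imagery detector"

def route_fire_mission(mission: FireMission,
                       human_approves: Callable[[FireMission], bool]) -> bool:
    """Transmit a mission to the firing battery only after a human approves it.

    `human_approves` stands in for the soldier in the loop: the AI can propose
    targets in tens of seconds, but nothing goes to the guns without that decision.
    """
    if not human_approves(mission):
        return False  # rejected or deferred by the operator
    transmit_to_battery(mission)  # hypothetical downstream transmission step
    return True

def transmit_to_battery(mission: FireMission) -> None:
    print(f"Transmitting mission for {mission.target_id} at {mission.grid}")
```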

That may not be possible with every type of target, however. Imagine, Murray said, a swarm of a hundred or more drones is inbound, flying low and jamming US sensors so it can evade detection until the last moment. “Is it within a human’s ability to pick out which ones need to be engaged first and then make 100 individual engagement decisions?” he asked. “Is it even necessary to have a human in the loop, if you’re talking about effects against [i.e. attacking] an unmanned platform?”

For fast-moving, unmanned targets like drones or missiles, it may be necessary to take the human out of the loop and let the automation open fire against targets that meet pre-set criteria – “and in many ways we do this already,” Murray said. That’s true for the Counter-Rocket, Artillery, & Mortar (C-RAM) system, derived from the Navy’s Phalanx anti-missile gun and used to protect many Forward Operating Bases. Automated fire is even an available mode on Patriot missiles, he said. (In fact, Patriot batteries on automated mode mistakenly killed three US and British pilots in 2003).

“Where I draw the line – and this is, I think, well within our current policies – [is], if you’re talking about a lethal effect against another human, you have to have a human in that decision-making process,” Murray said. That’s the model used at Project Convergence last fall. And it’s official Defense Department policy.
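As a minimal sketch of where that line falls (the target categories and confidence threshold here are invented purely for illustration), the rule amounts to a gate: automation may engage only unmanned targets that match pre-set criteria, and anything that could mean a lethal effect against a person goes back to a human decision-maker.

```python
from enum import Enum, auto

class TargetKind(Enum):
    UNMANNED_AIRCRAFT = auto()   # e.g. a drone in an inbound swarm
    INBOUND_MUNITION = auto()    # e.g. a rocket, artillery round or mortar
    MANNED_PLATFORM = auto()
    PERSONNEL = auto()

# Pre-set criteria (illustrative only): target kinds an automated system
# may engage without a human approving each individual shot.
AUTO_ENGAGE_ALLOWED = {TargetKind.UNMANNED_AIRCRAFT, TargetKind.INBOUND_MUNITION}

def may_engage_autonomously(kind: TargetKind, confidence: float,
                            min_confidence: float = 0.99) -> bool:
    """Allow automated engagement only of high-confidence, unmanned targets.

    Any target that could involve a lethal effect against a human, and any
    low-confidence classification, is referred to a human decision-maker.
    """
    return kind in AUTO_ENGAGE_ALLOWED and confidence >= min_confidence
```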

Machines and humans working together will be slower than autonomous machines, but they will be faster than unaided humans, and potentially more accurate than either.

In the Cold War, Murray recalled, tank gunners trained on flash cards of NATO and Warsaw Pact equipment. When a young soldier identified friend or foe correctly 80 percent of the time, they were deemed battle-ready. After decades of AI advances, “we have algorithms today on platforms we’re experimenting with that can get to about 98.8 percent accuracy,” he said, “so actually artificial intelligence has the ability to make us more safe and make better decisions.”

But, “that algorithm just makes a recommendation,” Murray emphasized. “The trigger pull is up to that kid behind the cannon.”
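The 98.8 percent figure is a classification accuracy, not a firing decision. In rough terms (the interface and model call below are assumptions, not the Army’s actual system), the aided-target-recognition software surfaces an identification and its confidence as advice only, and the engagement decision stays with the gunner:

```python
def recommend_identification(model, sensor_frame) -> dict:
    """Return the recognition model's best guess as advisory information only.

    `model.classify` stands in for whatever aided-target-recognition model
    runs on the platform (a hypothetical API). Its output is displayed to
    the gunner alongside the raw sensor picture; the trigger pull stays human.
    """
    label, confidence = model.classify(sensor_frame)
    return {"recommendation": label, "confidence": confidence}
```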

The hard part, though, is getting soldiers to trust the AI enough to use it in combat – but not enough to mindlessly click “okay” to the computer’s every recommendation, a potentially deadly phenomenon known as automation bias.
