By DAVE MAKICHUK
It’s a simple fact, says General John “Mike” Murray: we’re going to have to learn to trust artificial intelligence on the battlefield.
And that means the rules governing human control over artificial intelligence might need to be relaxed.
Speaking from Austin, Texas, at The Future Character of War and the Law of Armed Conflict online event, Murray sketched a future battle scenario shaped by the rapid advance of artificial intelligence in the US military, and the ethical challenges it presents.
“If you think about things like a swarm of, let’s say a hundred semi-autonomous or autonomous drones, some lethal, some sensing, some jamming, some in command and control — think back to the closing ceremony of the Seoul Olympics.
“Is it within a human’s ability to pick out which ones have to be engaged and then make 100 individual engagement decisions against a drone swarm?” said Murray, commander of Army Futures Command (AFC).
“And is it even necessary to have a human in the loop, if you’re talking about effects against an unmanned platform or against a machine?
“Once you acquire a drone swarm, and they are three kilometers out, you’re not going to have 30 seconds to stand around a mapboard and make those decisions.”
According to Murray, a 39-year veteran of the US Army, this ethical debate is not going on in Russia or China, or anywhere else.
It’s also a growing problem for the Army, which prides itself on meticulous adherence to federal law, Pentagon regulation, and professional military ethics, even in the most extreme conditions.
And when artificial intelligence is involved, it’s particularly tricky, because AI often operates on timescales much faster than an individual human brain can follow, let alone the speed at which a formal staff process moves.
When Murray was a brigade commander in Afghanistan, he recalled, “life and death decisions were being made just about every day, and it usually was around either a mapboard or some sort of digital display.”
One of those people standing around that mapboard was the command’s lawyer, its Staff Judge Advocate, who “always got a say.”
So what does this mean for Pentagon futurists and war planners seeking to prepare for and anticipate enemy attacks and threats in coming years?
“When you talk about things like decision dominance and the speed and the increased ranges that we’re working on — this is not something you want to be second or third in,” Murray warned.
“What I like to tell people is now is the time to be having these debates, because ultimately, as the United States Army, we operate under policy and we are bound by policy that is established by our lawmakers.”
Nor are there any easy answers as to whether a human should be in the loop, on the loop or off the loop. Says Murray: “It’s more nuanced than just that simple question.”
The general recounted his days in a mechanized unit, to compare the “old school” Army with present day capabilities.
Gen. John Murray, head of Army Futures Command, says that humans may not be able to fight swarms of enemy drones, and that the rules governing human control over artificial intelligence might need to be relaxed. Credit: US Army.
“I remember, as part of the skills test, we used flash cards (we probably still do), just like I learned my math tables.
“And it’s a T-72, it’s a T-80, it’s a T-90, it’s a Sheridan, it’s an Abrams tank, it’s a Chieftain, it’s a Challenger, it’s a Merkava.
“If a gunner got 80% correct, we put that 19-year-old young man … on a 120-millimeter smoothbore cannon and turned him loose.
“We have algorithms today, on the platforms we’re experimenting with, that can get to about 98.8% accuracy, right?”
According to Murray, artificial intelligence actually has the ability to make better decisions about what is or what is not a valid target — with a human in the loop — because that algorithm just makes a recommendation.
It automatically scans, it identifies and makes a call — just like some of the facial recognition software that exists today.
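In software terms, that recommend-but-don’t-decide pattern reduces to a classifier that surfaces a labeled track with a confidence score, plus a gate that no lethal action passes without a human’s explicit yes. The sketch below is purely illustrative; the class names, labels and thresholds are invented for this article, not drawn from any fielded Army system:

```python
# Sketch of the "recommend, don't decide" pattern Murray describes: an
# algorithm classifies a sensor contact and surfaces a recommendation,
# but a human makes the engagement call. All names and values here are
# hypothetical illustrations, not any real Army system.
from dataclasses import dataclass

@dataclass
class Recommendation:
    track_id: str
    label: str         # e.g. "T-72" vs. "Abrams", per the flash-card analogy
    confidence: float  # classifier's self-reported confidence, 0.0-1.0

def classify_contact(track_id: str, sensor_features: list[float]) -> Recommendation:
    """Stand-in for a trained vision model; here, a toy rule on one feature."""
    score = min(max(sensor_features[0], 0.0), 1.0)
    label = "T-72" if score > 0.5 else "unknown"
    return Recommendation(track_id, label, score)

def human_approves(rec: Recommendation) -> bool:
    """The human-in-the-loop gate: lethal action needs an explicit 'yes'."""
    answer = input(f"Engage {rec.track_id} ({rec.label}, {rec.confidence:.1%})? [y/N] ")
    return answer.strip().lower() == "y"

if __name__ == "__main__":
    rec = classify_contact("track-014", [0.93])
    if rec.confidence >= 0.9 and human_approves(rec):
        print(f"Engagement authorized for {rec.track_id}")
    else:
        print(f"{rec.track_id} held: no human authorization")
```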
“I think that’s what the power of artificial intelligence is … it’s about enabling human decision makers, and I think that holds for the near term and probably far into the future, when that decision involves taking another human life.
“If you’ve been to Iraq and Afghanistan, you’re familiar with what’s called C-RAM (Counter-Rocket, Artillery and Mortar) … I mean, C-RAM engages off a radar hit, and it shoots down missiles, rockets and artillery shells.
“And if it senses a conflict in its radar image, it shuts down, then re-engages when that conflict goes away. Our Patriots, our air defense systems, have the ability to be put on auto engagement, and it’s a human decision to take it off auto engagement.
“This is open for debate, but I fundamentally believe that there are cases, nuanced cases where humans just won’t be able to keep up with the threat that’s presented.
“Where I draw the line — and this is, I think well within our current policies – [is], if you’re talking about a lethal effect against another human, you have to have a human in that decision-making process,” Murray said.
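Murray’s description of C-RAM and Patriot behavior amounts to a small state machine: fire automatically on incoming projectiles, hold fire while the radar picture shows a conflict, resume when it clears, and reserve the on/off-auto switch for a human. A toy version, with all logic invented for illustration rather than taken from either system, might look like this:

```python
# Toy state machine for the auto-engagement behavior described above: the
# system fires on incoming projectiles automatically, holds fire while its
# radar picture shows a conflict (say, a friendly aircraft in the engagement
# zone), resumes when the conflict clears, and only a human decision can
# switch auto-engagement on or off. Purely illustrative; no relation to
# actual C-RAM or Patriot control logic.
class AutoEngageController:
    def __init__(self) -> None:
        self.auto_mode = True        # on auto by default in this sketch
        self.radar_conflict = False

    def set_auto_mode(self, enabled: bool, human_authorized: bool) -> None:
        # Taking the system off (or putting it on) auto is a human decision.
        if not human_authorized:
            raise PermissionError("mode change requires human authorization")
        self.auto_mode = enabled

    def on_radar_conflict(self, conflict: bool) -> None:
        self.radar_conflict = conflict  # e.g. a friendly track crosses the fan

    def handle_track(self, incoming_projectile: bool) -> str:
        if not self.auto_mode:
            return "standby"
        if self.radar_conflict:
            return "hold-fire"       # shut down until the conflict goes away
        return "engage" if incoming_projectile else "monitor"

if __name__ == "__main__":
    ctrl = AutoEngageController()
    print(ctrl.handle_track(incoming_projectile=True))  # engage
    ctrl.on_radar_conflict(True)
    print(ctrl.handle_track(incoming_projectile=True))  # hold-fire
    ctrl.on_radar_conflict(False)
    print(ctrl.handle_track(incoming_projectile=True))  # engage again
    ctrl.set_auto_mode(False, human_authorized=True)    # human takes it off auto
    print(ctrl.handle_track(incoming_projectile=True))  # standby
```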
In the larger picture, the general believes it’s not just about the technology, or even building trust in these complex systems; it’s about having the capability and readiness so that “our opponents wake up each and every day saying today’s not the day.
“They just, they don’t want to take that on, you know, they’re not certain today’s the day that they could win. So that’s, that’s really what AFC is about.”
To date, the C-RAM Intercept LPWS capability is credited with more than 375 successful intercepts of rockets and mortar rounds fired at high-value theater assets, with no fratricides or collateral damage. Credit: US Army.
In its Project Convergence wargames last fall, Murray noted, the Army already used AI to detect potential targets in satellite images, then move that targeting data to artillery batteries on the ground in “tens of seconds,” as opposed to the “tens of minutes” the traditional call-for-fires process takes.
But there was always a human in the loop, he emphasized, approving the computer-generated order before it could be transmitted to the guns.
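That approval gate is the crux of the Project Convergence workflow as Murray describes it: the machine finds candidates and drafts the fire mission, but nothing reaches the guns until a person signs off. A schematic sketch, with every function name and data value invented for illustration:

```python
# Schematic of an approval-gated sensor-to-shooter flow: AI flags candidate
# targets in imagery, drafts a fire mission, and a human must approve before
# anything is transmitted to the guns. Hypothetical names and data only;
# this is not Project Convergence code.
import time

def detect_targets(image_id: str) -> list[dict]:
    """Stand-in for AI target detection on satellite imagery."""
    return [{"target_id": "tgt-07", "grid": "38SMB 454 789", "image": image_id}]

def draft_fire_mission(target: dict) -> dict:
    return {"mission": f"fires on {target['grid']}", "target": target["target_id"]}

def human_approval(mission: dict) -> bool:
    """The gate Murray emphasizes: a person approves the machine's draft order."""
    return input(f"Approve '{mission['mission']}'? [y/N] ").strip().lower() == "y"

def transmit_to_battery(mission: dict) -> None:
    print(f"Transmitted: {mission['mission']}")

if __name__ == "__main__":
    start = time.monotonic()
    for target in detect_targets("sat-pass-112"):
        mission = draft_fire_mission(target)
        if human_approval(mission):  # nothing fires without this step
            transmit_to_battery(mission)
    print(f"Sensor-to-shooter loop: {time.monotonic() - start:.1f}s")
```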
“One of the things that I always say when you talk about the future, first thing you have to do is admit you’re probably going to be wrong,” he said.
“But … you just have to be more right than wrong. And as we think about this, we think about things like hyperactivity, things occurring at incredible speed.
“We think of things like sensor saturation. We think about things like the amount of information that’s going to be available to a commander.
“It’s only going to increase exponentially over time in terms of how much information the commander is going to have to consider.”
There is no doubting that machines have faster reflexes, can keep track of several things at once, and are not troubled by the fatigue or fear that can lead to poor decisions in combat.
The ability to hide from the number of sensors that are going to be on the future battlefield — from space, from the air and on the ground — is going to be almost impossible, Murray adds.
So what is the future concept for the United States Army as part of the joint force fighting alongside its closest allies and partners?
“If you think about a sensor rich battlefield, and you think about the speed at which decisions are going to have to be made … then you hear General McConville, who talks about five key words,” he said.
“Speed, range and convergence — and that’s convergence of effects across all five war-fighting domains — and decision dominance. Those are the things that we need to achieve overmatch and maintain overmatch going into the future.
“And speed is really interesting from a war-fighting perspective, because this is what we get paid to do: how fast we can deliver lethal effects in not only a legal, but an ethical way.
“How can algorithms, artificial intelligence, machine learning and eventually quantum computing increase that level of speed?
“There will be mistakes made, absolutely. Just like in the chaos of war, there will always be mistakes made.”
But not all US allies are on the same page when it comes to autonomous weapons. European regulators have taken a different route entirely.
Recently, the European Parliament ruled: “The decision to select a target and take lethal action using an autonomous weapon system must always be made by a human exercising meaningful control and judgment, in line with the principles of proportionality and necessity.”
In other words, autonomous weapons making their own decisions should be outlawed.
And while debate continues to swirl around the topic, at this rate large-scale AI-powered swarm weapons may see action before the debate is concluded.
The big question is which nations will have them first.
(Editor’s note: Patriot batteries in automated mode mistakenly killed three US and British aircrew in 2003.)
— with files from National Interest, Forbes Magazine, The Lieber Institute at West Point and Military.com