By Patrick Tucker
SEPTEMBER 26, 2016
A new survey of existing and planned smart weapons finds that AI is increasingly used to replace humans, not help them.
The Pentagon’s oft-repeated line on artificial intelligence is this: we need much more of it, and quickly, in order to help humans and machines work better alongside one another. But a survey of existing weapons finds that the U.S. military more commonly uses AI not to help but to replace human operators, and, increasingly, human decision-making.
The report from the Elon Musk-funded Future of Life Institute does not forecast Terminators capable of high-level reasoning. At their smartest, our most advanced artificially intelligent weapons are still operating at the level of insects … armed with very real and dangerous stingers.
So where does AI exist most commonly on military weapons? The study, which looked at weapons in military arsenals around the world, found 284 current systems that incorporate some degree of AI, primarily standoff weapons that can find their own way to a target from miles away. Aegis warships, which can automatically fire defensive missiles at incoming threats, are another example.
“This matches the overall theme – autonomy is currently not being developed to fight alongside humans on the battlefield, but to displace them. This trend, especially for UAVs [unmanned aerial vehicles], gets stronger when examining the weapons in development. Thus despite calls for ‘centaur warfighting,’ or human-machine teaming, by the US Defense Department, what we see in weapons systems is that if the capability is present, the system is fielded in the stay [meaning instead of] of humans rather than with them,” notes Heather M. Roff, the author of the report.
Roff found that the most common AI feature on a weapon was homing: “the capability of a weapons system to direct itself to follow an identified target, whether indicated by an outside agent or by the weapons system itself.” It’s been around for decades; many more recent AI capabilities spring from it.
On the other end of the technology spectrum is certain drones’ ability to loiter over an area, compare objects on the ground against a database of images, and mark a target when a match comes up — all without human guidance.
Roff writes that these capabilities, which she calls autonomous loitering and target image and signal discrimination, represent “a new frontier of autonomy, where the weapon does not have a specific target but a set of potential targets in an image library or target library (for certain signatures like radar), and it waits in the engagement zone until an appropriate target is detected. This technology is on a low number of deployed systems, but is a heavy component of systems in development.”
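Conceptually, the loiter-and-discriminate behavior Roff describes boils down to a match-against-library loop: observe, compare against stored signatures, and wait until something crosses a match threshold. The sketch below is a purely hypothetical illustration of that structure, not code from any actual weapons system; the data types, similarity measure, and threshold are all invented for illustration.

```python
# Hypothetical illustration of "autonomous loitering" with target-library matching.
# Nothing here reflects a real system; names, thresholds, and data are invented.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Signature:
    label: str
    features: List[float]  # e.g., an image or radar feature vector


def similarity(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two feature vectors (an illustrative metric)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0


def find_match(observed: List[float],
               library: List[Signature],
               threshold: float = 0.9) -> Optional[Signature]:
    """Return the best library entry exceeding the match threshold, if any."""
    if not library:
        return None
    best = max(library, key=lambda s: similarity(observed, s.features))
    return best if similarity(observed, best.features) >= threshold else None


def loiter(sensor_frames, library: List[Signature]) -> Optional[Signature]:
    """Wait in the engagement zone until an observed object matches the library."""
    for frame in sensor_frames:          # each frame is one observed feature vector
        match = find_match(frame, library)
        if match is not None:
            return match                 # what happens next is a policy question, not a code question
    return None
```

The point of the sketch is only the shape of the capability: a library of potential targets, a similarity test, and a loop that waits for a match. The deployed and in-development systems Roff surveys implement this with far more sophisticated sensing and classification.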
For an indication of where AI on drones is headed, look to cutting-edge experimental machines like Dassault’s nEUROn, BAE’s Taranis, and Northrop Grumman’s X-47B. Unlike General Atomics’ Predator and Reaper drones, which the military armed to take out terrorist targets in places like Afghanistan, these more advanced drones are designed for war with countries that can actually shoot back. The so-called anti-access/area-denial challenge, or A2AD, requires aircraft that use stealth to slip in under enemy radar and then operate on their own over enemy territory. That challenge, more than anything else, is pushing autonomy in weapons to the next level.
“This is primarily due to the type of task the stealth combat UAV is designed to achieve: defeating integrated enemy air defense systems. In those scenarios, a UAV will likely be without communications and in a contested and denied environment. The system will need to be able to communications share with other deployed systems in the area opportunistically, as well as engage and replan when necessary,” Roff writes.
At the recent Air Force Association conference outside Washington, D.C., Deputy Defense Secretary Bob Work called greater autonomy essential to U.S. military technological dominance. Citing a report from the Defense Science Board, he said, “There is one thing that will improve the performance of the battle network more than any other. And you must win the competition because you are in it whether you like it or not. And that is exploiting advances in artificial intelligence and autonomy. That will allow the joint force to assemble and operate human machine battle networks of even greater power.”
But even if the U.S. military “wins the competition” by producing the best autonomous systems, other nations may yet put AI to unexpected and even destabilizing effect. “It should be noted that the technological incorporation of autonomy will not necessarily come only from the world’s strongest powers, and the balancing effect that may have will not likely be stabilizing. Regional powers with greater abilities in autonomous weapons development, such as Israel, may destabilize a region through their use or through their export to other nations,” says Roff.
That’s not the only reason more smarts on more weapons could be destabilizing. Machines make decisions faster than humans. On the battlefield of the future, the fastest machines, those that make the best decisions with the least amount of human input, offer the largest advantage.
Today, the United States continues to affirm that it isn’t interested in removing the human decision-maker from “the loop” in offensive operations like drone strikes (at least not completely). That moral stand might begin to look like a strategic disadvantage against an adversary that can fire faster, conduct more operations, and hit more targets in less time by removing the human from the loop.
The observe, orient, decide, and act cycle, sometimes called the OODA loop, remains in human hands when it comes to warfare. But in other areas of human activity, like high-frequency trading, it has moved to the machines. William Roper, the head of the Pentagon’s Strategic Capabilities Office, discussed his concerns about that acceleration at the recent Defense One Technology Summit.
“When you think about the day-trading world of stock markets, where it’s really machines that are doing it, what happens when that goes to warfare?” Roper asked. “It’s a whole level of conflict that hasn’t existed. It’s one that’s scary to think about what other countries might do that don’t have the same level of scruples as the U.S.”
It’s also scary to think about what the United States might do if its leaders woke up in a war where they were losing to those countries.
Patrick Tucker is technology editor for Defense One. He’s also the author of The Naked Future: What Happens in a World That Anticipates Your Every Move? (Current, 2014). Previously, Tucker was deputy editor for The Futurist for nine years. Tucker has written about emerging technology in Slate, The ...