30 August 2016

DOD Science Board Recommends “Immediate Action” to Counter Enemy AI

By Patrick Tucker
AUGUST 25, 2016 

Pentagon scientists worry that the U.S. could be on the losing side of an AI arms race.

The Defense Science Board’s much-anticipated “Autonomy” study sees promise and peril in the years ahead. The good news: autonomy, artificial intelligence, and machine learning could revolutionize the way the military spies on enemies, defends its troops, or speeds its supplies to the front lines. The bad news: AI in commercial and academic settings is moving faster than the military can keep up. Among the most startling recommendations in the study: the United States should take “immediate action” to figure out how to defeat new AI-enabled operations.

In issuing this warning, the study harks back to military missteps in cyber and electronic warfare. While the Pentagon was busy developing offensive weapons, techniques, plans, and tricks to use against enemies, it ignored U.S. equipment’s own vulnerabilities.

“For years, it has been clear that certain countries could, and most likely would, develop the technology and expertise to use cyber and electronic warfare against U.S. forces,” the study’s authors wrote. “Yet most of the U.S. effort focused on developing offensive cyber capabilities without commensurate attention to hardening U.S. systems against attacks from others. Unfortunately, in both domains, that neglect has resulted in DoD spending large sums of money today to ‘patch’ systems against potential attacks.”

That cycle could repeat itself in the field of AI, says the study. 

To counter the threat, the study says, the undersecretary of defense for intelligence should “raise the priority of collection and analysis of foreign autonomous systems.” Take that to mean figuring out what China, Russia, and others can do and will soon be able to do with artificial intelligence.

Meanwhile, the Pentagon’s office of acquisition, technology, and logistics should gather together a community of researchers to run tests and scenarios to discover “counter-autonomy technologies, surrogates, and solutions” — in other words, practice fighting enemy AI systems. This community should have wide discretion in conducting research into commercial drones, software, and machine learning.

“Such a community would not only explore new uses for autonomy, counter-autonomy, and countering potential adversary autonomy, but also more realistically inform what the tactical advantages and vulnerabilities would be to both the U.S. and adversaries in adopting or adapting commercially available technology,” the study says.

Just as over-reliance on information technology has led to new weaknesses, so autonomy, too, is not a silver bullet. The study names a handful of “opportunities to limit or defeat the use of autonomy against U.S. forces.”

They include “using deception to confound rules-based logic” or simply overwhelming the AI’s sensor inputs. In most settings, the human brain can differentiate signal from noise far more capably than any human-written program.
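The flooding tactic the study alludes to can be sketched in a few lines. The detector below is a hypothetical stand-in for rules-based sensor logic (the threshold, the function name, and the scenario are illustrative assumptions, not details from the study): a fixed rule that works fine against a quiet background fires on dozens of false targets once an adversary floods the sensor with decoy spikes.

```python
import random

def rules_based_detector(readings, threshold=5.0):
    """Hypothetical rules-based sensor logic: flag a target whenever a
    reading exceeds a fixed threshold. Purely illustrative."""
    return [i for i, r in enumerate(readings) if r > threshold]

random.seed(0)

# Quiet background (readings between 0 and 1) plus one genuine target (9.0):
# the fixed rule finds exactly the one real target.
clean = [random.uniform(0.0, 1.0) for _ in range(99)] + [9.0]
print(len(rules_based_detector(clean)))  # prints 1

# An adversary floods the sensor with decoy spikes (readings up to 9.0):
# the same rule now flags a crowd of false targets, burying the real one.
jammed = [random.uniform(0.0, 9.0) for _ in range(99)] + [9.0]
print(len(rules_based_detector(jammed)))  # prints a large number of hits
```

The point of the sketch is that the rule itself never changes; only the input distribution does. A human operator looking at the same flood of spikes could still reason about which ones matter, which is exactly the signal-versus-noise advantage the paragraph above describes.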

The study reiterates the importance of human decision-making, but offers that the greatest potential for autonomy is in software that learns or adapts on its own, with little to no human guidance. When, if ever, is it safe to put an autonomous learning system like that in charge of a howitzer? The study says that the Defense Department doesn’t yet have the means to even ask the question.

“Current testing methods and processes are inadequate for testing software that learns and adapts,” it reads. Better testing procedures, particularly in virtual environments, will be key to getting the most out of next-generation artificial intelligence.

The United States faces a special ethical burden in how it develops and uses autonomy. The military faces pressure – both internally and from outside groups – to limit the use of autonomy in weapons. That’s less true in China and Russia; the latter boasts that it has tested lethal autonomous ground robots as guards for missile sites and is developing a crewless version of the Armata T-14 tank.

“While many policy and political issues surround U.S. use of autonomy, it is certainly likely that many potential adversaries will have less restrictive policies and [concepts of operation] governing their own use of autonomy, particularly in the employment of lethal autonomy. Thus, expecting a mirror image of U.S. employment of autonomy will not fully capture the adversary potential,” notes the study.
Patrick Tucker is technology editor for Defense One. He’s also the author of The Naked Future: What Happens in a World That Anticipates Your Every Move? (Current, 2014). Previously, Tucker was deputy editor for The Futurist for nine years. Tucker has written about emerging technology in Slate, The ...
