By THERESA HITCHENS
Militaries May Be Rushing To Failure, Says Frederick Chang, Former NSA Research Director
WASHINGTON: As the US and other countries scramble to develop artificial intelligence (AI) solutions for military applications, the failure to fix cyber vulnerabilities is teeing up a rush to failure, senior US and UAE AI experts worry.
Frederick Chang, former director of research at the National Security Agency under President George W. Bush, told an Atlantic Council conference earlier this week that there just has “not been a lot of work at the intersection of AI and cyber.” Governments are just “beginning to understand some of the vulnerability of these systems,” he said. So, as militaries rapidly push to deploy systems, they risk increasing “the size of the attack surface” and creating more problems than they solve.
Failure by governments to take proactive measures to ensure the security of AI systems “is going to come back to bite us,” Omar Al Olama, minister of state for artificial intelligence for the United Arab Emirates, warned. “Ignorance in government leadership” is leading to deployment of AI “for AI’s sake” — not because it is needed or is a wise thing to do. “Sometimes AI can be stupid,” he said. Al Olama stressed that following the traditional commercial model of patching cybersecurity vulnerabilities after the fact would not work when building AI systems, because it “might be too late” for the security of nations and their citizens.
Chang explained that there are three major ways to attack machine-learning systems that researchers have not yet figured out how to thwart; each is illustrated with a hypothetical code sketch after the list:
“Adversarial inputs” that can systematically fool a system’s detector, something known as a “STOP sign attack” after an experiment in which researchers fooled a self-driving car by using masking tape to alter stop signs;
“Data poisoning,” where an adversary might “alter data on which a system is trained” and cause its basic algorithm to reach wrong conclusions;
“Model stealing attacks,” where adversaries probe a system to reconstruct the model that drives it, then use that copy to find ways to thwart its functionality.
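To make the first of these concrete, here is a minimal sketch of an adversarial-input attack against a toy linear “detector,” in the spirit of the fast-gradient-sign method. The detector, its weights, and the perturbation budget are all hypothetical stand-ins, not anything from the systems discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "stop sign detector": logistic regression with fixed weights.
# Every name and number here is illustrative.
w = rng.normal(size=16)          # model weights (known to the attacker here)
b = 0.0

def score(x):
    """Probability the detector assigns to the 'stop sign' class."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x_clean = 0.2 * w                # an input the detector classifies confidently
eps = 0.5                        # attacker's per-feature perturbation budget

# For a linear model the input gradient is just w, so nudging every
# feature by -eps * sign(w) lowers the score as fast as possible per
# unit of change, which is the core idea behind adversarial inputs.
x_adv = x_clean - eps * np.sign(w)

print(f"clean score:       {score(x_clean):.3f}")   # close to 1.0
print(f"adversarial score: {score(x_adv):.3f}")     # pushed toward 0.0
```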
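Data poisoning can likewise be illustrated with a toy one-dimensional classifier; the dataset, the flipped labels, and the threshold learner below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training set: class 0 clusters near -2, class 1 near +2.
n = 200
X = np.concatenate([rng.normal(-2, 1, n), rng.normal(2, 1, n)])
y = np.concatenate([np.zeros(n), np.ones(n)])

def learn_threshold(X, y):
    """Train a 1-D classifier: pick the cutoff minimizing training error."""
    candidates = np.sort(X)
    errors = [np.mean((X > t) != y) for t in candidates]
    return candidates[int(np.argmin(errors))]

# Poison the training data: relabel the class-0 points nearest the
# boundary as class 1, dragging the learned cutoff into class-0 territory
# so the deployed model misfires on legitimate inputs near the boundary.
y_poisoned = y.copy()
nearest = np.argsort(X[:n])[-40:]    # the 40 class-0 points closest to 0
y_poisoned[nearest] = 1.0

print(f"clean cutoff:    {learn_threshold(X, y):+.2f}")
print(f"poisoned cutoff: {learn_threshold(X, y_poisoned):+.2f}")
```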
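And model stealing reduces to a query-and-fit loop: the attacker never sees the victim’s parameters, only its outputs. The black-box “victim” below is again a made-up stand-in.

```python
import numpy as np

rng = np.random.default_rng(2)

# The "victim": a black-box linear scorer whose weights the attacker
# cannot see; only the query interface is exposed. Purely illustrative.
w_secret = rng.normal(size=8)

def victim_api(X):
    """Black-box query: returns only the victim's scores for inputs X."""
    return X @ w_secret

# The attack: query the black box on attacker-chosen inputs, then fit a
# surrogate to the responses by least squares. With enough queries the
# surrogate recovers the hidden weights, and anything crafted against the
# copy (such as the adversarial inputs above) transfers to the victim.
X_queries = rng.normal(size=(200, 8))
responses = victim_api(X_queries)
w_stolen, *_ = np.linalg.lstsq(X_queries, responses, rcond=None)

print("largest weight error:", np.max(np.abs(w_stolen - w_secret)))  # ~0
```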
Col. Stoney Trent, chief of operations at DoD’s Joint Artificial Intelligence Center (JAIC), agreed that leaders need to be educated about addressing cybersecurity in AI, and about the benefits and risks of AI in general. Another problem, Trent noted, is that there are few “testing tools and methods” to make sure AI systems work as they are supposed to and are not vulnerable to hacking. This is because, in the commercial world, spending time on testing is seen as a market risk, he explained. Thus, one of JAIC’s tasks is to encourage development of such tools.
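One hypothetical form such a testing tool could take is a simple stability check: perturb a known-good input within a small budget and confirm the model’s decision does not flip. The function below is an invented sketch, and random sampling is only a weak proxy for the worst-case search a real adversary performs, which hints at why rigorous tools are hard to build.

```python
import numpy as np

def decision_is_stable(predict, x, eps, n_trials=200, seed=0):
    """Crude robustness test: does the model's decision on x survive
    random perturbations bounded by eps in every feature?"""
    rng = np.random.default_rng(seed)
    baseline = predict(x) > 0.5
    return all(
        (predict(x + rng.uniform(-eps, eps, size=x.shape)) > 0.5) == baseline
        for _ in range(n_trials)
    )

# Example against the toy detector sketched earlier:
#   decision_is_stable(score, x_clean, eps=0.05)   # likely True
#   decision_is_stable(score, x_clean, eps=0.5)    # may flip, so False
```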
Cyberspace is one of the three “national mission initiatives” underway at JAIC, which stood up in June 2018 “to accelerate delivery of principally human-centered AI” across military mission areas. Trent said the effort “is not a place for the weak of heart,” noting a number of barriers to his mandate to “accelerate delivery of human-centric AI” systems. These include technical barriers such as the need to “curate and categorize” data and proper problem scoping. The most difficult ones are not technical, but cultural. For example, he said DoD and service policies and practices regarding data sharing are a big problem. Another barrier is the tendency for development to take place in stovepipes, resulting in bureaucratic resistance to cross-integration. “I haven’t seen any evidence of it [integration] being done well in the military,” he said wryly.