By Jon Harper
The U.S. military and its foreign adversaries could soon find themselves in an interminable battle to protect their own artificial intelligence systems from attack while developing offensive capabilities to go after their enemies’ AI systems.
Defense officials see great potential for artificial intelligence and machine learning to aid in a variety of missions ranging from support functions to front-line warfighting. But the technology comes with risks.
“Machine learning … offers the allure of reshaping many aspects of national security, from intelligence analysis to weapons systems and more,” said a recent report by the Georgetown University Center for Security and Emerging Technology, “Hacking AI: A Primer for Policymakers on Machine Learning Cybersecurity.”
However, “machine learning systems — the core of modern AI — are rife with vulnerabilities,” noted the study written by CSET Senior Fellow Andrew Lohn.
Adversaries can attack these systems in a number of ways, according to the report: manipulating the integrity of their data to lead them to make errors; prompting them to reveal sensitive information; or causing them to slow down or stop functioning altogether, thereby limiting their availability.
“Data poisoning” and “evasion” are just two of the techniques that can lead machine learning platforms to make mistakes.
“In ‘data poisoning,’ attackers make changes to the training data to embed malicious patterns for the machine to learn. This causes the model to learn the wrong patterns and to tune its parameters in the wrong way,” the report explained. “In ‘evasion,’ attackers discover imperfections in the model — the ways in which its parameters may be poorly tuned — and then exploit these weaknesses in the deployed model with carefully crafted inputs.”
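To make the poisoning idea concrete, the sketch below shows the simplest version of the attack, label flipping, using an invented dataset and an off-the-shelf classifier. It is purely illustrative; the report does not tie the technique to any particular model or tooling.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # An invented stand-in dataset; no real-world data is involved.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Model trained on clean data, kept for comparison.
    clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # The attacker flips the labels on 10 percent of the training examples,
    # so the model tunes its parameters on corrupted patterns.
    rng = np.random.default_rng(0)
    flipped = rng.choice(len(y_train), size=len(y_train) // 10, replace=False)
    y_poisoned = y_train.copy()
    y_poisoned[flipped] = 1 - y_poisoned[flipped]

    poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

    print("accuracy with clean training data:   ", clean_model.score(X_test, y_test))
    print("accuracy with poisoned training data:", poisoned_model.score(X_test, y_test))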
For example, an attacker could break into a network and manipulate the data stored within it, compromising the integrity of the data that the software relies on.
However, adversaries don’t necessarily have to break into a network or system to thwart it, the report noted. For example, attackers might not need to hack into a military drone to cause it to misidentify its targets; they could simply make educated guesses about how the drone’s machine learning model works and exploit those guesses.
In a so-called “evasion” operation, an attacker can make subtle changes to system inputs to cause a machine to change its assessment of what it is seeing, the study explained.
To illustrate this vulnerability, CSET cyber experts made subtle changes to a picture of Georgetown University’s Healy Hall, a National Historic Landmark, and then fed the altered image into a common image recognition system.
“Human eyes would find the changes difficult to notice, but they were tailored to trick the machine learning system,” the report said. “Once all the changes were made … the machine was 99.9 percent sure the picture was of a triceratops” dinosaur.
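The report does not describe exactly how the image was altered, but one widely studied way to craft such an evasion input is the fast gradient sign method. The sketch below uses a stock image classifier and a placeholder image to show the basic recipe: nudge each pixel by a barely perceptible amount in whichever direction most increases the model’s error.

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # A stock classifier stands in for the recognition system in the report.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    # Placeholder "photo"; a real attack would start from the actual image.
    image = torch.rand(1, 3, 224, 224, requires_grad=True)
    original_label = model(image).argmax(dim=1)

    # Push every pixel slightly in the direction that most increases the
    # model's error on its own original answer (untargeted FGSM).
    loss = F.cross_entropy(model(image), original_label)
    loss.backward()
    epsilon = 0.03  # small enough that a person would barely notice the change
    adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

    print("prediction before:", original_label.item())
    print("prediction after: ", model(adversarial).argmax(dim=1).item())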
While the Healy Hall triceratops vignette might be amusing to some readers, it would be no laughing matter if, say, a military drone misidentified a hospital as a weapons depot and bombed it; or, conversely, if enemy tanks were allowed to attack U.S. troops because an adversary was able to trick an ML-equipped surveillance system into misidentifying the platforms as innocuous commercial vehicles.
The aim of another type of counter-AI operation, known as a “confidentiality attack,” is not to cause a machine learning system to make errors, but to uncover sensitive data.
To achieve this, adversaries can watch how the system responds to different kinds of inputs.
“From this observation, attackers can learn information about how the model works and about its training data. If the training data is particularly sensitive — such as if the model is trained on classified information — such an attack could reveal highly sensitive information,” the study said.
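A simplified version of such a confidentiality attack is model extraction: the attacker feeds the target a stream of probe inputs, records its answers and trains a look-alike “surrogate” on those question-and-answer pairs. The victim model and data in the sketch below are invented for illustration.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression

    # The "victim" model and its training data are stand-ins for this sketch.
    X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
    victim = RandomForestClassifier(random_state=1).fit(X, y)

    # The attacker never sees X or y, only the victim's answers to chosen probes.
    rng = np.random.default_rng(1)
    probes = rng.normal(size=(5000, 10))
    stolen_labels = victim.predict(probes)

    # Train a surrogate that mimics the victim from those observations alone.
    surrogate = LogisticRegression(max_iter=1000).fit(probes, stolen_labels)

    fresh = rng.normal(size=(1000, 10))
    agreement = (surrogate.predict(fresh) == victim.predict(fresh)).mean()
    print("surrogate matches victim on", round(100 * agreement, 1), "percent of fresh inputs")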
With this level of understanding about how a particular machine learning model works, adversaries could also figure out how it may be compromised, the study noted.
Technology developers and policymakers are confronted with the task of figuring out how to manage the inevitable risks associated with machine learning.
Meanwhile, the Pentagon also has incentives to develop capabilities to go after competitors’ platforms.
“The United States is not the only country fielding AI systems, and the opportunity to exploit these vulnerabilities in adversaries’ systems may be tempting,” the CSET report noted. “There are obvious military benefits of causing an enemy weapon to misidentify its targets or send an adversary’s autonomous vehicles off course. There are also the obvious intelligence benefits of stealing adversaries’ models and learning about the data they have used.”
U.S. defense officials are already thinking through these issues.
The Air Force has been in talks with the Defense Digital Service about holding an AI hacking challenge.
“We want to go into this clear-eyed and understand how to break AI,” said Will Roper, who recently served as Assistant Secretary of the Air Force for Acquisition, Technology and Logistics. “There’s not a lot of commercial investment [or] commercial research on that. Not nearly as much as there is on making AI.”
Roper, a highly respected tech guru who spearheaded a number of artificial intelligence initiatives at the Pentagon, left office in late January during the presidential transition.
More research and probing could help uncover vulnerabilities in AI and ML.
“Whatever we discover, we’ll try to fix,” Roper told reporters during a Defense Writers Group event. “Then whatever we fix, we’ll try to break. And we’ll try to break those fixes and fix those breaks. And I guess that goes on forever in what we’re calling ‘algorithmic warfare.’”
The Pentagon already has experience leveraging machine learning for intelligence operations such as Project Maven, which used the technology to help human analysts sift through hours and hours of drone footage collected from overseas battlefields.
Future plans call for deploying a variety of unmanned and autonomous systems, including robotic aircraft, combat vehicles and ships.
Roper said artificial intelligence technologies are ushering in “a new epoch of warfare.”
“The algorithms, the AI that we take into the fight, we’re going to have to have an instinct for them and they will have weaknesses that are very different than our humans and our traditional systems,” Roper said.
The military will need to develop “digital stealth” and other digital countermeasures to thwart enemy efforts to undermine U.S. artificial intelligence and machine learning capabilities, he noted, comparing the concept to how warfighters currently use stealth and electronic warfare to defeat enemy radars and jamming devices.
The Defense Department needs to accelerate its acquisitions so that it doesn’t end up fighting “tomorrow’s war with yesterday’s AI,” he added.
The military will have to find the right balance between letting “smart” machines do their thing, and keeping them on a leash with humans exercising oversight.
While officials acknowledge the risks involved in relying on artificial intelligence, the technology is also viewed by many as too useful to pass up.
“When it’s having a bad day, when an adversary’s potentially messing with it, it’s too fragile today for us to hand the reins completely to it,” Roper said. “But it’s too powerful when it’s having a good day for us not to have it there in the first place.”
The Defense Department’s AI strategy, released in 2019, calls for funding research aimed at making artificial intelligence systems more resilient, including to hacking and spoofing.
Alka Patel, head of the ethics team at the Pentagon’s Joint Artificial Intelligence Center, told National Defense that the military’s AI systems will need to be designed and engineered so they can be disengaged or deactivated if they aren’t operating as intended.
In this new era of algorithmic warfare, will the attacker or the defender have the upper hand?
“It is hard to answer this question until the field of machine learning cybersecurity settles on specific offensive and defensive techniques,” the CSET report said. “Even then the answer may not be clear. As attackers and defenders engage one another, both sides will discover new techniques.”
The study likened the situation to a “rapidly evolving cat-and-mouse game.”
Roper noted that it’s unclear what the balance of power will be.
“It could end up being that it’s so easy to break that the offensive order of AI … is always so dominant that we don’t really have to worry about it. We just have a lot of counter-AI capability and we muddy that water for both sides,” he said. “But it could be that it balances pretty well, that the countermeasures and the counter-countermeasures balance well so that as you get into a cat-and-mouse game, if you pick your plan well, you can always have a decided advantage.”
Defenders face a number of challenges. For one, traditional cybersecurity techniques don’t necessarily apply to machine learning, the CSET report noted.
“Attacks on machine learning systems differ from traditional hacking exploits and therefore require new protections and responses,” it said.
“For example, machine learning vulnerabilities often cannot be patched the way traditional software can, leaving enduring holes for attackers to exploit.”
A subtle change in an attacker’s operations can change how effective a particular defense is, the study noted. Additionally, defensive techniques that work well for a less sophisticated machine learning system might not be as effective for a more advanced system, or vice versa.
The CSET report compared AI competition to the arcade game “Whack-a-Mole,” in which defenders must rapidly bat down new threats that keep popping up.
“New attacks are invented and defenses are developed, and then those defenses are defeated, and so on,” the study said.
So how should policymakers and technologists approach this challenge? With system-level defenses, according to the CSET study. That includes using redundant components and enabling human oversight and intervention where possible.
The report used a self-driving car scenario to illustrate how system-level defenses could avert disaster.
“A commonly cited example of an attack involves placing a sticker on a stop sign that makes it appear to autonomous vehicles to be a 45 mph sign,” it said. “Although this attack is possible and easy to perform, it only achieves a destructive effect if the car drives into a busy intersection. If the car has many ways to decide to stop, such as by knowing that intersections usually have stop signs, relying on lasers for collision avoidance, observing other cars stopping, or noticing high speed cross-traffic, then the risk of attack can remain low despite the car being made of potentially vulnerable machine learning components.”
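The report does not prescribe an implementation, but the logic it describes amounts to treating the sign classifier as one vote among several independent cues, so that a spoofed sign alone cannot override the safer signals. The sensor names in the sketch below are invented.

    def should_stop(sign_reading, lidar_sees_obstacle, other_cars_stopping, cross_traffic_detected):
        """Stop if the (possibly spoofed) sign says so OR any independent cue does."""
        return (sign_reading == "stop"
                or lidar_sees_obstacle
                or other_cars_stopping
                or cross_traffic_detected)

    # The sticker attack fools the sign classifier into reading "45 mph," but the
    # other cues still recognize the intersection and force a stop.
    print(should_stop("45 mph", lidar_sees_obstacle=False,
                      other_cars_stopping=True, cross_traffic_detected=True))  # True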
While traditional cyber attacks won’t be going away anytime soon, algorithmic warfare is the future of cyber conflict, said James Lewis, director of the Strategic Technologies Program at the Center for Strategic and International Studies.
Biden administration officials need to continue to think about “how we develop our own tools, how we mess with other countries’ tools,” Lewis said in an interview. “Our opponents are certainly looking at more sophisticated tools” for attacking AI systems, he warned.