
25 July 2021

Deception Is the Biggest Threat to American Security

John Ferrari & Hallie Coyne

And deep fakes are just the harbingers
Deceptive data is all around us. While some of it is relatively harmless, deception in defense has the potential to undermine all the technological improvements that are planned or already in place. The results could be life-threatening and could ultimately lead to the defeat of U.S. national security systems that rely on artificial intelligence (AI).

Take, for example, deep fakes, in which a person’s face or body is digitally altered in a video. In May 2021, the Congressional Research Service (CRS) published a short report on these types of deep fakes, observing that such “forgeries generated with artificial intelligence (AI) — could present a variety of national security challenges in the years to come.” While CRS emphasized the political risks of deep fakes, including the erosion of public trust and the blackmail of public officials, deep fakes are just the beginning, or more aptly a continuation, of broader, more urgent, and more dangerous challenges to the future security of the United States.

Flailing responses to the cyberattack that forced the shutdown of Colonial Pipeline, the East Coast’s most significant gasoline pipeline, emphasize the importance of undertaking risk-mitigation efforts today. While the Colonial Pipeline outage disrupted markets, the company has almost no ability to retrofit effective new security measures that can withstand attacks by state actors or state-sanctioned actors, because of inherent flaws built into the internet’s routing system more than forty years ago.

Had the attacks targeted the U.S. military’s AI systems instead of the Colonial Pipeline, the threat could have been existential, leaving our nation and our allies open to catastrophic attacks rather than the mere inconvenience of a pipeline debacle. Similarly, retrofitting fixes will not be an option, particularly if we are engaged in war or on the verge of a conflict.

Three concepts are useful for grasping the scale, scope, and character of what deception means in the context of military AI systems. First, Americans must confront the fact that information always has been and always will be weaponized. Second, deep fakes and the well-known explosion of fake news constitute just two versions of fake data. Focusing on those two subsets alone prevents decision-makers from appreciating the real threat posed by the exponential growth of active and passive deception across all types of data that feed national security systems. Third, creative solutions are required. The catch-all fix of making sure that human users validate the outputs of AI systems (“human in/on the loop”), for example, is not a feasible solution to counter the onslaught of deception operations that adversaries are likely to aim at the United States.

Nothing New

Deception is almost as old as history itself. The Trojan horse of the siege of Troy dates back some 3,200 years, yet today malicious computer programs hidden in emails are still called Trojan horses. Similarly, the Chinese military strategist Sun Tzu stated 2,500 years ago that “all warfare is based on deception,” a warning we should heed if for no other reason than that we should assume the Chinese military today follows his teachings. Sadly, there is no need to return to antiquity for examples of effective deception operations against America. In a report on deception during the Cuban Missile Crisis, the Central Intelligence Agency (CIA) stated that it would be 40 years before American intelligence officials understood the mass of lies that hid the Soviet Union’s deployment of missiles, because the deception was “on a scale that most U.S. planners could not comprehend.”

Still, Americans rely on a democracy founded on transparency and willing compliance with rules and norms. As such, the notion of participating in massive deception operations against an adversary’s AI systems is complicated and runs counter to ideals prioritized in the United States. The desire of Americans to demonstrate the value of truth and law-abiding behavior is commendable but poorly suited to this era of global communications, in which deception operations on the other side of the world will inevitably infiltrate both U.S. national dialogues and AI-enabled weapons systems. The United States can no longer afford to avoid or ignore the challenge for the sake of keeping its hands clean.

Today, the unspoken biases of U.S. processes, systems, and national security professionals are shaped by revulsion toward fake news, guaranteed freedom of the press, and statutes meant to ensure that the government does not deceive the governed. These ideals may hamper the United States in building deception programs aimed at adversaries and may limit Americans’ ability to fully understand the extent to which deception will hinder AI systems. Despite these tendencies, the United States must acknowledge that data will be weaponized, and the military cannot unilaterally disarm by forgoing the use of deception.

What Is New Is the Scale of Deception

The U.S. military is undertaking efforts to name and define various types of deception, supporting the contextualization of sub-categories such as fake news and deep fakes. Joint Publication 3-13.4 defines passive deception (M-Type) as hiding something that really exists and active deception (A-Type) as showing something that is not real. Similarly, in their book Strategic Military Deception, Katherine Herbig and Donald Daniel describe A-Type deception as noise that creates ambiguity. Imagine AI web crawlers scraping news reports about the Japanese Ambassador’s presence in Washington prior to Pearl Harbor as “noise,” or the ability of bots today to create effectively infinite noise anywhere in the electromagnetic spectrum, noise that can and will overwhelm our sensors. In contrast, M-Type deception provides misleading information, such as that created by the Allies in Operation Bodyguard prior to the Normandy landings in World War II. Considering how both active and passive deception might undermine national security AI systems forces officials to confront four key issues.

How should the DOD confirm that AI systems are being trained on real data, not fake insertions? The data used to train AI systems crucially informs and directs their function. If adversaries can insert fake data into AI training datasets, the efficacy of the entire AI system will be undermined. This forces a second question: how should the United States be preparing to access and undermine the AI training data of adversaries? As we move to compromise our enemies’ AI systems, we will also need to address the risk of those systems “misbehaving” in unknown ways. This should inform how America tests its own AI systems and considers the opportunity cost of feeding fake training data to adversaries.
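To make the training-data risk concrete, the sketch below shows how even crude poisoning of training labels degrades a classifier that is evaluated on clean data. It is illustrative only: the synthetic dataset, the label-flipping attack, and the logistic-regression model are assumptions chosen for demonstration, not a depiction of any real defense system.

```python
# Illustrative sketch only: a toy example of training-data poisoning.
# All data here is synthetic; no real system or dataset is modeled.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic two-class "sensor" data, e.g. decoy vs. real contact.
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

def train_and_score(poison_fraction: float) -> float:
    """Flip the labels of a fraction of training examples,
    then evaluate the resulting model on clean test data."""
    y_poisoned = y_train.copy()
    n_poison = int(poison_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_poison, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # adversarial label flips
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return accuracy_score(y_test, model.predict(X_test))

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"{frac:.0%} of training labels poisoned -> "
          f"test accuracy {train_and_score(frac):.2f}")
```

Even this unsophisticated attack measurably erodes accuracy; targeted poisoning designed to misclassify specific inputs would be far harder to detect.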

How should the DOD teach AI systems to “not be surprised”? In the 1973 Yom Kippur War, the Egyptian military successfully convinced the Israelis that its military preparations were connected to the large-scale routine exercises it had conducted twice a year since 1968. The Egyptians essentially created an alternative truth to retain the element of surprise. Accordingly, military AI systems must somehow be trained to “not be surprised” by well-planned feints. However, even if we do succeed in training our AI systems to avoid such surprises, human decision-makers must be willing to unleash AI weapons in the face of ambiguity.

How should the DOD protect AI systems from overwhelming noise and data? In the public and commercial space, denial-of-service attacks flood computer servers with more requests than they can handle. Were adversaries to levy a similar attack against a military AI system, the consequences could easily be devastating. While it is inconvenient to be without a commercial website for a few hours during one of these attacks, what happens when the sensors of an autonomous Navy ship are overloaded? It will always be easier to create and send data than it will be to process it, so the advantage resides with the attacker.
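A minimal sketch of that asymmetry, assuming invented “work unit” costs rather than any real system’s figures: fabricating a spoofed sensor track is modeled as far cheaper than vetting one, so equal budgets leave the defender with an unmanageable backlog.

```python
# Illustrative sketch only: a toy model of a sensor feed being flooded.
# All costs and budgets are invented for demonstration.
from collections import deque

GENERATION_COST = 1      # work units to fabricate one spoofed track
PROCESSING_COST = 20     # work units to classify and validate one track
ATTACKER_BUDGET = 10_000
DEFENDER_BUDGET = 10_000

# Attacker converts its budget into fabricated tracks on the wire.
fabricated_tracks = ATTACKER_BUDGET // GENERATION_COST
# Defender converts its budget into tracks it can actually vet.
vetting_capacity = DEFENDER_BUDGET // PROCESSING_COST

backlog = deque(range(fabricated_tracks))
vetted = 0
while backlog and vetted < vetting_capacity:
    backlog.popleft()   # one track inspected and cleared
    vetted += 1

print(f"Tracks fabricated:   {fabricated_tracks}")
print(f"Tracks vetted:       {vetted}")
print(f"Unprocessed backlog: {len(backlog)}")
```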

Reliance on the fallback of a “human in/on the loop” as a failsafe probably provides a false sense of comfort. While the “human in/on the loop” is meant to protect against bad AI system behavior through human validation of the outputs, military deception exists at the intersection of behavior and data. In the past, deception operations targeted the behavior of the human decision-maker. Going forward, however, adversaries will be able to target both the behavior of AI systems (as they make decisions on their own) and the actions of the “human in/on the loop.” With the exponential growth of data, no human will be able to sift through the data inputs or spot deceptive data.

Furthermore, relying on human validation of AI system output sacrifices valuable decision time. Adversaries will not hesitate to seize split-second advantages. A useful analog is high-speed trading in the commercial world, where fractions of a millisecond are the difference between a security being traded or not. As ever faster and more advanced weapons such as hypersonic missiles become commonplace, AI systems and decision-making in milliseconds will be the difference between success and failure. Are we ready for this acceleration in decision-making?
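As a back-of-the-envelope illustration of what milliseconds mean against a hypersonic threat, the short calculation below estimates how far a Mach 5 weapon travels while a decision is pending. The speed figure is approximate and the latencies are invented for illustration, not measured values for any system or review process.

```python
# Back-of-the-envelope sketch: distance a Mach 5 weapon covers while a
# decision is pending. Figures are approximate and for illustration only.
SPEED_OF_SOUND_M_S = 343            # ~sea-level speed of sound
closure_speed = 5 * SPEED_OF_SOUND_M_S   # ~1,715 m/s at Mach 5

# Illustrative latencies: machine-speed decisions vs. human review times.
for latency_ms in (1, 100, 1_000, 30_000):
    distance_m = closure_speed * latency_ms / 1000
    print(f"decision latency {latency_ms:>6} ms -> "
          f"threat closes ~{distance_m:,.0f} m")
```

At machine timescales the threat moves a few meters; during a thirty-second human review it closes tens of kilometers.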

The Stakes Are Increasing

If the U.S. military continues to procure fewer weapons platforms in the belief that AI-enabled systems will provide an advantage, the associated risks must be understood as well. These AI-enabled weapons systems can and will become the targets of deception that could render them useless. With the Navy building autonomous ships, the Army buying robots, and the Air Force already operating the equivalent of a flying computer in the F-35 Joint Strike Fighter, future deception of AI systems to compromise such advanced platforms could lead to our defeat. Again, bots today can overwhelm AI sensors and data collectors, giving the advantage to the deceiver.

The Department of Defense cannot and should not walk away from AI-enabled systems on the battlefield. AI systems are here today, and U.S. adversaries, as well as non-state actors, will surely possess them. Both the Russians and the Chinese have openly stated that subversion, deception, and misinformation are part of a perpetual state of information warfare. They should be taken at their word. Records of such nefarious behavior are publicly available, with recent examples including Russian intelligence services publishing false information questioning the safety and efficacy of COVID-19 vaccines, along with China’s prominent use of bots to artificially inflate the social media impact of Chinese diplomats and state media.

The Department of Defense and policymakers in Congress must take immediate action to address the challenge. The Pentagon should create AI deception “red teams” to fool our systems first. Legislation can be updated to allow for more robust use of AI deception in offensive operations. Acquisition processes should ensure that deception-mitigation is built into military AI systems from the start. We cannot try to retrofit deception-mitigation into AI systems; it will prove a fool’s errand, as demonstrated by today’s failing efforts to retrofit security into the internet.
