
18 October 2021

ESCALATION TO NUCLEAR WAR IN THE DIGITAL AGE: RISK OF INADVERTENT ESCALATION IN THE EMERGING INFORMATION ECOSYSTEM

James Johnson

We are in an era of rapid, disruptive technological change, especially in artificial intelligence (AI). AI is already being infused into military hardware, and armed forces are continually furthering their planning, research and development, and in some cases deployment of AI-enabled capabilities. The embryonic effort to reorient military forces for the future digitized battlefield is therefore no longer merely the stuff of speculation or science fiction. This essay revisits Massachusetts Institute of Technology political scientist Barry Posen's analytical framework for inadvertent escalation, with the psychological features of the security dilemma at its core, to consider how and why the novel characteristics of AI and the emerging digital information ecosystem may affect crisis stability and increase the risk of inadvertent escalation. Will AI-enabled capabilities increase inadvertent escalation risk? How might AI be incorporated into nuclear and conventional operations in ways that affect escalation risk? Are existing notions of inadvertent escalation still relevant in the digital age?

Escalation theorizing came to prominence during the Cold War with the development of nuclear weapons, particularly the need to conceptualize and control conflict below the level of total war. The Cold War history of nuclear weapons and escalation continues to provide the theoretical basis for escalatory strategies and to undergird debates about nuclear deterrence, strategic planning, and how a conventional skirmish could become a nuclear war. Herman Kahn's seminal work On Escalation conceptualizes a forty-four-rung escalation ladder, a metaphor that moves from low-level violence through localized nuclear war to conventional and nuclear attacks against civilian populations.

In Kahn's analysis, the concept of escalation is at its core a fundamentally psychological and perceptual one. Like related concepts such as deterrence and strategic stability, escalation relies on each actor's unique understanding of context, motives, and intentions, especially in the use of capabilities. Resolving the complex psychological variables associated with the cause, means, and effects of a military attack, whether kinetic or nonkinetic, remains a perplexing and elusive endeavor.

Kahn's escalation ladder metaphor, like any theoretical framework, has limitations. Actors do not necessarily move sequentially and inexorably from the lower rungs to the higher ones: rungs can be skipped, and actors can move down as well as up. There are many pathways and mechanisms between low-intensity conflict and all-out nuclear confrontation. Moreover, adversaries can be at different rungs or thresholds along what Kahn describes as the "relatively continuous" pathways to war. Despite these limitations, Kahn's escalation ladder remains a useful metaphorical framework for reflecting on the available options (e.g., a show of force, reciprocal reprisals, costly signaling, and preemptive attacks), the progression of escalation intensity, and possible scenarios in a competitive nuclear-armed dyad.

Rethinking Escalation in the Digital Age

Barry Posen identifies the significant causes of inadvertent escalation as the security dilemma, the Clausewitzian notion of the fog of war, and offensively oriented military strategy and doctrine. Posen’s causes remain valid sources of inadvertent escalation, even in the AI-enhanced information age.

The Security Dilemma

In his seminal work on the topic, Robert Jervis defines the security dilemma as the “unintended and undesired consequences of actions meant to be defensive.” Elsewhere, Jervis describes how “many of the means by which a state tries to increase its security decrease the security of others”—that is, one state’s gain in security can inadvertently undermine the security of others. The security dilemma’s characteristics compound the likelihood of inadvertent escalation in the digital age.

First, while rational nuclear-armed states share an obvious vested interest in avoiding an existential nuclear war, they also place a high value on their nuclear forces, both as vital national security assets and as symbols of national prestige and status.

Second, escalatory rhetoric or threats, especially in situations of military asymmetry, can easily be misperceived as unprovoked malign intent rather than as a response to the initiator's behavior, thus prompting action-reaction spirals of escalation.

Third, heightened tension and compressed decision-making pressures during a conventional conflict radically increase the speed at which action-reaction spirals of escalation unfold. The proliferation of AI compounds these dynamics, reducing the options and time available for de-escalation and increasing the risk of both horizontal (the scope of war) and vertical (the intensity of war) inadvertent escalation.

Clausewitzian Fog of War

Inadvertent escalation risk can also be caused by the confusion and uncertainty associated with gathering, analyzing, and disseminating relevant information about a crisis or conflict, with important implications for the management, control, and termination of war. The confusion and uncertainty associated with the fog of war can increase inadvertent escalation risk in three ways. First, it can complicate the management and control of military campaigns at the tactical level. Second, it can further compound the problem of offense-defense distinguishability. And third, it can increase the fear of a surprise or preemptive attack.

Taken together, these mechanisms can result in unintentional, possibly irrevocable, outcomes and thus obfuscate the meaning and the intended goals of an adversary’s military actions. Ultimately, the fog of war increases the risk of inadvertent escalation because misperceptions, misunderstandings, poor communications, and unauthorized or unrestrained offensive operations can impair the ability of defense planners to influence the course of war.

While disinformation and psychological operations involving deception and subversion are not new phenomena, new AI-enhanced tools enable a broader range of actors, both state and nonstate, to manipulate, confuse, and deceive using asymmetric techniques. Disinformation operations might erode credibility and undermine public confidence in a state's retaliatory capabilities by targeting specific systems that perform critical functions in maintaining those capabilities. For example, cyber "left of launch" operations, echoing the disruption of Iran's nuclear program by the Stuxnet cyberweapon, have allegedly been used by the United States to undermine Iranian and North Korean confidence in their nuclear forces and technological preparedness.

The potential of social media to amplify the effects of a disinformation campaign was demonstrated during the Ukrainian crisis in 2016, when several members of Ukraine's parliament were the victims of Russian information operations conducted via compromised cellular phones. How these novel techniques might be used during a nuclear crisis, however, and how they might thicken the fog of war, is less well understood and remains empirically untested.

During a nuclear crisis, a state might attempt to influence and shape an adversary's domestic debate to improve its own bargaining hand, whether by delegitimizing (or legitimizing) the use of nuclear weapons in an escalating situation or by bringing pressure on the adversary's leadership to sue for peace or de-escalate. This tactic may, of course, dangerously backfire. Public pressure might impel decision makers, especially thin-skinned or inexperienced leaders operating under the deluge of twenty-four-hour social media feedback and public scrutiny, to take actions they might not otherwise have taken. Moreover, a third-party actor could, to achieve its own nefarious goals, employ active information techniques, such as spreading false reports of a nuclear detonation, troop or missile movements, or missile launches, during a crisis between nuclear rivals to incite crisis instability.

Offensive Capabilities and Strategies

Because nuclear-armed and non-nuclear-armed adversaries lack a shared understanding of where the new tactical possibilities offered by AI-enhanced weapon systems sit on the Cold War–era escalation ladder, the pursuit of new strategic nonnuclear weapons (cyberweapons, drones, missile defenses, precision munitions, and counterspace weapons) increases the risk of misperception. The fusion of AI into conventional weapon systems, whether to enhance autonomous weapons, remote reconnaissance and sensing, missile guidance, or situational awareness, creates new possibilities for a range of destabilizing counterforce options against states' nuclear-weapon delivery and support systems, such as cyber "kill switch" attacks on nuclear command-and-control systems.

Russia, the United States, China, and North Korea are currently pursuing a range of nonnuclear delivery systems (hypersonic glide vehicles, stealth bombers, and a variety of precision munitions) and advanced conventional weapons that can achieve strategic effects without the use of nuclear weapons. The potential threat posed by these conventional counterforce weapons is compounded by the blurred line between the conventional and nuclear missions managed by dual-use command-and-control systems. Moreover, these technological advances have been accompanied by doctrinal shifts among certain regional nuclear powers (Pakistan, India, North Korea, and possibly China) that indicate a willingness to countenance the limited use of nuclear weapons to deter attack ("escalate to de-escalate") when facing a superior conventional adversary and the risk of large-scale conventional aggression, akin to the potential use of tactical atomic weapons to counter a Warsaw Pact invasion along the inner German border during the Cold War.

Given the confluence of secrecy, complexity, and erroneous or ambiguous intelligence data (especially from open-source intelligence and social media outlets), AI augmentation will likely compress decision-making timescales further and exacerbate the inherently asymmetric nature of information in cyberspace. For example, using AI-enhanced cyber capabilities to degrade or destroy a nuclear state's command-and-control systems, whether as part of a deliberate, coercive counterforce attack or in error as part of a limited conventional strike, may generate preemptive "use it or lose it" situations. These risks should give defense planners pause about using advanced conventional capabilities to project military power in conflicts with regional nuclear powers.

During, in anticipation of, or to incite a crisis or conflict, an actor could employ subconventional information warfare campaigns to sow division, erode public confidence, and delay an effective official response. The public confusion and disorder that followed the erroneous cell phone alert warning residents of Hawaii of an imminent ballistic missile threat in 2018 is a worrying sign of the vulnerability of US civil defenses to anyone seeking asymmetric advantages vis-à-vis a superior adversary. North Korea, for example, might conceivably replicate incidents like the Hawaii false alarm in a disinformation campaign, issuing false evacuation orders or nuclear alerts, or subverting real ones, via social media to cause mass confusion.

During a crisis in the South China Sea or South Asia, for example, when tensions are running high, disinformation campaigns could have an outsized impact on crisis stability, with potentially severe escalatory consequences. This impact would be compounded where decision makers rely heavily on social media for information gathering and open-source intelligence and are thus more susceptible to social media manipulation. In an extreme case, a leader may come to view social media as an accurate barometer of public sentiment, eschewing official evidence-based intelligence sources, regardless of the origins of this virtual voice. In the aftermath of a terrorist attack in India's Jammu and Kashmir in 2019, for instance, a disinformation campaign, conducted by terrorists seeking popular support for their insurgency, spread via social media amid a heated national election. The result was inflamed emotions and domestic political rhetoric in India that led to military retaliation against Pakistan, bringing two nuclear-armed adversaries close to an inadvertent conflict caused by disinformation, deception, and misperception.

This crisis provides a sobering glimpse of how information and influence campaigns between two nuclear-armed adversaries can affect crisis stability and raise the concomitant risks of inadvertent escalation. As seen in South Asia, costly signaling and testing the limits of an adversary's resolve in this new information ecosystem, intended to enhance security, can instead increase inadvertent escalation risks and leave both sides less secure.

The effect of escalatory rhetoric in the information ecosystem can be a double-edged sword for inadvertent escalation risk. On the one hand, public rhetorical escalation can mobilize domestic support and signal deterrence and resolve to an adversary, making war less likely. On the other hand, sowing public fear, creating distrust in the robustness of nuclear launch protocols, and threatening a rival leader's reputation as a strategic decision maker can increase the likelihood of war. Domestic public disorder and confusion, caused, for example, by a disinformation campaign or cyberattack, can act as an escalatory force, putting decision makers under pressure to respond forcefully to foreign or domestic threats in order to protect the state's legitimacy, self-image, and credibility. Ultimately, states' willingness to engage in nuclear brinkmanship will depend on intelligence, mis- and disinformation, cognitive bias, and the perception of, and the value attached to, what is at stake.

AI technology is already raising many questions about warfare and shifts in the balance of power, which are challenging traditional arms control thinking. How can decision makers mitigate the inadvertent escalation risks associated with AI and nuclear systems? Possible ways forward include arms control and verification, changes to norms and behavior, unilateral measures and restraint, and bilateral and multilateral stability dialogue.

Traditional arms control and nonproliferation frameworks of nuclear governance are not necessarily obsolete, however. Rather, we need to depart from conventional siloed, rigid, and stovepiped approaches and search for innovative frameworks to meet the challenges of rapidly evolving dual-use technology, the linkages between conventional and nuclear weapons, and the informational challenges of the new atomic age. Specific measures might include prohibiting or imposing limits on the fusion of AI into nuclear command-and-control systems, on autonomous nuclear-armed missiles, and on the role of AI in nuclear launch decisions.

Counterintuitively, perhaps, AI might also offer innovative ways to revise legacy arms control frameworks, or to create new ones, that contribute to noninterference mechanisms for arms control verification, reducing the need for boots-on-the-ground inspectors in sensitive facilities. AI technology could also improve the safety of nuclear systems. For instance, it could increase the security and robustness of command-and-control cyber defenses by identifying previously undetected vulnerabilities and weaknesses. The US Defense Advanced Research Projects Agency, for example, has already begun to study how AI might be used to identify vulnerabilities in conventional military systems. AI might also help defense planners design and manage wargaming and other virtual training exercises to refine operational concepts, test various conflict scenarios, and identify areas and technologies for potential development.

Finally, the topics and approaches of bilateral and multilateral initiatives such as confidence-building measures should be expanded to include the novel nonkinetic escalatory risks associated with complexity in the AI and digital domains (e.g., mis- and disinformation, deepfakes, information sabotage, and social media weaponization) during conventional crises and conflicts involving nuclear-armed states.
