Zac Rogers
Information warfare is, at base, a contest of narratives. Not merely collections of competing facts or data points arrayed against one another, narratives are stories that imbue facts and other forms of information with meaning. Crucially, they do not float freely—certain conditions must exist in the information environment for narratives to be sustained and transmitted over space and time. The contemporary information environment is host to conditions increasingly hostile to this process. Anecdotally, and as anyone awake for the first half of 2020 can attest, narratives transiting the information environment are now so numerous, and blink into and out of existence so rapidly, that they no longer carry the impact they once did. The net effect of the fragmentation and disutility of the information environment is not merely one of many more contested narratives. It is one of no narratives.
Digital media introduced a new scale, pace, and pattern to human communication, and, in this way, altered how the world is perceived. With regard to scale, we encounter an unprecedented amount of information about the world at large through digital media. With regard to pace, we encounter this information with previously unknown and unrelenting immediacy. And, with regard to pattern, we encounter it both in novel social contexts and in a form that bears greater resemblance to a database than a story.
If the very conditions in which narrative is sustained and propagated are extinguished, and if we understand information warfare (IW) as a contest of narratives, where does this leave IW?
This situation was foreseen. An argument broke out in the late 1990s in the US national security, intelligence, and defense community, pitting those who believed IW to be the future of war against a more skeptical cohort, who not only saw tactical and technical obstacles to the successful use of IW, but also feared a strategic-level problem. They argued that the United States and its allies, many of which remained open, democratic, convention-based societies, stood to lose much more than they would gain from allowing, or enabling via neglect and mishandling, the information environment to become a zone of mass-targeted, multilayered manipulation. Further, the debasement of the information environment would sharply diminish the ability to influence an adversary with measures short of war. The irony of the age of information would be that it could herald the end of influence.
Peter Feaver flagged this concern in 1998, writing:
While IW is promoted as a means of greatly increasing the quantity and quality of information available to the national command authority (NCA) in a war or crisis and thus apparently facilitating strategies of coercion, in fact the net effect of a move to an IW environment is likely to decrease the confidence of the NCA on both sides in their own information, thus complicating efforts to coerce an opponent short of traditional war.
He also spotted an aspect of the debate now biting the national security community hard. Feaver saw in advance how a state's actual IW capability and associated activities would quickly become irrelevant. "This problem," he writes, "may arise simply if one of the parties to a contest believes that the other side has a robust IW capability, whether or not the other side does in fact have one" (emphasis added). The classic security dilemma, cast through the lens of IW, has conspired with trends in commercial activity to render the information environment not only devoid of even a modicum of trust, but in fact a hive of paranoia.
That is not to say states and their proxies are not conducting malign activities in the information environment—they are, and their actions have varying effects. But the temptation to suspect the other of wildly elaborate influence activities, and the rush to "win" in games of manipulation, give way to hubris. The threat of an enemy at the gates can pale in comparison to the damage done by the monster under the bed. The shift of the contemporary battlespace to a society-centric one brought the implications of the fragmenting information environment home. In terms of debilitating the polity, open democracy falls harder at these obstacles than authoritarian systems, in which a level of fear-driven paranoia is normal operating procedure. Autocracies are not immune to the perils of trying to control information in this fragmented information environment, but coercive authority is less reliant on facilitative conditions. It falls back on Hobbesian calculations of power and security more readily than do the authoritative institutions of open society, which depend for their legitimacy on participatory consent. The consent of the participant is, at base, an investment in a sustainable narrative.
While the actions taken or not taken by the national security communities around the world factor heavily in this outcome, it pays to look more closely at what has destroyed the conditions for a sustainable narrative. Greatly exacerbating the deterioration of the information environment have been the activities of commercial actors. The rise to dominance of an internet business model based on behavioral manipulation, and the deployment across the vast online ecosystem of predictive algorithms that steer, direct, and "nudge" human behavior, have led to what Matthew B. Crawford describes as a politics of anticlericalism. In a dynamic likened to the aftermath of the French Revolution, a public convinced that untrustworthy elites are conspiring to control and exploit it is matched in the intensity of its paranoia only by the elites themselves, who are increasingly convinced that an anxious public is plotting their overthrow and dispossession. Both parties are terrified of what democratized technology can do. Both parties can be seen as rapidly instituting the terms of their own paranoid visions.
In fact, this fear is strangely misplaced. Humans have both more say and less control over the effects of the technologies they interact with than is commonly assumed. A curious feature of contemporary techno-politics is the preference for viewing technology at its extremes—either as neutral, the mere instrument of good or bad actors, or as deterministic, a force in its own depersonalized right. The truth lies between these poles. Technology is never neutral—it has an agency of its own, often more alien and inimical to human agents than popularly conceived. At the same time, digital technologies in particular host certain fixed conveyances and obstructions that channel, constrain, and magnify aspects of human agency while excluding or sidelining others. The upshot is that rushing to deploy and scale technological regimes is fraught with unpredictable implications. This conclusion sits uncomfortably with the fact that, even as Feaver and others were writing about the risks, the notion that private sector innovation was the West's killer app in strategic competition had become the received wisdom of the age.
The reality has turned out to be quite different.
Dr. Zac Rogers is Research Lead at the Jeff Bleich Centre for the US Alliance in Digital Technology, Security, and Governance at Flinders University of South Australia. His research combines a traditional grounding in national security, intelligence, and defence with emerging fields of social cybersecurity, digital anthropology, and democratic resilience.
The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.