15 July 2018

Information Operations are a Cybersecurity Problem: Toward a New Strategic Paradigm to Combat Disinformation

by Jonathon Morgan and Renee DiResta

Disinformation, misinformation, and social media hoaxes have evolved from a nuisance into high-stakes information war. State actors with geopolitical motivations, ideological true believers, non-state violent extremists, and economically motivated enterprises can manipulate narratives on social media with ease, and it happens every day. Traditional analysis of propaganda and disinformation has focused fairly narrowly on understanding the perpetrators and trying to fact-check the narratives (fight narratives with counter-narratives, fight speech with more speech). Today’s information operations, however, are materially different – they’re computational. They’re driven by algorithms and conducted with unprecedented scale and efficiency. To push a narrative today, content is quickly assembled, posted to platforms with large standing audiences, and targeted at the users most likely to be receptive to it; the platforms’ algorithms are then manipulated to make the content go viral (or at least easily discoverable). These operations exploit structural weaknesses in our information ecosystem. To combat this evolving threat, we have to address those weaknesses…but as platform features change and determined adversaries find new tactics, it often feels like whack-a-mole. It’s time to change the way we think about propaganda and disinformation: it’s not a truth-in-narrative issue, it’s an adversarial attack in the information space. Info ops are a cybersecurity issue.

As the American Enterprise Institute’s Phillip Lohaus put it, “We tend to think of our cyberdefenses as physical barricades, barring access from would-be perpetrators, and of information campaigns as retrograde and ineffective. In other words, we continue to focus on the walls of the castle, while our enemies are devising methods to poison the air.” When lawmakers and business leaders discuss “cyber attacks,” they are generally thinking of network intrusions and exfiltration of data: password phishing, malware, DDoS attacks, and other exploits or sabotage that target specific devices or networks. Information warfare, by contrast, is an attack on cognitive infrastructure, on people themselves, on society, and on systems of information and belief. Its targets are diffuse and widespread. There are established best practices and frameworks for preventing and responding to cybersecurity attacks: identifying and managing vulnerable infrastructure, building a defensive environment around it, detecting and analyzing anomalous events on the network, responding to actual attacks, improving defensive measures, and recovering from successful attacks. There is no comparable playbook for information warfare, and that gap leaves democratic societies vulnerable.

Take, for example, a troubling statistic from a recent MIT study: on Twitter, lies are 70% more likely to be retweeted than facts. What’s more, a false story reaches 1,500 people six times faster, on average, than a true story. And while false stories outperform the truth on every subject—including business, terrorism and war, science and technology, and entertainment—fake news about politics consistently performs best. This is why hyper-partisan political propaganda is a popular tactic in information warfare. The goal is not to fool people into believing any one individual lie. It’s to overwhelm individuals’ ability to determine what’s true, to create chaos, and to undermine the social institutions we rely on to convey and evaluate information. What matters is widespread dissemination, repetition, and reinforcement of a message to a receptive audience, not any particular narrative. Any target is fair game: high-profile leaders, businesses, candidates in elections, the fringes of a political party, or other economic, social, and political groups.

Research vs. Practice

Information operations aim to push false narratives across the whole ecosystem at once. In fact, the problem is often visible only when examining how the system functions as a whole — any given node in the network of social information appears to be functioning correctly (just as with the best exploits), when in reality the node is naively carrying the attacker’s payload. Because the problem is systemic, we need a holistic view of the social ecosystem. Over the last few years, researchers have learned a nontrivial amount about how misinformation spreads online. There are fairly typical pathways that we’ve gotten good at tracking, understanding, and predicting. It’s an arms race, so tactics constantly evolve; nonetheless, we have a pretty good sense of how these attacks are conducted.
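The authors don’t prescribe a specific detection method, but to make ecosystem-wide monitoring concrete, here is a minimal sketch of one common heuristic: flagging messages that are pushed by an unusually large number of distinct accounts within a short window. The input format (posts pooled from multiple platforms as dicts with text, author, and timestamp fields) and the thresholds are illustrative assumptions, not a production design.

```python
from collections import defaultdict
from datetime import timedelta


def flag_coordinated_bursts(posts, window_minutes=10, min_accounts=20):
    """Flag near-identical messages amplified by many accounts in a short window.

    `posts` is assumed to be an iterable of dicts with 'text', 'author', and
    'timestamp' (datetime) keys, pooled from multiple platforms. The field
    names and thresholds are illustrative.
    """
    # Group posts by crudely normalized text (lowercase, collapsed whitespace).
    by_text = defaultdict(list)
    for post in posts:
        key = " ".join(post["text"].lower().split())
        by_text[key].append(post)

    flagged = []
    window = timedelta(minutes=window_minutes)
    for text, group in by_text.items():
        group.sort(key=lambda p: p["timestamp"])
        # Slide over the sorted posts looking for a dense burst of distinct authors.
        for i, first in enumerate(group):
            in_window = [p for p in group[i:]
                         if p["timestamp"] - first["timestamp"] <= window]
            authors = {p["author"] for p in in_window}
            if len(authors) >= min_accounts:
                flagged.append({"text": text,
                                "accounts": len(authors),
                                "start": first["timestamp"]})
                break
    return flagged
```

A heuristic this simple would of course be gamed quickly (slight rewording defeats the text normalization), which is exactly the arms-race dynamic described above; the point is only to illustrate what a cross-platform, system-level signal looks like compared with moderating individual posts.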

In the meantime, the social platforms are also getting better at detecting information operations – but mainly within their own walled gardens. This is a strategic gap: the problem is systemic, yet they are not looking at the entire ecosystem. As a result, an attack may be detected quite late in the game, leaving the platforms to do damage control: trying to stop a false narrative that is already underway, and then responding to allegations of censorship and incompetence. Reacting to false narratives once they are underway is incredibly difficult. It’s far better to prevent them from starting.

Two complementary sets of capabilities could work together to solve these problems: outside researchers and the tech companies. The difficulty is that outside researchers don’t have the visibility into user actions that Facebook, Twitter, and other large tech platforms have at their fingertips. Those companies hold much of the data needed to understand how these narratives are influencing users, and to what extent and under what conditions the false information is being absorbed. On the flip side, the platforms can see only into their own gardens, while external researchers can see activity across the social ecosystem as a whole. Governments, meanwhile, have access to information about geopolitical threats. Establishing an ISAC, or Information Sharing and Analysis Center, would help companies and vetted researchers share threat information; ISACs already exist in industries ranging from health to financial services to aviation. And traditional cybersecurity “pentesting,” in which external researchers try to exploit systems to identify vulnerabilities before the adversary does, would be highly useful to the platforms as they make changes to features.
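The piece leaves open what such sharing would look like in practice. As a purely illustrative sketch, a contribution to an ISAC-style repository could be a structured record like the hypothetical one below; the field names, values, and the idea of hashing account identifiers are assumptions loosely modeled on how cyber ISACs exchange indicators of compromise, not an existing standard.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class NarrativeIndicator:
    """Hypothetical record a platform or vetted researcher might contribute
    to a shared ISAC-style repository. Field names are illustrative only."""
    indicator_id: str
    reported_by: str                  # contributing platform or research group
    first_seen: datetime
    narrative_summary: str            # short description of the pushed narrative
    sample_urls: List[str] = field(default_factory=list)
    associated_accounts: List[str] = field(default_factory=list)  # hashed/anonymized IDs
    suspected_origin: str = "unknown"  # e.g. state-sponsored, financially motivated
    confidence: float = 0.5            # reporter's confidence, 0-1


# Example: a researcher shares an indicator so other members can check whether
# the same URLs or account clusters are active on their own platforms.
example = NarrativeIndicator(
    indicator_id="2018-07-0001",
    reported_by="external-research-group",
    first_seen=datetime(2018, 7, 1),
    narrative_summary="Coordinated push of a fabricated business scandal",
    sample_urls=["https://example.com/fake-story"],
    confidence=0.7,
)
```

The value of a shared format, as with existing ISACs, is less the schema itself than the fact that an indicator spotted by one member can be checked against every other member’s garden before the narrative peaks.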

Narrative solutions to information war — refuting false statements one by one, or trying to counter-propagandize — are ineffective and inefficient in the vast majority of cases. They’re necessary tools to have and to develop, but we should aim to preempt the incidents that make them necessary. The cybersecurity model — identifying patterns of infected nodes in the information distribution network and shutting down or quarantining the infected area — makes that preemption possible.
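The article doesn’t specify how an “infected area” would be identified. One commonly discussed approach, sketched below under stated assumptions, is to build a co-amplification graph and surface unusually dense clusters of accounts for review. The sketch assumes a `shares` mapping from each URL to the set of accounts that posted it and uses the networkx library; the thresholds and the quarantine action itself (downranking, review queues) are illustrative, not a recommendation for automatic removal.

```python
from itertools import combinations

import networkx as nx


def candidate_quarantine_clusters(shares, min_shared_urls=5, min_cluster_size=10):
    """Return dense clusters of accounts that repeatedly amplify the same URLs.

    `shares` is assumed to map each URL to the set of accounts that posted it.
    Large, tightly connected components are candidates for human review or
    quarantine (e.g. downranking), not automatic removal.
    """
    graph = nx.Graph()
    for url, accounts in shares.items():
        # Count how many URLs each pair of accounts has amplified together.
        for a, b in combinations(sorted(accounts), 2):
            if graph.has_edge(a, b):
                graph[a][b]["weight"] += 1
            else:
                graph.add_edge(a, b, weight=1)

    # Drop weak co-amplification ties, then keep the sizable components.
    weak = [(a, b) for a, b, w in graph.edges(data="weight") if w < min_shared_urls]
    graph.remove_edges_from(weak)
    return [c for c in nx.connected_components(graph) if len(c) >= min_cluster_size]
```

Organic audiences also share the same links, so a signal like this only narrows the search; in the cybersecurity framing, it plays the role of anomaly detection that feeds a response process, not a verdict.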

The online information system itself is under attack; a piecemeal solution doesn’t address a systemic problem. In the economic realm, businesses are being targeted and brand equity compromised. Industries from entertainment to agriculture to energy have become the focus of both state-sponsored strategic interest and economically motivated private actors. Yet well over a year after the 2016 election, we are still dependent on rudimentary responses and half-measures, like trying to moderate content a bit better or requiring ID verification to run ads. We are bringing the proverbial knife to a gunfight. It’s time to establish the kinds of partnerships that already exist for dealing with infiltration efforts and more traditional cybersecurity risks. It’s critical for individuals, for industry, and for democracy to move toward a new strategic and collaborative paradigm.
