Danielle Cave and Jake Wallis
The Covid-19 pandemic has caused unique societal stress as governments worldwide and their citizens have struggled to work together to contain the virus and mitigate its economic impact. This has been a trying time for democracies, testing the capacity of democratic governance to mobilise state and citizenry to work together. It has also tested the integrity of open information environments and the ability of these environments to deal with the overlapping challenges of disinformation, misinformation, election interference and cyber-enabled foreign interference.
Covid-19 has spurred the world into a new era of disinformation where we can see the daily erosion of credible information. Individuals, organisations and governments are increasingly fighting for the value of facts. But the global information environment was home to bad-faith actors long before the pandemic hit, from states interfering overseas or misleading their own populations with targeted disinformation to conspiracy groups like QAnon and alt-right extremist groups. Some of these groups have leveraged legitimate public concerns related to the pandemic, vaccine rollouts and issues like data privacy to build new conspiracy theories, and Covid-19 has provided them with a bigger platform to do so.
Relationships between governments and social media platforms are increasingly strained. Divisions are deepening over how best to balance free expression with the public harms caused by mis- and disinformation and by speech that incites violence or hatred, and over how to tackle rapidly emerging issues such as the proliferation of manipulated content and the risks posed by increasingly sophisticated deep-fake technologies that could mislead audiences and erode trust in institutions.
Policymakers are also increasingly frustrated at seeing authoritarian states, particularly China and Russia, leverage US social media networks and search engines to project propaganda and disinformation to an increasingly global audience. This is particularly perplexing, as the Chinese state, for instance, bans these same platforms at home and both limits access to and censors foreign embassy accounts on Chinese social media platforms.
The online ecosystem allows a range of state and non-state actors that are already manipulating the information environment for strategic gain to shift from exploiting the pandemic to disrupting and undermining trust in Covid-19 vaccine rollouts. This shift will only further tear at the already tense relationship between democratic governments and the major technology platforms. Anti-vaccination, conspiracy-infused misinformation disseminated across social media has mobilised public opposition to vaccine programs in several countries. January’s Capitol Hill riot in Washington demonstrated the potential for online mobilisation to transfer offline and manifest as violent civil unrest.
This is not the public square we want or need, especially as the world seeks to return to some version of normality in 2021.
From elections and state actors to 24/7 cyber-enabled interference
Governments have had more success at raising public awareness about online election interference than other forms of such intrusion. Since the Russian meddling in the 2016 US election, democracies have been concerned that the legitimacy of their mandates could be tarnished through election interference, although this preoccupation has distracted from authoritarian states’ broader efforts to shape the information environment.
ASPI’s research on cyber-enabled interference targeting electoral events has identified two predominant vectors: cyber operations, such as denial-of-service and phishing attacks to disrupt voting infrastructure and/or to target electronic voting; and online information operations that attempt to exploit the digital presence of election campaigns, voters, politicians and journalists. Together, these two attack vectors have been used to influence voters and their turnout at elections, manipulate the information environment and diminish public trust in democratic processes.
ASPI’s research identified 41 elections and seven referendums between January 2010 and October 2020 where cyber-enabled election interference was reported. There has been a significant uptick in this activity since 2017, with Russia the most prolific state actor engaging in online interference, followed by China (whose cyber-enabled election interference activity has increased significantly since 2019), Iran and North Korea.
Following several high-profile incidents of election interference, there is now a proliferation of multi-stakeholder forums designed to coalesce public and policy attention around malign online activity leading up to and during elections. But an exclusive focus on the interference that surrounds elections is problematic. Information operations and disinformation campaigns—that exploit the openness of the information space in democratic societies—are happening every day, across all social media platforms.
In the Philippines, researchers and investigative journalists have repeatedly shown how the Duterte administration has continued to rely on the influence-for-hire market of bloggers, trolls, inauthentic accounts and online influencers to create and promote pro-government content and help distract citizens from issues such as the government’s handling of Covid-19. Social media platforms are showing a growing willingness to publicly attribute such activity. In September, Facebook linked the Philippines’ military and police to a network of domestically focused fake profiles it removed, consisting of 20 Instagram accounts, 57 Facebook accounts and 31 Facebook pages.
And it’s not just states that spread disinformation. News outlets, fringe media and conspiracy sites—some with significant global reach—are also guilty of deliberately misleading their audiences. For example, in December 2019, Facebook took down more than 800 accounts, pages and groups linked to the conservative, Falun Gong–affiliated Epoch Times for misrepresentation and coordinated inauthentic behaviour.
Governments that shift their attention to these issues only in the lead-up to and during an election miss the bigger strategic picture, as malign actors consistently target the fissures of social cohesion in democracies. Some strategic actors have aspirations that are much more global than influencing an individual country’s election outcome. While governments have spent the last few years (re)building their capabilities to counter foreign interference, they are struggling to handle the different set of complicated challenges—from online attribution and enforcement, to protecting citizens from harassment and threats from foreign actors—posed by cyber-enabled foreign interference outside of election time. One issue is that, unlike with traditional foreign interference, the responsibility for action is distributed across the platforms and government agencies. In many countries, unless there’s an election to focus on, government leadership has mainly fallen through the cracks between intelligence, policing and policy agencies.
The Chinese state’s flourishing interference and disinformation efforts
Given authoritarian regimes’ limited capacity to absorb social unrest peacefully, including in cyberspace, the pandemic has threatened stability. The emergence of Covid-19 from Wuhan created the risk of domestic political instability for the Chinese Communist Party. The party-state’s international standing was endangered by the spread of the virus and the resulting global economic disruption.
So how did the CCP respond to this challenge as Covid-19 spread? It threw itself into a battle of information and narratives, much of which played out online and continues to evolve today. At home, it suppressed and censored information about the virus. Open-source researchers and citizen journalists in China who had been collecting and archiving online material at risk from censorship were detained and had their projects shuttered.
China’s censors also sent thousands of confidential directives to media outlets and propaganda workers, curated and amended trending-topics pages, and activated legions of inauthentic online commentators to flood social sites with distracting commentary. One directive from the Cyberspace Administration said the agency should control messages within China and seek to ‘actively influence international opinion’.
The effort to influence international opinion, which remains ongoing, relied on a very different toolkit to the one wielded at home. US social media networks were central, providing the ideal platform for China’s ‘wolf warrior’ diplomats, state media outlets, pro-CCP trolls and influencers, and inauthentic accounts to boost and project the CCP’s narratives, including disinformation about where the virus originated. They also provided the perfect space for this collective of online actors to try to undermine critical reporting from Western media, research institutes and NGOs, and to smear and harass researchers and journalists whose work provided facts and analysis in stark contrast to the propaganda being disseminated globally by the CCP.
The Chinese state’s large-scale pivot to conducting information and disinformation operations on US platforms occurred across 2019 and 2020. As the pandemic spread, the Chinese state found itself ideally positioned to experiment with cross-platform and multi-language information activity that targeted overseas audiences and largely amplified Covid-19-related and political narratives.
But the efforts of the Chinese state lack the sophistication of others that engage in this online behaviour, such as Russia. For example, the Chinese state makes little effort to cultivate and invest in rich, detailed online personas, and it lacks the linguistic and cultural nuance needed to build credible fake influence networks. Despite this, the Chinese state’s information efforts are persistent. While it may have focused on quantity over quality thus far, given the enormous resourcing the CCP can bring to developing this capability, quick improvement can be expected. The Chinese state also brings an alignment in tactics and coordination—among diplomats, state media outlets, co-opted foreign fringe media organisations, and pro-CCP trolls and influencers—that no other state can match.
Stronger defence and models for collaboration
Covid-19 and the CCP’s efforts to control and shape international information flows about the pandemic through online propaganda and disinformation have made clear just how easy it is for malign actors to manipulate open information environments.
Harder choices will have to be made about how to better protect our information ecosystems and how to deter and impose costs on the many malign actors seeking to exploit them. This will require governments to work more closely with the platforms and civil society groups to ‘defend forward’ to counter and disrupt malicious information activity. There is also a lucrative market of influence-for-hire service providers, to which state actors can outsource propaganda distribution and influence campaigns to obfuscate their activities. These commercial actors are increasingly part of the fabric of political campaigning in many countries. However, the lack of transparency around these activities risks corrupting the quality of democracy in the environments in which they operate.
Globalisation and the openness of democracies make these acute challenges; that very openness has left democratic states vulnerable to interference and subversion. Much of the thinking around cyber-enabled foreign interference has been framed by Russian meddling in the 2016 US election, yet other strategic actors are able to exploit disinformation campaigns and information operations in powerful combination with other levers of state power. China, for instance, has interwoven disinformation with its diplomatic and economic coercion of Australia in retaliation for the Australian government’s call for an independent international inquiry into the origins of the Covid-19 pandemic.
Given the cross-cutting nature of this challenge, diplomacy and policy are fundamental to pulling together like-minded countries to engage with and contest cyber-enabled foreign interference and the actors—state and non-state—that spread disinformation for strategic gain. Social media platforms have been a front line on this battlefield, and it is often the platforms that must detect and enforce against state-linked information operations and disinformation campaigns that exploit their users. Yet, the platforms are driven by different motivations from national governments.
Multilateral and multi-stakeholder approaches must be encouraged to facilitate the defence of democracy as a system of governance and values. This is particularly important in the arc of states from Japan down through Southeast Asia to India, many of which have fast-growing economies but fragile democracies, and where the Chinese state’s power projection has the potential to influence a long-term drift away from democratic governance.
There are models for collaboration between states in pushing back against interference. The European Centre of Excellence for Countering Hybrid Threats draws together expertise from across the EU and NATO to facilitate strategic dialogue on responding to hybrid threats, develop best practice, build capacity through training and professional development, and conduct joint exercises. The NATO Strategic Communications Centre of Excellence similarly combines strategic and tactical expertise from across the alliance in collective defence against disinformation and information operations.
These models could be replicated through the Quad grouping of Australia, India, Japan and the US. The alignment of interests among these countries could provide an important vehicle for building structures like those that have been trialled elsewhere and offer resilience against cyber-enabled foreign interference. This should include multi-stakeholder 1.5-track engagement that brings together governments, civil society and industry; mitigates the splintering of economic and national security interests; and drives greater investment in civil society capacity building around detection, strategic communications and digital diplomacy. Social media networks and search engines must do a better job of deterring and punishing actors that actively spread disinformation on their platforms, and should audit what they categorise and promote as ‘news’.
Finally, there is strength in democratic collectives. Governments themselves can take steps to mitigate the risks of cyber-enabled foreign interference, but democracies can increase their power by banding together to attribute, raise costs and deter interference by other states. States targeted individually may be reluctant to escalate grey-zone aggression. However, where there’s a collective response, adversaries are likely to recalibrate their behaviour in the face of collective actions like diplomatic measures and economic sanctions.
This is an abridged version of Danielle Cave and Jake Wallis’s essay for the Observer Research Foundation’s 2021 Raisina Dialogue. For the full paper, including detailed policy recommendations, click here.