by Linda Slapakova
Disinformation has become a defining feature of the COVID-19 crisis. With social media bots (i.e., automated agents engaging on social networks) nearly twice as active during COVID-19 as during past crises and national elections, the public and private sectors have struggled to address the rapid spread of false information about the pandemic. This has highlighted the need for effective, innovative tools to detect disinformation and to strengthen institutional and societal resilience against it. Leveraging Artificial Intelligence (AI) represents one avenue for the development and use of such tools.
To provide a holistic assessment of the opportunities offered by an AI-based counter-disinformation framework, this blog first discusses the various roles that AI plays in counter-disinformation efforts. It then examines the prevailing shortfalls of AI-based counter-disinformation tools, the technical, governance, and regulatory barriers to their uptake, and how these could be addressed to foster the adoption of AI-based solutions for countering disinformation.
The Double-Edged Sword of Emerging Technologies and Disinformation
Emerging technologies, including AI, are often described as a double-edged sword in relation to information threats. On the one hand, emerging technologies can enable more sophisticated online information threats and often lower the barriers to entry for malign actors. On the other hand, they can provide significant opportunities for countering such threats. This has been no less true in the case of AI and disinformation.
Though the majority of malign information on social media is spread by relatively simple bot technology, existing evidence suggests that AI is being leveraged for more sophisticated online manipulation techniques. The extent of the use of AI in this context is difficult to measure, but many information security experts believe that AI is already being leveraged by malign actors, for example to better determine attack parameters (e.g., 'what to attack, who to attack, [and] when to attack'). This enables more targeted attacks and thus more effective information threats, including disinformation campaigns. Recent advances in AI techniques such as Natural Language Processing (NLP) (PDF) have also given rise to concerns that AI may be used to create more authentic synthetic text (e.g., fake social media posts, articles, and documents). Moreover, deepfakes (i.e., the leveraging of AI to create highly authentic and realistic manipulated audio-visual material) represent a prominent example of an image-based, AI-enabled information threat.
AI also provides various opportunities for strengthening responses to increasingly sophisticated and democratised disinformation threats. Aside from the increased accuracy with which AI models can detect pieces of false or misleading information or recognise the tactics used by social media bots in spreading disinformation (e.g., the rhetorical tools they adopt), AI models also represent more cost-effective avenues for countering disinformation by reducing the time and resources needed for detection. AI-based solutions have been adopted, for example, to identify social media bots through automated (PDF) bot-spotting or bot-labelling (i.e., detecting and labelling fake social media accounts). The pace and scale of disinformation on social media are also increasingly challenging the reliance on manual fact-checking, indicating that a level of automation may indeed be required to effectively address the social media-based disinformation challenge.
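To make the bot-spotting idea more concrete, the sketch below trains a standard supervised classifier on hypothetical account-level signals (posting rate, follower-to-following ratio, retweet share, posting cadence), with synthetic labels standing in for human annotation. The feature set, thresholds, and use of scikit-learn are illustrative assumptions rather than a description of any platform's actual detection system.

```python
# Minimal bot-spotting sketch: a supervised classifier over hypothetical
# account-level features. Feature names, distributions, and the labelling
# rule are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical features per account: posts/day, follower-to-following ratio,
# share of retweets, mean seconds between posts. Labels: 1 = bot, 0 = human.
n = 2_000
X = np.column_stack([
    rng.gamma(2.0, 20.0, n),      # posts_per_day
    rng.lognormal(0.0, 1.0, n),   # follower_following_ratio
    rng.beta(2.0, 2.0, n),        # retweet_share
    rng.exponential(600.0, n),    # mean_inter_post_seconds
])
# Toy labelling rule standing in for human-annotated ground truth:
# very frequent, rapid-fire posting is treated as bot-like.
y = ((X[:, 0] > 60) & (X[:, 3] < 300)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

In practice, the value of such a model lies less in any single feature than in combining many weak behavioural signals, which is precisely where automated approaches outpace manual review.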
AI can also play an enabling role in efforts to foster wider institutional and societal resilience to disinformation. This can include the integration of AI-based detection into toolkits and applications that flag false or misleading content on social media to users while also educating them about various information manipulation techniques, thus advancing digital literacy.
Challenges in AI Performance and Uptake of AI-Based Capabilities
While AI presents a potent resource for countering disinformation, several challenges and barriers have thus far limited the uptake of AI-based counter-disinformation tools.
First, the technical limitations of current AI models for detecting new pieces of disinformation have meant that AI-based detection remains largely limited to hybrid, human–machine approaches in which human fact-checkers identify a piece of disinformation and only thereafter is an AI model used to detect variations (i.e., identical or similar posts) of disinformation. Facebook has, for example, made use of such an approach in efforts to combat COVID-19 disinformation. The development of effective AI models that are themselves able to detect novel pieces of disinformation thus remains technically challenging and time- and data-intensive.
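A minimal sketch of that variation-matching step is shown below, assuming a simple TF-IDF cosine-similarity matcher. The debunked claim, candidate posts, and flagging threshold are invented for illustration; production systems would typically rely on stronger semantic-embedding models and far larger databases of fact-checked claims.

```python
# Sketch of the "detect variations of a known claim" step in a hybrid
# human-machine workflow: a human fact-checker supplies a debunked claim,
# and a similarity model flags near-duplicate or paraphrased posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

debunked_claim = "Drinking hot water cures the coronavirus."  # identified by a human fact-checker
candidate_posts = [
    "hot water will cure covid, pass it on!",
    "Remember to wash your hands and keep your distance.",
    "Apparently drinking warm water gets rid of the corona virus",
]

vectorizer = TfidfVectorizer(ngram_range=(1, 2)).fit([debunked_claim] + candidate_posts)
claim_vec = vectorizer.transform([debunked_claim])
post_vecs = vectorizer.transform(candidate_posts)

scores = cosine_similarity(claim_vec, post_vecs).ravel()
THRESHOLD = 0.3  # illustrative cut-off; would be tuned against labelled data
for post, score in zip(candidate_posts, scores):
    flag = "FLAG" if score >= THRESHOLD else "ok"
    print(f"{score:.2f} [{flag}] {post}")
```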
Second, potential algorithmic bias and a lack of algorithmic transparency and interpretability have also been highlighted as significant challenges for wider AI adoption in counter-disinformation. While investment in the field of Explainable Artificial Intelligence (XAI) (PDF) has increased significantly, many AI models are still constructed as 'black box' solutions (PDF), i.e., models that do not allow developers to fully interpret how the model arrives at its outputs. This has produced concerns that AI models may systematically reproduce biases (PDF) (e.g., gendered or racial biases) from the data on which they are trained. Combined with the aforementioned technical challenges, this has sparked concerns about the potential risks of AI-enabled content moderation for freedom of expression and content plurality through false positives, i.e., situations in which AI models mistakenly flag legitimate content as false or misleading.
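One commonly cited mitigation, which also bears on the capacity-building priority discussed below, is to prefer shallow, interpretable models whose behaviour can be audited. The sketch below, using an assumed bag-of-words logistic regression on invented toy data, shows the two things such an audit typically looks at: the per-word weights driving each decision and the number of false positives (legitimate posts mistakenly flagged).

```python
# Illustrative contrast with a 'black box': a shallow text classifier whose
# learned weights can be inspected directly, plus the false-positive count
# that matters for freedom-of-expression concerns. Posts and labels are toy data.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

posts = [
    "miracle cure stops the virus overnight",
    "secret cure they do not want you to know",
    "vaccine trials enter phase three this week",
    "hospitals publish updated visiting guidance",
    "this one trick cures covid instantly",
    "health ministry releases daily case figures",
]
labels = np.array([1, 1, 0, 0, 1, 0])  # 1 = flagged as misleading (toy labels)

vec = CountVectorizer()
X = vec.fit_transform(posts)
clf = LogisticRegression().fit(X, labels)

# Interpretability: per-word weights show what drives each decision.
weights = sorted(zip(vec.get_feature_names_out(), clf.coef_[0]),
                 key=lambda kv: kv[1], reverse=True)
print("most 'misleading-leaning' words:", weights[:3])

# False positives: legitimate posts mistakenly flagged as misleading.
tn, fp, fn, tp = confusion_matrix(labels, clf.predict(X)).ravel()
print(f"false positives on this toy set: {fp}")
```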
Third, many organisations lack the institutional capacity to leverage the potential opportunities provided by AI-based tools in counter-disinformation. Such limited capacity can, for example, include a lack of technical expertise to manage AI models, interpret their outcomes, and understand their wider policy and governance implications.
Priorities for Creating an AI-Based Counter-Disinformation Framework
Realising the myriad opportunities for leveraging AI to counter disinformation may require stakeholders to address the challenges and barriers described above through regulatory, technology-oriented, and capacity-building measures. There are three key priorities towards which these measures could be oriented:
Government stakeholders could engage with platforms and technology developers to prioritise technology development towards strengthening the ability of AI models to recognise contextual nuance in social media discourse and to adapt more rapidly to recognise novel pieces of disinformation. As RAND Europe's previous research highlighted, linguistic stance technologies can provide significant opportunities in this context, enhancing AI-based detection models by analysing potentially false or misleading information within the wider rhetorical battlefields of social media discourse (a minimal sketch of this kind of stance classification follows this list).
The development of new technical, AI-based approaches for countering disinformation could be made sufficiently 'future proof' by considering the potential impacts of an AI-based counter-disinformation framework on digital human rights such as freedom of expression online. The adoption of AI could also be treated as an enabler of a more comprehensive response to disinformation, rather than as an isolated, overly technology-centric solution. Future efforts could therefore also focus on fostering societal resilience to information threats through digital literacy. This could include strengthening social media users' understanding of the potential impacts of technologies such as AI on social media content and strengthening their ability to recognise malign information while engaging in informed discourse with others.
The integration of AI in counter-disinformation frameworks could go hand in hand with comprehensive organisational capacity-building. The adoption of shallower but interpretable models can, for example, foster institutional capacity for using AI-based disinformation detection models. Beyond detection, institutions, particularly in the public sector, might explore specialised AI training so that technical personnel are able to leverage innovative AI-based solutions for countering disinformation.
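As a rough illustration of the stance-based approach referenced in the first priority above, the sketch below trains a toy supervised model to label replies as supporting, denying, or querying a claim, so that the surrounding discourse can inform how a piece of content is scored. The training pairs, labels, and pipeline are invented assumptions; real stance models are trained on large annotated corpora and are considerably more sophisticated.

```python
# Minimal stance-classification sketch: given a claim and a set of replies,
# a supervised model labels each reply as supporting, denying, or querying
# the claim. Denials and corrections in the replies can then feed into the
# wider disinformation assessment.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: each example is "claim [SEP] reply", labelled by stance.
train_texts = [
    "5g towers spread the virus [SEP] yes this is exactly what is happening",
    "5g towers spread the virus [SEP] this has been debunked by scientists",
    "5g towers spread the virus [SEP] is there any evidence for this?",
    "masks do not work [SEP] totally agree, stop wearing them",
    "masks do not work [SEP] studies show masks reduce transmission",
    "masks do not work [SEP] where did you read that?",
]
train_stances = ["support", "deny", "query", "support", "deny", "query"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(train_texts, train_stances)

# Inference: stance of new replies towards a claim under review.
claim = "hot water cures the coronavirus"
replies = ["doctors have repeatedly said this is false",
           "i tried it and it works, share widely"]
for reply in replies:
    print(model.predict([f"{claim} [SEP] {reply}"])[0], "-", reply)
```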