27 August 2024

AI in Precision Persuasion. Unveiling Tactics and Risks on Social Media

Gundars Bergmanis-Korāts, Tetiana Haiduchyk & Artur Shevtsov

Introduction

The past decade has seen extraordinary developments in deep machine learning techniques in the field of generative AI. These have enabled the creation of sophisticated AI models capable of generating textual, auditory, and even visual content. In today’s landscape, the market-driven hype surrounding these models has skyrocketed. New potential use cases are being discovered in almost every sphere of human affairs, allowing us to observe the real consequences of the widespread adoption of AI models. Previous research1 has shown that this rapid advance of AI presents significant opportunities, such as identifying hostile communications. However, it also entails substantial risks, including the generation of deepfakes and other manipulative content on social networks, which can be used to disseminate disinformation. A clear example of the increasing risks is the tenfold rise in the proportion of tweets by pro-Kremlin hyperactive anonymous ‘troll’ accounts in 2023 compared to the first months of Russia’s invasion of Ukraine.2 The application of AI models by adversaries to conduct global persuasion operations, which can lead to AI incidents,3 forces us to stay vigilant and ready to counter these threats effectively.

The most illustrative example is the impact of LLMs on text generation. Models such as GPT, Gemini, and Claude are trained on vast corpora of data collected from diverse information sources, ranging from news articles to code repositories and non-public databases. These models have performed well4 on a wide variety of content generation and editing tasks, leading to their rapid adoption by users across many categories and modalities. Consequently, concerns have emerged about disinformation, security risks, and the dissemination of biased or low-quality information across the web. Recent research published in Nature demonstrated that using online search to verify potentially false news may actually increase belief in it, particularly when search results prioritize low-quality sources.5 This phenomenon is especially concerning when AI-generated content floods the web and is indexed by search engines. In the visual domain, AI models show a similar trend; development has focused mainly on image generation, as video generation is still at an early stage. Leading models such as Stable Diffusion, DALL·E, and Midjourney produce images of impressive quality and are notable in digital art for their creativity and expressiveness.
