Modes of warfare have undergone several changes since the end of World War II. In the 21st century, battle lines have grown more complex, drawing on technology and enlisting civilians as combatants in various forms of cyber warfare. A recent report by the United States (US)-based social media analytics firm Graphika revealed a pro-Chinese political spam operation promoting a new and distinctive form of video content, which has been operating since 2019 on popular social media platforms like Facebook, Twitter, and YouTube. The content, termed 'spamouflage', uses Artificial Intelligence (AI)-generated videos of fictitious persons to create deceptive political content. Frequent themes include US government inaction over gun violence and great power cooperation between the US and China. The report characterises these operations as State-aligned and political.
This development is significant not only as a technological innovation but also for the possibilities it opens up for the rampant dissemination of disinformation and influence operations in the social media space. The question that arises in this context is: what are the possible ramifications of politically divisive disinformation? The 21st century is one in which social media connects almost everyone. Impressionable audiences make up civil society, which in democratic societies votes for or against political candidates in elections. Turning audiences against electoral candidates whose world view is adversarial to the countries churning out the disinformation is a tactic that is finding increasing traction in the current world order. China, which has made great strides up the technology ladder, including in AI, often resorts to tools such as disinformation, spamouflage, bots, and the digital stack to create a narrative favourable to its world view.
The use of AI-generated content began with deepfake apps, which use deep learning algorithms for face-swapping. In September 2019, an app named Zao went viral in China, quickly becoming the most-downloaded free app on China's Apple app store. It allowed users to superimpose their faces onto video clips of celebrities in a few seconds, using just one photo. While the app seemed benign, there was a catch, which led to an uproar among users: Zao's user agreement granted the company 'completely free, irrevocable, perpetual, transferable and re-licensable' rights over user content. This meant that the company could exploit the data generated by users to perfect its deep learning models.
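To make the underlying technique concrete, the sketch below shows the shared-encoder, per-identity-decoder autoencoder design popularised by open-source face-swap tools: one encoder learns identity-agnostic face features, while each decoder learns to render one person's face. This is an illustrative assumption about how classic face-swapping works in general, not Zao's actual (unpublished) model; one-photo apps likely use more advanced few-shot methods, and the dimensions and layer choices here are arbitrary.

```python
# Minimal, hypothetical sketch of the classic face-swap autoencoder idea.
# Swapping = encode a face of person A, then decode with person B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),               # latent face code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),                                        # pixel values in [0, 1]
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training reconstructs A-faces via decoder_a and B-faces via decoder_b,
# forcing the shared encoder to capture pose/expression rather than identity.
face_a = torch.rand(1, 3, 64, 64)        # stand-in for a cropped face of A
swapped = decoder_b(encoder(face_a))     # A's expression rendered as B's face
print(swapped.shape)                     # torch.Size([1, 3, 64, 64])
```

The key design point is the asymmetry: because both identities pass through the same encoder during training, the latent code ends up carrying expression and pose, and the choice of decoder determines whose face appears in the output.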
In November 2022, China released its 'position paper' on strengthening the ethical governance of AI, which emphasises that, as the most representative disruptive technology, AI has brought uncertainty, giving rise to multiple global challenges and even fundamental ethical concerns, despite its enormous potential benefits. More recently, on February 16, 2023, China was among the 60 countries that signed a non-binding 'call to action' endorsing the responsible use of AI in the military. However, as is often the case in other domains, China's 'calls' and its 'actions' diverge to a great extent.
In August last year, New Kite Data Labs, a US-based think tank, revealed that the Beijing-based AI and data collection firm Speech Ocean had collected voice samples from militarily sensitive regions of India, including Jammu & Kashmir and Punjab. Speech Ocean is said to have worked with a New Delhi-based subcontractor, which recruited individuals to record their voices in their own languages and accents in return for small amounts of money. The report underlined that Speech Ocean is known to sell to the Chinese military, and that the data collected from India was sold to agencies in China for use and analysis.
Even as Chinese agencies collect datasets such as voice samples from Indian regions, Beijing introduced new rules in January this year to regulate the use of deepfakes inside China, tightening the government's grip on the Chinese tech sector. This reflects a dual strategy: exploiting the technology for geopolitical purposes abroad while suppressing similar activities at home that could cause political turmoil.
As AI-related technologies advance, the dangerous possibilities of deploying deceptive content for political and geopolitical gains will only increase. The fake anchors unearthed by Graphika can speak in multiple languages, backed by mature voice-sample datasets. Deepfakes can sow doubt, erode trust, reinforce existing biases, and manipulate opinions and choices; AI-powered deepfake tools can wreak havoc by accelerating and scaling the process. With regard to India, Chinese social media carries numerous disinformation narratives, such as claims that India is moving away from its democratic credentials or that it seeks to turn the entire South Asian region into a Hindu one. Such disinformation is then amplified by bots on popular social media channels. Awareness in India of how deepfakes work is low to begin with, and the country has a large, impressionable audience that relies on social media for information, which in turn shapes its belief systems. Greater public education on how social media can be divisive, along with closer collaboration among democracies to study the effects of State-controlled spamouflage, thus becomes pertinent.