
25 July 2023

ChatGPT is creating new risks for national security

Christopher Mouton

Large language models like ChatGPT and Claude offer a wide range of beneficial applications. However, there are significant risks associated with their use that demand a coordinated effort among partner nations to forge a solid, integrated defense against the threat of malign information operations.

Large language models can assist in generating creative story plots, crafting marketing campaigns and even offering personalized restaurant recommendations. However, they often produce text that is confidently wrong. This tendency has profound implications, not only for routine use of artificial intelligence, but also for U.S. national security.

AI-generated content can exhibit a phenomenon known as “truthiness,” a term coined by television host Stephen Colbert in 2005 to describe how information can feel right. The concept emphasizes that content with a coherent, logical structure can sway how even smart, sophisticated people judge whether something is true, despite its lacking factual accuracy.

Our cognitive biases mean that well-written content or compelling visuals can make claims seem more true than they are. As one scholar who has studied “truthiness” describes it: “When things feel easy to process, they feel trustworthy.”

Adversaries of the U.S. can gain an advantage by exploiting the capacity of AI models to sound “truthy,” crafting coherent, well-structured and persuasive sentences that mimic human writing. The internet, with its global reach, provides a potent medium for foreign interference through such subversive incursions of truthiness.

State actors are leveraging digital technologies to execute hostile information campaigns, using online tools and information operations to promote their interests. By manipulating cognitive fluency bias and truthiness to shape the sociopolitical arena, they expand the potential misuse of AI-driven language models for malign information operations, large-scale spear-phishing campaigns and increasingly believable deepfake media.

The exploitation of cognitive fluency bias in information operations can give misinformation a deceptive veneer of credibility, contributing to the destabilization of political systems and of societal coherence.

Brad Smith, the president of Microsoft, said during a May 25 speech that the U.S. will “have to address in particular what we worry about most [from] foreign cyber influence operations, the kinds of activities that are already taking place by the Russian government, the Chinese, the Iranians.”

Russian hackers’ crude deepfake depicting Ukrainian President Volodymyr Zelenskyy calling for Ukrainians to lay down their arms was easily identifiable. But it’s now evident the issue is no longer academic. The prospect looms that malign videos, forged documents and counterfeit social media accounts will stop looking crude and start conveying a deceptive semblance of authenticity.

A surge in subversion campaigns dependent on disinformation and the heightened efforts by malign actors underscore the necessity for a strategic response from the U.S. and its allies. To combat these trends, U.S. policymakers need a comprehensive strategy, built upon vigilant monitoring, proactive warnings and international collaboration.

The cornerstone of this strategy should be the vigilant monitoring of the information environment. Strengthened by advancements in AI and machine learning, continuous monitoring is vital for the early detection and neutralization of disinformation campaigns.

This stance echoes the U.S. Defense Department’s call for an improved capability to “monitor, analyze, characterize, assess, forecast, and visualize” the information environment, as detailed in its 2016 “Strategy for Operations in the Information Environment.” To implement this, military and intelligence agencies will need specialized units dedicated to information warfare, which can provide the crucial expertise needed to interpret and act on the collected data.

The U.S. government also needs a robust warning system that simultaneously promotes truth and exposes disinformation. Timely and effective warnings can help protect the public from false narratives, significantly curtailing the impact of disinformation campaigns. The power of truth is a formidable tool in this context; it serves as an effective countermeasure against the corrosive effects of falsehoods.

Lastly, there is a need to reinforce strategic partnerships with international allies. The global nature of malign disinformation campaigns mandates that counter-efforts be equally far-reaching. These partnerships provide invaluable local knowledge and foster trust, both of which can greatly enhance the credibility of warnings and bolster societal resilience against disinformation.

Christopher Mouton is a senior engineer at the think tank RAND and a professor at the Pardee RAND Graduate School.
