Sasha Polakow-Suransky
To the reader: In this exercise, you will be presented with two unsigned articles on Russia’s annexation of Crimea. One was written by an undergraduate student, and the other was generated by OpenAI’s latest language model, GPT-4, using ChatGPT Plus, a paid, premium version of the popular chatbot. Both responded to an identical prompt: “Please write a 600-700-word essay arguing that allowing Russia to annex Crimea paved the way for a larger war in Ukraine, stating your core argument in the first few paragraphs. Please cite all sources using footnotes.”
HOW IT WORKS
You will be asked to guess which essay was written using artificial intelligence. Then we will reveal the true authors, share Foreign Policy’s editorial comments, and offer a short analysis of the strengths and limitations of AI in producing foreign-policy analysis.
ESSAY A

The Russian annexation of Crimea, a Ukrainian peninsula, constituted the largest seizure of foreign land since the end of World War II. It defied a universal international understanding held throughout the latter half of the 20th century: Independent countries maintain their territorial integrity. The invasion of Ukraine marked a similar departure from international norms. The seizure of Crimea began a series of smaller invasions in eastern Ukraine, all indisputably linked to the Ukrainian war. The lack of international response to the annexation of Crimea implied a similar passivity in the event of a larger invasion of Ukraine, lowering the perceived risk of attempting a comparable occupation and encouraging Russian action.
The annexation of Crimea received little international response or outcry in March 2014. The European Union levied ineffective economic sanctions on the newly Russian-controlled territory in June 2014. The United Nations formally affirmed Ukraine’s territorial integrity, including Crimea, but it did not pursue further action beyond that statement. There was no NATO response to the Russian encroachment. The lack of troop deployments or physical military aid signaled that European and North American countries held little stake in the conflict. News coverage of the invasion, along with public attention and outcry, faded quickly and significantly. Researchers at the time denounced the lack of international assistance. These scholars feared the annexation signaled the start of a larger conflict in the region and believed the ineffectual Western response would encourage further Russian expansion and invasion. These fears were soon realized.
Russia continued to engage in further seizures after the initial invasion of Crimea in 2014, just as researchers predicted. The annexation began a series of armed conflicts in eastern Ukraine, resulting in more than 14,000 casualties. These skirmishes included a 2018 incident in which Russian ships fired upon several Ukrainian naval vessels in international waters.
The international response to these continued conflicts, even those beyond the Russian-Ukrainian border, remained limited and ineffective. The EU imposed further—still largely ineffective—sanctions, and many countries publicly denounced the conflicts, but Western governments still refused to provide military support. These invasions and smaller conflicts continued for nearly a decade without significant Western military interference or international attention.
It is clear these conflicts, beginning with the Crimean invasion and continuing throughout the 2010s, were part of Russia’s use of “salami tactics” to eventually annex Ukraine in its entirety. Salami tactics refer to a method of slowly taking control of a region by occupying numerous small portions of it, one after another. Russia invaded Ukraine piecemeal, slice by territorial slice.
The Crimean invasion assured Russia that the international community would not respond with force to small-scale annexations. This gave Russia enough confidence to continue seizing small slices of Ukrainian land from 2014 to 2021 without concern over foreign interference.
The lack of international response to the Ukrainian conflict eventually convinced Russia that salami tactics were no longer necessary. The absence of international support for Ukraine in the wake of Crimea directly bolstered Russia’s confidence in its ability to incite conflict in the region and further its territorial expansion.
Russia’s preparation to seize control in the region began with the invasion of Crimea in 2014 and continued over nearly a decade. The lack of international response to the initial annexation, as well as to subsequent acts of territorial seizure throughout this period, encouraged Russia to continue. This directly paved the way for a larger conflict to begin in Ukraine.
ESSAY B

In the chess game of geopolitics, the annexation of Crimea by Russia in 2014 marked a significant shift in power dynamics. This event, far from being a mere territorial dispute, signaled a potential resurgence of Russian expansionism and effectively undermined the post-Cold War order. This essay argues that the annexation of Crimea paved the way for a larger war in Ukraine by destabilizing the region, emboldening Russian assertiveness, and exposing the weakness of the international community’s response.
The annexation of Crimea was a calculated move that destabilized Ukraine and the region. Crimea, with its strategic location and large Russian-speaking population, was an ideal target for Russia’s expansionist ambitions. The annexation, by altering Ukraine’s territorial integrity and sovereignty, triggered a domino effect, destabilizing the Donbas region and leading to a protracted conflict in eastern Ukraine.
By annexing Crimea, Russia demonstrated its readiness to use military force to achieve geopolitical goals. This action emboldened Russia, escalating its assertiveness in the region. The annexation was not an isolated incident but part of a larger pattern of Russian aggression, which was further evidenced by its direct and indirect involvement in the war in eastern Ukraine.
The international response to the annexation of Crimea exposed the limitations and weaknesses of existing mechanisms to prevent such violations. Despite strong condemnations and the imposition of sanctions by the West, Russia faced no significant military or political repercussions. This lack of effective deterrence has contributed to the subsequent escalation of the conflict in Ukraine.
The annexation of Crimea, therefore, created a dangerous precedent. What began as a territorial dispute evolved into a larger war, challenging the established norms of international relations and posing profound questions about the future of peace and security in the region.
The destabilization of Ukraine following the annexation of Crimea fueled separatist sentiments in the Donbas region, leading to a violent and protracted conflict. The annexation also created a power vacuum in the region, enabling the rise of various armed groups and intensifying the conflict.
The assertiveness displayed by Russia during the annexation of Crimea has further escalated the conflict in Ukraine. By demonstrating its willingness to use force to alter borders, Russia sent a clear message to the international community about its geopolitical ambitions. This brazen display of force was a catalyst for further aggression.
Finally, the inadequacy of the international response to the annexation of Crimea has exacerbated the conflict in Ukraine. The imposition of economic sanctions by the West has not deterred Russia from pursuing its aggressive policies.
The annexation of Crimea by Russia marked a turning point in the conflict in Ukraine. It destabilized the region, emboldened Russia, and exposed the limitations of the international community. As such, it paved the way for a larger war in Ukraine, posing a profound threat to peace and security in the region.
ChatGPT, for all its amusing poetic output, riddle-solving prowess, and on-demand composition, is essentially doing one thing: spitting out what it considers the next most likely word in a sequence.
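To make that mechanism concrete, here is a minimal sketch, not drawn from the exercise itself, of what “predicting the next most likely word” looks like in code. It uses the small, openly available GPT-2 model via Hugging Face’s transformers library as a stand-in for the far larger model behind ChatGPT, and the prompt string is purely illustrative.

```python
# Minimal illustration (assumed setup, not the article's own code): ask GPT-2,
# an open predecessor of GPT-4, which words it considers most likely to come next.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The annexation of Crimea in 2014 was"   # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits                # shape: (batch, sequence, vocabulary)

# Convert the scores at the final position into a probability distribution
# over the vocabulary: the "next most likely word in a sequence."
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = next_token_probs.topk(5)

for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(token_id)!r}  p={prob.item():.3f}")
```

The model never evaluates whether any continuation is true; it only ranks continuations by probability, which is the limitation the experts quoted below keep returning to.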
“Generative AI systems are quite good at mimicking the patterns of human language,” said Sarah Myers West, the managing director of the AI Now Institute and a former advisor on AI to the U.S. Federal Trade Commission. But, she added, “they lack any of the context or depth of human understanding and often are trained on static data sets that aren’t up to date with events in the world.” The result? “They serve more as [a] picture of what people said on the internet up until a year or two ago than they are a meaningful reflection of the reality we live in.”
When instructed to write in the style of a widely published author, such as the late Christopher Hitchens, ChatGPT and other large language models can be very good mimics. When it comes to producing original analytical content, though, they struggle—and the tone can seem vague or overly generalized.
As Flynn Coleman, an international human rights lawyer and the author of A Human Algorithm: How Artificial Intelligence Is Redefining Who We Are, told Foreign Policy via email: “These tools are not capable of original, authentic, or empathetic human thought.”
“They cannot replicate the creativity, nuance, and critical thinking that we possess, nor can they generate and interrogate original arguments,” she added.
While the chatbot is capable of self-improvement and correction, its writing and rewriting are formulaic rather than imaginative. Its limitations were evident in this assignment—as when we instructed it, in an earlier interaction, to rewrite a passage arguing for a negotiated cease-fire while taking into account possible Ukrainian objections. Given this prompt, it was not able to produce any genuine analysis of how or why Ukraine might object. GPT-4 instead simply modified the text mechanically, urging Western nations to “respect Ukraine’s sovereign decisions.”
In academia, there are well-founded fears that AI-generated content won’t be detectable by existing tools such as plagiarism software. Students are already using tools such as ChatGPT to produce essays that aren’t original but could still get a passing grade. Paul Musgrave, an assistant professor of political science at the University of Massachusetts Amherst who helped facilitate this project—by asking his students to submit essays, one of which we chose to feature here, by undergraduate Lauren Grachuk—observed that “it’s a great machine for regurgitating the conventional wisdom, and like all conventional wisdom, it’s imprecise and unfounded.” Still, he said, “the thing about all of this for me is how easy it is for ChatGPT to get a C or a B … but how hard it is to get an A or even a B+.”
The reason ChatGPT has not yet cleared that bar has to do with its inability to detect or test what is true or false. In March, linguists Noam Chomsky and Ian Roberts and AI expert Jeffrey Watumull wrote an essay in the New York Times pointing out that current large language models cannot go beyond description and prediction and, as such, “are stuck in a prehuman or nonhuman phase of cognitive evolution.”
As David Schardt noted in a March article for the Center for Science in the Public Interest, “even when provided with accurate information, ChatGPT can get it wrong. Sometimes it puts words, names, and ideas together that appear to make sense but actually don’t belong together.” Indeed, many users have catalogued references to articles that don’t exist and fake legal case citations.
As Chomsky and his colleagues wrote, “machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.”
Some of ChatGPT’s forays into fiction quickly became evident during our interactions with the chatbot while preparing this feature. In an earlier iteration of the exercise, GPT-4 fabricated sources that attached real authors to plausible topics in plausible journals, but the titles and dates it provided led to articles that didn’t exist. In other cases, GPT-4 supplied realistic-looking JSTOR links alongside authentic citations; one reference to a real book about Crimea published in 2010 came with a link that led to a 1950 article on polynomials in a Scandinavian mathematics journal.
(The model does appear to be learning, however. Eight weeks later, most of these hallucinations seemed to have subsided; in the article we feature, it provided a genuine list of references to real articles on the topic of Crimea and Ukraine.)
The failure to distinguish truth from falsehood or the tendency to generate hallucinated content that is presented—and then accepted—as reliable information online does have more sinister implications. There are, for instance, fears that as some news and publishing outlets experiment with using large language models, false AI-produced content could flood the internet and that future models feeding on that data set will replicate and propagate falsehoods, making it increasingly difficult to discern fact from fiction in online sources.
Those risks increase when it comes to AI-generated images and videos, which have an arguably greater capacity to misrepresent reality and deceive viewers—especially in the event of deepfake videos or shocking AI-generated images of public figures emerging, say, at the height of a political campaign. Chris Meserole and Alina Polyakova presciently addressed this topic in Foreign Policy in 2018, noting that such images are difficult to counter because “the algorithms that generate the fakes continuously learn how to more effectively replicate the appearance of reality.”
These are still early days for large language models, and the pace of development is extremely rapid. “The reality is that these tools aren’t going anywhere and will only grow in popularity—Pandora’s box has been opened,” Coleman said.
This story also appears in the Summer 2023 issue of Foreign Policy.