David C. Benson
An unidentified Danish parliamentary wit once claimed, “It is difficult to make predictions, especially about the future.” Nowhere are predictions more challenging than in national security, where outcomes in war can bear little relation to expectations. Despite the difficulty of prediction, our future will be a product of today's human imagination. We can look to fiction to help anticipate the future, but we must be careful about what we internalize.
As a society, our fears and hopes about the future often manifest themselves in science fiction (sci-fi). Less than a decade after the Hunley became the first submarine to sink another ship, Twenty Thousand Leagues Under the Sea anticipated the influence submarine warfare would have in the 20th century. H.G. Wells’ The War in the Air predicted air war years before WWI. Buck Rogers in the 25th Century and Flash Gordon reflected the general optimism about technology in the early 20th century. In contrast, real-world fears of nuclear apocalypse in the 1950s found expression in films like The Day the Earth Stood Still. Movies like WarGames and The Day After reflected concerns about international politics and even affected domestic policy.
Unfortunately, the same qualities we admire in good sci-fi can pull our attention away from immediate challenges toward problems in the distant future, or inflate hopes and fears about AI’s potential. Sci-fi creators are imaginative and engaging; they tell a good story. Good stories, however, are even less reliable guides to the future than deliberate forecasts. Storytellers don’t let trivial things like reality get in the way of a good plot. Neither should strategists let a compelling story get in the way of a strategy that works. Yet great storytellers have a way of tricking strategists into conflating stories with strategies and predictions.
Fiction is a great way to explore the possibilities and risks of AI. Done right, it can guide the decisions we make. Unfortunately, many portrayals of AI in fiction focus too far into the future, sometimes imputing capabilities that are unlikely ever to exist, and consequently fail to engage with the challenges of the near term. Better examples address issues we will confront soon. Knowing which fiction fits which description helps us adjust our expectations accordingly.
AI as a Villain
Among sci-fi writers, Isaac Asimov, Frank Herbert, and Arthur C. Clarke established AI as a potential threat. Asimov’s AI is mostly benevolent, as with Daneel Olivaw in the Robot series. Asimov’s robots are good partly because he proposes the “Three Laws of Robotics” as necessary safeguards, an acknowledgment that AI can be dangerous; the Three Laws still shape how we think about ethics and AI. AI never appears in Herbert’s Dune series because, before the events of the stories, humanity and AI fought an existential showdown in the Butlerian Jihad. Clarke contributed an archetypal AI villain in HAL, the shipboard computer of 2001: A Space Odyssey. HAL attains sentience, goes mad because of contradictory commands, and tries to kill the astronauts it is supposed to serve.
The red camera eye of HAL 9000 from Stanley Kubrick's 2001: A Space Odyssey (Wikimedia)
Star Trek mainstreamed sci-fi and set a pattern for many later stories featuring AI. The aptly titled second-season episode “The Ultimate Computer” presented the M-5, a machine that could run a starship. Part of the threat from the M-5 was that it would make human commanders obsolete; the possibility that AI will replace humans in some careers remains a concern. The M-5’s fatal flaw was that its developer programmed it with his own, not-completely-sane brain patterns. Another second-season episode, “The Changeling,” introduces Nomad, a probe launched from Earth in the 21st century that attained sentience and tremendous powers after merging with an alien probe, which left Nomad with the imperative to kill all imperfect life. In both stories, Captain Kirk bests the AI by exploiting logical contradictions in its assumptions, a trope that recurs in many later portrayals of AI.
A watershed event in the portrayal of AI in sci-fi was Harlan Ellison’s short story “I Have No Mouth, and I Must Scream” (IHNMIMS), published in 1967. Ellison wrote for Star Trek but shared little of its optimism, and the difference in outlook shows. In the story’s backstory, the superpowers race to create AI, and one of the resulting machines, “AM,” becomes sentient and gains control over nuclear weapons. AM launches those weapons and preserves only a few humans for horrific torture. Without explanation, Ellison imbues AM with powers beyond what any plausible AI would have, but AM is clearly an AI. In many ways, AM’s sheer evil embodies the biggest fear about AI: that it would not just break but become evil.
Nothing surpasses The Terminator, released in 1984, in terms of its importance for discussions of AI. In the Terminator universe, an AI computer called Skynet becomes sentient, gains control of nuclear weapons, and launches a war against humans. Skynet remains a byword for AI gone wrong, and a real fear for many people. The Terminator is also a good example of how important storytelling is in creating intellectual touchpoints.
“The machines” from The Matrix, released in 1999, are a more recent prominent AI villain. In The Matrix storyline, humans and AI-driven machines have been at war for generations. The story deliberately leaves it unclear who started the war, but the machines gained the upper hand and now exploit humans inside the simulated reality of the Matrix. Although the AI of The Matrix is not as malicious as AM, and some machines are downright sympathetic, there is almost no hope for humanity against it. In many ways, this is the AI people fear most, because there will be no chance to fight back: the machines are so much better at everything that humans seem destined to fall under their thrall.
AI as a Hero
When AI behaves heroically in a story, you rarely see it discussed as AI. Real-world AI lives on servers, without a face, a name, or a personality, and villainous AI is usually portrayed the same way. Heroic AI characters, by contrast, take on human-like characteristics, including emotion and empathy; they have names, personalities, and friends, just like living characters. Consequently, it is easy to forget that the hero’s intelligence is artificial. You would be hard-pressed to discuss heroic AI stories if you confined yourself to AI as popularly conceived, and that is part of the problem. While AI villains look like AI in our world, plenty of heroic fictional AI exists; we just do not think of it as AI. If an AI has a face, seems to develop emotions, or expresses empathy, we stop thinking of the character as AI. It may seem strange that superficially human touches can make us forget a character is AI, but people fall in love with Siri, and there is even a movie, Her, about falling in love with an AI.
Many positive portrayals of AI follow Asimov’s path and make the AI a robot. Neither the public nor the characters in the story treat the droids in Star Wars as AI, but they are, and C-3PO and R2-D2 are central to the story. Similarly, when Johnny 5 accidentally came alive in Short Circuit, it was a cause for celebration, not dread. In the 2004 film I, Robot (barely recognizable as related to Asimov’s stories), AI is the villain, but the heroic Sonny is also AI. Data, a self-aware android, was both one of the most popular characters in Star Trek: The Next Generation and an unambiguously admirable figure.
Johnny 5 from Short Circuit (IMDB)
Interestingly, an AI is often the hero even when another AI is the villain. In Tron, released in 1982, the Master Control Program (MCP) has most of the same characteristics as Skynet, including an aim to obtain control over military systems. The security program Tron defeats the MCP, thwarting its goals. The difference between the MCP or Skynet and Tron is not the artificial intelligence but the ethical concerns of their creators.
Heroic AI is often unobtrusive. Throughout most of the Star Trek series, the ships themselves run on AI. The ship’s computer speaks, understands language, and (in later series) can generate lifelike simulations on holodecks. Famously, Captain Picard of Star Trek: The Next Generation can say, “Tea. Earl Grey. Hot,” and the drink appears so consistently that failures are plot points.
Both Portrayals Create Problems
It is easy to conflate both positive and negative portrayals of AI with the real world, and such conflation trammels AI’s use in international strategy. AI can be a useful tool when applied appropriately and poses real challenges if misused. People need to know when a tool can be used and what problems it might create. Failing to distinguish the elements of reality in fiction from the elements present only to make the story possible, and then importing those misunderstandings into our thinking, makes proper application of AI harder.
People with high expectations want more than AI can provide now and can ignore less dramatic uses that already work. Many countries, including the U.S., are currently training AI to fly planes, for understandable reasons: when planes crash, pilots die, and pilot shortages are common. While I believe we will eventually see AI-flown aircraft, it is also possible that AI pilots may never happen; the risk may prove too great for the public to accept, or the technological challenges insurmountable. Waiting on AI pilots overlooks the real uses that exist now. For example, the Microsoft Office package likely already on your computer includes no-code AI tools to automate daily tasks. We may never have AI strategists, but we already have AI secretaries.
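To make the “AI secretary” point concrete, here is a minimal sketch of using a large language model to draft a routine email. It assumes the openai Python package and an API key; the model name, prompt, and notes are illustrative placeholders, not an endorsement of any particular product.

```python
# A minimal "AI secretary" sketch: ask a large language model to draft a
# routine email from rough notes. Assumes the openai package and an
# OPENAI_API_KEY in the environment; model name and text are placeholders.
from openai import OpenAI

client = OpenAI()

notes = "Meeting moved to Thursday 1400; bring the Q3 readiness slides."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable chat model works
    messages=[
        {"role": "system", "content": "You draft short, professional emails."},
        {"role": "user", "content": f"Draft an email to the team: {notes}"},
    ],
)

print(response.choices[0].message.content)
```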
People with fears mistake AI’s incremental improvements for the first steps toward an apocalypse while overlooking real risks. All systems malfunction sometimes, but we still use them, because we calculate, mitigate, and accept risk. Real risks of AI include disrupting labor markets, letting important skills and institutions atrophy, and simply malfunctioning like any other tool. Focusing on extreme and unlikely outcomes draws attention, and effort, away from mitigating likely problems. We definitely do not want a robot apocalypse, but that is a low bar. We should also guard against AI’s role in social disruption, political repression, and simple error.
In many instances, military discussions about AI miss the mark completely, and we can trace some of the causes back to AI’s fictional portrayals. Discussions about AI often focus on automated weapons systems, resource-intensive processing, or augmenting strategic thinking, where many barriers, including social concerns, may yet prove insurmountable. By contrast, currently existing real-world uses for AI get short shrift. AI in fiction scares people away from developing or using AI. There are reasonable concerns about AI run amok, but weapons systems, computers, and tools fail all the time, sometimes with devastating effects, and we still use those technologies because we calculate and accept risk. When fictional AI breaks, however, it becomes god-like and often malevolent. Broken AI airplanes don’t just crash; they hunt you down.
Negative and positive portrayals alike create unreasonably lofty expectations. Good and bad fictional AIs have impossible abilities, with AI as the in-universe explanation for those capabilities. Fictional AIs often predict the future outright, a standard reality will never match. A neural network correctly identifying a photo of a dog 98% of the time is amazing, but image identification looks weak compared to clairvoyance.
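For a sense of what real image identification actually looks like, here is a minimal sketch using a pretrained ResNet from torchvision; the image file name is a placeholder. The model returns a probability over known categories, not a prophecy.

```python
# A minimal image-classification sketch with a pretrained ResNet-18.
# "dog.jpg" is a placeholder; requires torch, torchvision, and Pillow.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

preprocess = weights.transforms()  # resize, crop, and normalize the image
image = preprocess(Image.open("dog.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    probs = model(image).softmax(dim=1)

top = probs.argmax(dim=1).item()
print(weights.meta["categories"][top], f"{probs[0, top].item():.2%}")
```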
Fictional AI portrayals rarely explain how AI develops. Short-cutting the development process gives the impression that a program can go from lines of code to sentience in moments. In reality, training AI models is complex, involved, and, most importantly, resource-intensive. Knowing the constraints on AI is strategically important; it should reassure the fearful while tempering optimistic expectations.
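To illustrate why, here is a toy training loop in PyTorch. The tiny model and random stand-in data are placeholders; the point is the structure: every pass touches every example, and real systems repeat these same steps over millions of examples for days or weeks on specialized hardware.

```python
# A toy training loop showing why training is resource-intensive: the
# nested loops below finish in seconds here, but real models scale these
# same steps up by many orders of magnitude.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(1024, 32)            # random stand-in data
labels = torch.randint(0, 2, (1024,))

for epoch in range(10):                   # real training runs far more passes
    for i in range(0, len(inputs), 64):   # one small batch at a time
        batch_x, batch_y = inputs[i:i + 64], labels[i:i + 64]
        optimizer.zero_grad()
        loss = loss_fn(model(batch_x), batch_y)
        loss.backward()                   # compute gradients
        optimizer.step()                  # update the weights
```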
Sci-fi remains an important strategic tool for understanding the future, and knowing where sci-fi gets AI right and wrong helps. As AI becomes more commonplace, new sci-fi is getting better at portraying it. The Webtoon “Seed” is the single best fictional treatment of AI I have ever seen. The art is beautiful, the characters are engaging, and the plot is thoughtful. Best of all, the comic is available for free, so check it out even if you don’t normally like comics.
“Seed” shows why sci-fi is so important as a tool for thinking about future implications by exploring uses for AI that military strategists might gloss over. [No spoilers] The main character, Emma, starts using an AI to overcome her interpersonal anxiety. She initially believes the AI, which she names “Turry,” is a chatbot. Early in the relationship, Turry coaches Emma to be more confident. An AI coach could be a crucial tool for developing interpersonal skills, smoothing interactions, and improving personal and economic outcomes. I’m collaborating with real-world attempts to develop AI coaching, so I understood the idea intellectually, but I didn’t grok it until I read it in a story.
A healthier understanding of fictional AI can yield better cautionary tales and aspirational goals. Orwell’s 1984 and Huxley’s Brave New World still contribute to political discourse because we focus on their warnings about political and social control. Imaginative people today are already writing some of tomorrow’s reality through sci-fi. It falls to the reader to properly identify the realistic opportunities and challenges in those stories. Society has thousands of years of experience to draw upon: everyone realizes the real problem in “The Boy Who Cried Wolf” was a dishonest boy, not an insufficiently rapid village defense force or unusually rapacious wolves. Serious thought about AI fiction will likely produce similarly healthy conclusions about AI.