
6 December 2016

What Is “Military Artificial Intelligence”?


By Brad Allenby

A soldier stands in front of an unmanned aerial system during an official 2013 presentation of German and U.S. unmanned aerial systems.

We are in an era of existential fear of technology. Luminaries like Bill Joy, Elon Musk, and Stephen Hawking have warned against emerging technologies, especially artificial intelligence. Musk, for example, has warned that AI is “summoning the demon,” and Hawking has claimed that “[T]he development of full artificial intelligence could spell the end of the human race.” More than 20,000 AI researchers to date have signed an open letter arguing that “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.” Human Rights Watch, among others, has launched a campaign against “killer robots,” while the U.N. is increasingly active in reviewing the technology.

This is an important discussion with many complexities, not least of which is the conflation of two potentially very different technologies: artificial intelligence and autonomous weapons. The terms themselves can be ambiguous—what, for example, does autonomous mean with reference to robotic systems? After all, a landmine responds to appropriate vibrations without human intervention, but we don’t call it autonomous. A more modern example, Samsung’s SGR-A1 Sentry Gun, deployed along the DMZ between South and North Korea, has settings that allow it to shoot to kill without human intervention. Meanwhile, “autonomous” vehicle technology is making rapid progress, and Google’s deep-learning program AlphaGo beat world champion Lee Sedol at Go, a game that is both far more complex and less structured than chess.

These are extended techno-human cognitive systems capable of significant independent action (that is, action not predetermined by algorithms directly programmed by humans), but it is doubtful that anyone would attribute agency, free will, or consciousness to such systems. The extent to which a system may make decisions that are not directly programmed into it, and how those decision spaces are bounded to reduce any chance of unintended and inappropriate actions, is an important consideration. But this subtler issue is obfuscated, not clarified, by arguing about “autonomous” systems.
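To make that distinction concrete, here is a minimal, purely illustrative sketch in Python. Every name in it (Action, choose_action, the thresholds) is invented for this example and drawn from no real weapon system or library; the point is only that a system can select among actions a programmer never enumerated, while externally imposed constraints still bound what it is permitted to do, including deferring lethal decisions to a human.

    # Purely illustrative: a toy "bounded decision space," not any real system.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Action:
        name: str
        lethal: bool
        confidence: float  # the system's own estimate that the action is appropriate

    def choose_action(candidates: List[Action],
                      score: Callable[[Action], float]) -> Action:
        """Pick the highest-scoring candidate, then apply hard, human-imposed bounds."""
        best = max(candidates, key=score)
        if best.confidence < 0.9:
            # Bound 1: low-confidence decisions fall back to a safe no-op.
            return Action("hold", lethal=False, confidence=1.0)
        if best.lethal:
            # Bound 2: lethal actions are never executed autonomously here;
            # they are escalated for human authorization instead.
            return Action("request_human_authorization", lethal=False, confidence=1.0)
        return best

The scoring function here stands in for whatever learned or emergent behavior the system exhibits; the two “bounds” stand in for the constraints a designer or policymaker might insist on. The debate over “autonomy” is really a debate about how wide that permitted space should be.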

The biggest, perhaps fatal, ambiguity arises from the term that seems most solid—military.

Part of the confusion arises because the distance between the military and civilian spheres continues to grow, to the point where each is increasingly mysterious and inexplicable to the other. To understand how much the military has drifted away from mainstream civilian society, consider the demographics within Congress. In 1971, 72 percent of House members and 78 percent of senators were veterans. In the current 114th Congress, about 19 percent of members of the House and 25 percent of senators are veterans.

But it isn’t just a question of a divide between civilians and the military. Changing geopolitical strategy and rapid technological progress are also conspiring to render the term military increasingly incoherent. The United States—the one power to emerge unscathed from World War II and ascendant from the Cold War, and the one power that invests the greatest amount of resources in its battle-hardened military—possesses such overwhelming conventional military superiority over anyone else that potential adversaries instead embrace asymmetric warfare. Thus, Gen. Valery Gerasimov, chief of the General Staff of the Armed Forces of Russia, wrote in a seminal article in 2013 that “the very ‘rules of war’ have changed. The role of nonmilitary means of achieving political and strategic goals has ... in many cases ... exceeded the power of force of weapons in their effectiveness.” He also spoke of the need for “the broad use of political, economic, informational, humanitarian, and other nonmilitary measures,” with conventional force “resorted to ... primarily for the achievement of final success in the conflict.”

This form of conflict, sometimes called hybrid warfare, was subsequently used successfully by Russia in its invasion of Ukraine. There, the challenge was not just to destabilize eastern Ukraine and occupy Crimea, but to do so without providing justification for a military intervention by NATO and the United States. While some aspects of hybrid warfare are familiar from Russia’s Cold War strategies, its current effectiveness relies on mastery of modern information and communication technologies.

China has moved in a similar direction. In another definitive work, this one titled Unrestricted Warfare, Qiao Liang and Wang Xiangsui of the People’s Liberation Army note that war is “transcend[ing] all boundaries and limits,” so that


all the boundaries lying between the two worlds of war and non-war, of military and non-military, will be totally destroyed. Warfare is in the process of transcending the domains of soldiers, military units, and military affairs, and is increasingly becoming a matter for politicians, scientists and even bankers.

China’s more traditional aggressiveness in the South China Sea is balanced by a strong and continuing campaign of cyberattacks on American assets. To many in the United States, these are separate and independent challenges; in Chinese doctrine, they may well be part of an overall strategy of conflict against the ruling hegemonic power.

And then, of course, large areas of the world, characterized by weak or failing states, are locked in what has been termed “neomedievalism” or, in Sean McFate’s words, “durable disorder”: no stable source of authority and a constant state of low-level violence in which conventional military technology, while sometimes necessary, is inadequate for long-term success (or for conflict management, which is probably the best one can get). In these areas, greed, cultural values, and identity fuel conflict.

On all fronts, potential adversaries are redefining conflict, treating entire cultural landscapes as battlespace and using weapons appropriate to each domain. Financial attacks are used to sap the strength of adversaries: Retired Army Gen. Keith Alexander, former National Security Agency director and CYBERCOM commander, has said that the ongoing cybertheft of American industrial data, much of it by China, is “the greatest transfer of wealth in history.” Russia’s apparent release of stolen emails via WikiLeaks is targeted at U.S. presidential politics, perhaps less to influence the electoral process than to portray American democracy as chaotic and degenerate. What all of this means is that the term military, while still descriptive of a limited set of activities, is no longer adequate to describe the overall framework of state-to-state or civilizational conflict, or the technologies with which those conflicts are fought.

And so it follows that “military AI,” as distinct from any other form of AI, is also an increasingly incoherent term. Is a new technique like the blockchain “military AI”? Is attacking Sony or Facebook or Yahoo and stealing corporate data or emails a “military” activity? Is destabilizing regions or states through sophisticated postmodernist disinformation campaigns a “military” activity? In the Napoleonic era, “military” was indeed a clearly separate domain; today, it is a smaller part of a much more complex pattern. And while some technologies are clearly “military”—tanks, for example, or howitzers—it is less apparent how to categorize most emerging technologies, from biotech to ICT.

Any legal regime that even implicitly relies on already obsolete definitions of military will be outdated and incoherent. Attempts to limit “military” AI will fail if most of the systems used in modern conflict, such as those increasingly powering integrated cyberattacks on U.S. infrastructure and data systems and the theft of intellectual property, are not associated with traditional military technologies, strategies, or tactics. Moreover, restrictions or bans that affect only “military” technologies will necessarily have a differential impact on the power of the state that relies most heavily on those technologies—in this case the United States—and far less impact on powers that increasingly rely on asymmetric technologies and strategies that are not understood as “military.” Whether this differential impact on the United States is intended by the participants in such policy dialogues is unclear and somewhat irrelevant.

The debates around lethal autonomous robots and military AI are more than simply disagreements about poorly defined technology categories. Instead, they illustrate that traditional ideas about conflict, power, and security may be obsolete or even dysfunctional.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.
