BY PAUL SCHARRE
Scores of countries are gathering at the United Nations this week to discuss lethal autonomous weapon systems – essentially, robots that would pick their own targets. This marks their fourth year of debate with little to show for it; the group does not even have a shared working definition of “autonomous weapon.” Meanwhile, the technology of autonomy and artificial intelligence is racing forward.
When the countries last met, in April 2016, DeepMind’s AlphaGo had just beaten world champion Go player Lee Sedol in a head-to-head match — an unprecedented feat for a computer. But just a few weeks ago, DeepMind published a paper on its new AlphaGo Zero, which taught itself the game without human-supplied training data and, after a mere three days of self-play, defeated the older program in 100 straight games. Between those two events, the world’s countries held no substantive meetings on autonomous weapons — the only exception being last year’s decision to bump the discussions up one rank in the diplomatic hierarchy.
A consortium of over 60 non-governmental organizations has called for an urgent ban on the development, production, and use of fully autonomous weapons, seeking to halt such work before it begins in earnest. Yet at this stage a legally binding treaty is almost inconceivable. The UN forum that the nations are using, the awkwardly named Convention on Certain Conventional Weapons, operates by consensus — meaning that although 19 nations have said they would back a ban, any one of the other 105 can veto it. Even advocates of a ban agree that the diplomatic process is “faltering financially, losing focus [and] lacks a goal.”
Four years ago, the first diplomatic discussions on autonomous weapons seemed more promising, with a sense that countries were ahead of the curve. Today, even as the public grows increasingly aware of the issues, and as self-driving cars pop up frequently in the daily news, energy for a ban seems to be waning. Notably, one recent open letter by AI and robotics company founders did not call for a ban. Rather, it simply asked the UN to “protect us from all these dangers.” Even as more people become aware of the problem, what to do about it seems less and less clear.
There are many reasons why a ban seems unlikely. The technology that would enable autonomous weapons is already ubiquitous. A reasonably competent programmer could build a DIY killer robot in their garage. Militaries are likely to see autonomy as highly useful, as it will give them the ability to operate machines with faster-than-human reaction times and in environments that lack communications, such as undersea. The risk to innocent civilians is unclear – it is certainly possible to envision circumstances in which self-targeting weapons would be better than people at some tasks, such as distinguishing combatants from civilians. And the most difficult problem of all is that autonomy advances incrementally, meaning there may be no clear bright line between the weapon systems of today and the fully autonomous weapons of the future.
So if not a ban, then what?
There has been some halting diplomatic progress over the past few years in exploring the role of humans in the use of lethal force in war. The idea is expressed in different ways: some call for “meaningful human control,” others say “appropriate human judgment” is necessary, and the 2016 diplomatic meetings concluded with states agreeing to continue discussing “appropriate human involvement with regard to lethal force,” a compromise term. None of these phrases is defined, but all express a general sentiment toward keeping humans involved in lethal force decisions on the battlefield, and there is growing interest in understanding whether there is some irreducible place for humans in those decisions. There are good reasons to think there is.
Machine intelligence today can perform well at tasks like object recognition, but it lacks the ability to understand context. Machines can identify the objects in a scene, but not knit them together into a coherent story. They can tell whether a person is holding a gun, but not whether that person is likely to be an insurgent or a farmer defending his property, a judgment that may hinge on surrounding clues. These limitations point to the value of human cognition, perhaps used in concert with machine intelligence.
Over time, as artificial intelligence advances, machines may overcome these and other limitations. Asking where human judgment is needed in war gets at more fundamental questions, though: Even if we had all of the technology we could imagine, what decisions would we want humans to make in war, and why? Are there some tasks in war that humans ought to do, not because machines cannot do them, but because machines should not? Some decisions in war have no clear right answer, such as weighing how many civilian deaths are acceptable in accomplishing a military objective. It may be preferable to know that humans have made these decisions, which involve weighing the value of human life, even if machines someday become capable of making them. Similarly, human moral responsibility for killing can be a check on the worst horrors of war. What would the consequences be if no one felt responsible for the killing that occurred in war?
Technology is rapidly bringing us to a fundamental threshold in humanity’s relationship with war. The institutions that humanity has for dealing with this challenge – slow, cumbersome diplomatic processes like the Convention on Certain Conventional Weapons – are imperfect. Nevertheless, the CCW is the best forum that exists for nations to come together to address this issue. If countries are going to get ahead of the technology, they’ll need to move faster.