By Lucas Bento
February 26, 2015
Autonomous weapons can be disruptive, but that doesn’t mean we should ban them altogether.
Technology can be both disruptive and transformative. It can upend the status quo, and it can transform how we think things can be done.
States around the world have shown great interest in developing autonomous robots for military purposes. These robots, also called “killer robots” in some circles, would be able to select and engage targets without human intervention. The implications of ceding human control to machine discretion have led some organizations to call for an outright ban on the technology. In a 2012 report entitled “Losing Humanity,” Human Rights Watch recommended the ratification of an international treaty prohibiting the “development, production, and use” of robotic weapons because they would be incapable of discriminating between combatants and civilians on the battlefield. Other coalitions, such as the Campaign to Stop Killer Robots, have formed around similar policy goals.
Although the prospect of delegating lethal powers to robots may evoke a dystopian vision of the future, that pessimism is arguably misplaced and rests on a misunderstanding of the potential of artificial intelligence. Whatever the moral merits or precautionary logic of a complete ban, total prohibition of autonomous robots is undesirable for several reasons.
First, a ban would be unworkable in practice because it ignores the complexities of international cooperation. Without ratification by the major military powers, a ban would be impossible to enforce. The temptation for states to cheat is also obvious: given the sensitive nature of military technology, states may, despite a ban, preemptively develop autonomous weapons just to stay in the race. This is a classic prisoner’s dilemma, and it explains why a state would likely refuse to cooperate with a ban and instead arm itself heavily.
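To see why defection dominates, consider a toy payoff matrix. The sketch below is purely illustrative; the payoff numbers are hypothetical assumptions, and only their relative ordering matters.

```python
# Illustrative prisoner's-dilemma payoffs for two states deciding whether
# to honor a ban ("comply") or covertly build autonomous weapons ("develop").
# The numbers are hypothetical; only their ordering matters.
PAYOFFS = {
    # (state_a_choice, state_b_choice): (payoff_a, payoff_b)
    ("comply", "comply"):   (3, 3),  # mutual restraint
    ("comply", "develop"):  (0, 5),  # A honors the ban and falls behind
    ("develop", "comply"):  (5, 0),  # A gains a decisive edge
    ("develop", "develop"): (1, 1),  # costly arms race
}

def best_response(opponent_choice: str) -> str:
    """Return the choice that maximizes state A's payoff given B's choice."""
    return max(("comply", "develop"),
               key=lambda mine: PAYOFFS[(mine, opponent_choice)][0])

# Developing is the best response whatever the other state does, so both
# states develop and end up worse off than under mutual compliance.
assert best_response("comply") == "develop"
assert best_response("develop") == "develop"
```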
The incentive to defect is especially acute today, when non-state actors, such as terrorist organizations, are increasingly powerful and active on the international stage. As technology becomes more affordable and technical skill more accessible, it is only a matter of time before non-state actors use autonomous weapons. U.S. authorities have already uncovered terrorist plots involving drone attacks, and the New York Police Department is taking these threats seriously.
In their forthcoming book The Future of Violence, Professor Gabriella Blum and Benjamin Wittes argue that advances in cyber technology and robotics could give more people than ever before access to potentially dangerous technologies. The trend toward open source software, which could make a killer robot’s software widely available, coupled with increasingly affordable and versatile hardware, makes that scenario all the more plausible.
Second, a toothless ban would undermine a more realistic alternative: to regulate the use and application of autonomous weapons through law and best practices. International humanitarian law (IHL) already regulates the means and methods of warfare. In order to comply with IHL, an autonomous robot would need to be able to distinguish between combatants and civilians and use force proportionally. These requirements are all within the ambit of technological possibility.
Unlike a human being, a robot can be programmed to comply with rules and codes free from fear, prejudice, and fatigue. It may thus be better able to process information, identify targets, and protect civilians. It could tap into big data to further improve its decisions. As an added accountability measure, robots could be required to carry cameras to monitor compliance with IHL.
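As a purely conceptual sketch of what such programmed constraints might look like, the snippet below gates an engagement decision on distinction and proportionality checks. Every name, threshold, and value here is a hypothetical illustration, not a description of any real system, and an actual implementation would be vastly more complex.

```python
from dataclasses import dataclass

@dataclass
class Target:
    """Hypothetical sensor assessment of a potential target."""
    combatant_confidence: float    # 0.0-1.0, from a classifier
    expected_civilian_harm: float  # estimated collateral harm
    military_advantage: float      # estimated military value

def may_engage(target: Target, distinction_threshold: float = 0.95) -> bool:
    """Refuse engagement unless IHL-style checks pass.

    Distinction: engage only if the system is highly confident the
    target is a combatant. Proportionality: refuse if expected
    civilian harm is excessive relative to the military advantage.
    """
    if target.combatant_confidence < distinction_threshold:
        return False  # fails the principle of distinction
    if target.expected_civilian_harm > target.military_advantage:
        return False  # fails the principle of proportionality
    return True  # both checks pass; a human could still veto
```

The point is not that legal compliance reduces to a few comparisons, but that rules of engagement can, in principle, be encoded as hard constraints the system cannot override.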
Terminator-like robots are unlikely to be deployed on the battlefield anytime soon. Autonomous robots will likely first be used on narrowly defined missions, limiting their scale and impact, and will be developed iteratively, making it easier to gather feedback and drive future improvements.
Finally, autonomous weapon systems can provide major benefits to international peace and security. Given the potential of machine learning, autonomous robots could be more precise, discriminating, and effective than other weapons. Alternative configurations of these systems could also be used in peacekeeping missions around the world: they may help safeguard humanitarian convoys, protect refugee camps, and assist in hostage rescue missions.
To ban now what we do not yet fully understand would stifle efforts to develop potentially beneficial technologies. Alan Turing, one of the pioneers of artificial intelligence, may have preferred a trial-and-error approach. “One must experiment with teaching [a] machine and see how well it learns … There is an obvious connection between this process and evolution,” wrote Turing in a 1950 paper. An encouraging prognosis from the man who built a code-breaking machine that helped the Allies win the Second World War.
These reasons should not be misunderstood as an apology for the militarization of international relations. Rather, they point to the transformative power of autonomous robots in the world’s quest for global security. Instead of pushing for a ban, let’s agree on acceptable standards of conduct.
Lucas Bento is a lawyer in New York specializing in international dispute resolution.