
12 August 2015

Killer Robots: Programmed Slaughter?


As I have argued previously (“The One Thing Geeky Defense Analysts Never Talk About”), when we discuss defense policies we often leave out the most salient aspect of the conversation: new tactics, new strategies, and new military technologies have the singular purpose of more effectively threatening or killing other human beings.

Last week, I covered the story (“Is a Killer Robot Arms Race Inevitable?”) of an open letter signed by 1,000 artificial intelligence and robotics researchers, including Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis, and Professor Stephen Hawking, that calls for a ban on “offensive autonomous weapons beyond meaningful human control.”

The petition, while noble, will be difficult to honor given the dialectical nature of military competition. What is more immediately striking, however, is the euphemism-filled debate we are having (or rather not having) on the subject. Current discussions focus on technical details and the legal aspects surrounding the deployment of killer robots. For example, Kelley Sayler, in a piece for Defense One, makes the following argument:


On the contrary, autonomous weapons could shape a better world — one in which the fog of war thins, if ever so slightly. These systems are not subject to the fatigue, combat stress, and other factors that occasionally cloud human judgment. And if responsibly developed and properly constrained, they could ensure that missions are executed strictly according to the rules of engagement and the commander’s intent. A world with autonomous weapons could be one where the principles of international humanitarian law (IHL) are not only respected but also strengthened.

But this does not compute. For starters: killer robots will not and cannot strengthen international humanitarian law. Why? Because (nomen est omen) killer robots, like remotely controlled weapon systems, will make the killing of other human beings easier and consequently increase the likelihood of “superfluous injury or unnecessary suffering” during war.

The rule here is simple: the more you can kill without facing any consequences (i.e., without putting yourself in harm’s way), the messier any war is likely to become. Commanders will favor this technological panacea and order more “prophylactic” killing because, next to executing the mission, eliminating any foreseeable danger to the troops is a commanding officer’s first and foremost responsibility. If a mission can be carried out without risk of loss, autonomous weapons are a no-brainer, and that very ease entails the risk of abuse. (This hypothetical, of course, only holds as long as the other side has not caught up with you technologically.)

In addition, arguing that the negation of fatigue, combat stress, “and other factors that occasionally cloud human judgment” would reduce the fog of war (and consequently deliver a “cleaner” war) misses a crucial point. The wartime decisions that inflict the most harm and loss of life are not made by people “in the trenches” but by those in the rear (think of the My Lai Massacre versus “Operation Meetinghouse,” or the Malmedy massacre versus the notorious “Commissar Order” issued at the outset of the German invasion of the Soviet Union). Arguing that reducing human error would make war cleaner therefore shows a fundamental misunderstanding of the nature of warfare. As Lord Raglan says in the film The Charge of the Light Brigade: “It will be a sad day for England when her armies are officered by men who know too well what they are doing—it smacks of murder.”

It goes without saying that autonomous weapon systems can be employed for a host of military tasks apart from harming humans, yet any mission assigned to killer robots (again, nomen est omen) that is “executed strictly according to the rules of engagement and [a] commander’s intent” will still involve killing the enemy.

The Pentagon’s loose directive on the use of autonomous weapons, stating that they “shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force,” is also not very helpful in guaranteeing that such weapons will not be abused. Furthermore, judging the appropriate use of force from a distance during a battle has more often failed than succeeded in reducing human suffering (think of the château generals of the First World War).

My major point is the following: outsourcing direct contact with the enemy to death-dealing bots will shield us from the horrors of war and its human dimension (one of the key psychological barriers that make us reconsider applying force to solve a problem) and lure us into more conflict. “It is well that war is so terrible, or we should grow too fond of it,” Robert E. Lee noted during the U.S. Civil War, capturing a terrible truth about the alluring nature of battle.

Thus, while killing is often no longer an intimate experience on the modern field of battle, we should, paradoxically, preserve the awful unmediated face of war for the sake of our own humanity and our soldiers’ honor. There is honor in serving one’s country in non-combat functions and doing one’s duty in wartime, but it is the soldier who exposes himself to enemy fire and risks being killed or wounded who is revered and admired in warrior culture. Why? Because that is what we ideally mean by war: a (somewhat) fair trial of arms. Otherwise, the conflict would no longer fit our definition of war and could simply be renamed slaughter, which as a corollary would undermine the American soldier’s warrior ethos.

The problem with euphemisms in policy discussions is that they circumvent the truth. Euphemisms and engineered language employed by D.C. think tankers close to the Pentagon but distant from any actual battlefield are particularly problematic. Yet such language is the logical consequence of taking the American Way of War, whose primary feature is distance, to its extreme (see: “The Unique American Experience of War”).
