
7 May 2018

Five questions on autonomous weapons and the future of war

By: Kelsey Atherton

An RQ-8A Fire Scout Vertical Takeoff and Landing Tactical Unmanned Aerial Vehicle System takes off for a flight demonstration.

One of the looming questions for future wars is the role of autonomous systems. We are living in an era that blurs the line between science fiction and technological present. This is felt acutely in consumer electronics, but it also has tremendous implications for the future of war. Into this moment comes “Army of None: Autonomous Weapons and the Future of War,” a new book examining the present, past, and future of autonomous weapons by Paul Scharre, a former U.S. Army Ranger who now heads a program at the Center for a New American Security focused on technology and security.


In it, Scharre interrogates everything from our definition of autonomy (thermostats, properly considered, are an autonomous system) to arms control throughout history and the laws of war. To get a glimpse into that future, and its implications for both the physical world and cyberspace, C4ISRNET spoke with Scharre. The interview below has been lightly edited and condensed for clarity.

Gatling guns and robot puns

C4ISRNET: Why start the history of autonomous weapons with the Gatling gun, a forerunner to the machine gun?

Paul Scharre: It wasn’t my original intention to do so. My thought was that I needed to ground readers in how autonomy already worked in weapons, and that I needed to tell the story of how the technology was evolving. I figured that would help readers understand trend lines in autonomy towards the future. Plus, I often find that there can be a lot of misconceptions about how much autonomy already exists in today’s weapons.

Originally, I figured I would need to go back a few decades, maybe to the 1970s. Then ― actually to my surprise ― I realized that the first homing munitions dated to World War II. Thinking about it even more, I realized that the machine gun had some automation (although not intelligence), and that I should talk about that. And of course the story of the machine gun really begins in the American Civil War with the invention of the Gatling gun. Then when I read Julia Keller’s “Mr. Gatling’s Terrible Marvel,” I realized I had stumbled across pure gold with Gatling’s story.

Here was this incredible cautionary tale for autonomous weapons. His intention was to reduce the number of soldiers on the battlefield and save lives, and yet the exact opposite happened. His invention unleashed a whole new wave of destruction. With machine guns, whole generations of men could be mowed down. And this is precisely because of automation, because automating the firing increases the destructive power that one person can bring to bear. But that didn’t reduce the number of soldiers on the battlefield overall. It just increased the amount of destruction delivered. Even more importantly, the political and military leaders at the time of World War I failed to grasp the horrors that this new technology would bring. So the parallels to autonomous weapons today were just too incredible to ignore. Gatling’s story has all of the elements of arguments both for and against autonomous weapons today.

C4ISRNET: What’s your elevator pitch for the one thing you would want someone without any background in the subject to understand about autonomous weapons?

Scharre: We are quickly moving to an era where machines may be able to make life and death decisions on their own on the battlefield. The technology is coming, and in some ways is already here today. The real question is what we do with that technology. How do we use this technology to make warfare more humane and precise, without losing our humanity in the process?

Autonomy in space and also time

C4ISRNET: Much of the discussion of autonomy focuses on the time between a human command to attack and the time it takes for a weapon to select which target to hit. Mines, given indefinite time to select a target, are almost universally abhorred for this reason. Do you think weapons that process targeting decisions quickly, even from a vast array of permissible targets, will be more widely accepted than ones that take time to act?

Scharre: There are a few relevant timeframes here. The first is the time between when the human authorizes an attack and the time it takes for the weapon to carry out that attack. Obviously, the longer that this time is extended, the greater the risk that something happens that changes the context for the strike. This is the War of 1812 problem ― communications were so slow that the Battle of New Orleans was fought after the war had already ended. The shorter one can make this gap between the human decision and the machine’s action, the better. For a cruise missile today, it can be maybe 90 minutes or so. (For homing munitions that don’t loiter to search for targets over a wide area, this gap in time is really just the time of flight/travel to the target area.)

The second relevant time is the moment the weapon becomes inactive and is no longer able to engage targets. For some weapons, like a cruise missile, this is the same as the time it strikes a target. It doesn’t loiter. For autonomous weapons, though, the weapon may loiter for a window in time to search for targets. For the Israeli Harpy, this is about 2.5 hours right now. One could easily imagine weapons that search for longer periods in the future. Maybe eight hours. Maybe days, months, or even years. The window for landmines is years.

The length of this time window matters because the context for use could change in a way that might make the engagement no longer desirable. Perhaps the target moves into a new area where there are civilians present. Or civilians could enter the target area. The target itself could change its status (for example, if it’s a person, the person could surrender or be rendered hors de combat). Friendlies could enter the area. Or perhaps the war ends. For mines, of course the problem is that they are effectively unbounded in time. They can persist long after the war ends.

All of these issues revolve around this concern that you might delegate autonomy to a machine to perform a task, but then the context for use changes. So what are the bounds on that autonomy? It isn’t only about the task itself, but also how much freedom in time, space, etc. the machine has to perform the task.

I tend to think more comprehensively about autonomy in not just time, but also the freedom the weapon has in geographic space, target set, etc. To your question of whether weapons that act more quickly will be more permissible, I think the answer is that weapons that have less freedom (that is, less autonomy) will be more permissible. So a weapon that has a shorter time window in which to search for targets, a more constrained geography, a narrower target set, etc. will be more permissible (militaries will be more comfortable using it) than one that has more freedom in all of these dimensions, not just time. The closer and more direct the connection between the human decision and the machine’s action, the more acceptable the weapon will be.

I would hesitate to say that weapons that act quicker will be more acceptable, because super-fast reactions are one rationale for autonomy, and those can raise their own issues if they are *too* fast for humans to be involved in (e.g., high frequency stock trading). Superhuman reaction speeds are not a recipe for close human control either.
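A minimal sketch may help make this framing concrete: the bounds Scharre describes (time window, geographic area, target set) can be pictured as explicit checks a system would have to pass before any engagement. The Python below is purely illustrative and not drawn from the book or any real system; the class, its fields, and the numbers (apart from the roughly 2.5-hour Harpy loiter mentioned above) are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class AutonomyBounds:
    """Hypothetical limits on delegated autonomy (illustrative only)."""
    search_window: timedelta    # how long the weapon may loiter and search
    search_area_km2: float      # size of the area it may search within
    allowed_target_types: set   # narrow, pre-authorized target set


def engagement_permitted(bounds, authorized_at, now, area_km2, target_type):
    """Return True only if the engagement stays inside every bound.

    The narrower each bound, the closer the machine's action stays to the
    original human decision, which is the point made above.
    """
    within_time = (now - authorized_at) <= bounds.search_window
    within_space = area_km2 <= bounds.search_area_km2
    within_targets = target_type in bounds.allowed_target_types
    return within_time and within_space and within_targets


# Example: a loitering munition bounded roughly like the Harpy's ~2.5-hour window
bounds = AutonomyBounds(
    search_window=timedelta(hours=2.5),
    search_area_km2=100.0,
    allowed_target_types={"radar_emitter"},
)
print(engagement_permitted(
    bounds,
    authorized_at=datetime(2018, 5, 7, 12, 0),
    now=datetime(2018, 5, 7, 13, 30),
    area_km2=80.0,
    target_type="radar_emitter",
))  # True: still within the time, space, and target-set bounds
```

The design choice the sketch illustrates is simply that every dimension of freedom is an explicit, narrow constraint; loosening any one of them widens the gap between the human decision and the machine’s action.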

Code at war

C4ISRNET: Should we be more or less worried about autonomous weapons in cyberspace than we are about ones in meatspace?

Scharre: Cyberspace for sure. I’m deeply concerned about the intersection of autonomous weapons and cyberspace, both cyber vulnerabilities in physical autonomous weapons and autonomous cyber weapons. I think there are a number of reasons why autonomy in cyberspace is more alarming:

- It’s happening sooner. There is already a lot of autonomy in cyberspace. It is a digitally native environment. The challenges that you see in getting physical robotics to navigate the world don’t exist in cyberspace, which is a machine domain.

- Because it is a machine domain, there is the potential for interactions to happen at machine speed in a way that is much more challenging in physical space. And there are incentives to delegate autonomy for this reason that don’t exist as strongly in physical space. The physics of moving mass around means that there is more time for humans to react, whereas in cyberspace interactions can occur at superhuman speeds and some automated responses may be required.

- Cyber weapons have an inherent scalability that is much harder to achieve in physical space, and this scalability can allow for larger effects. Worms replicate themselves and spread across networks. Physical robots aren’t going to replicate themselves (at least not yet). So the consequences of a runaway piece of malware could be far worse than a runaway robotic system. Conficker infected millions of computers.

- A lot of really important things are connected to the internet. Like ... everything. If the internet ground to a halt, you wouldn’t even need to attack power grids or cause physical destruction. Just shutting off the internet would be crippling to a modern country.

- For whatever reason, the DoD policymakers I speak with seem less concerned about control problems and risk in cyberspace. Because it isn’t physical, I think they shrug it off. And I don’t think that’s right. It’s something that worries me quite a bit.

C4ISRNET: What’s the topic from your book you most want to see others in the national security community build upon in the future?

Scharre: With regard to the community of people working on autonomous weapons, I think there’s value in continuing to explore the role of the human in warfare. This is expressed in different ways ― meaningful human control, appropriate human judgment, etc. Whatever your favorite term, I think it’s valuable to have a continued discussion about the role of the human in warfare, and I would like to see more people begin to write about this topic and try to flesh it out. I think there are a few interesting angles from which to tackle the problem:

- Historical practice in military operations to date. What do we already do?

- The law. What, if any, minimum necessary standard of human involvement in lethal force decisions does the law require?

- Military professional ethics and command and control. What do military best practices suggest?

- Human psychology. If we want humans to feel morally responsible for killing in warfare ― if nothing else to avoid the moral hazard of people feeling totally detached and not responsible for what occurs in war ― then what does human psychology suggest about the degree of human involvement that is required?

With regard to the wider audience, I think we need to focus on the intersection of cyberspace and autonomous weapons: both the cybersecurity of autonomous weapons and autonomy in cyber tools/weapons. I would like to see more of that conversation.
