
21 January 2017

Swarming the Battlefield: Combat Evolves Toward Lethal Autonomous Weapons

LEVI MAXEY

War, often rationalized as an extension of policy by violent means, has always been a deeply human experience. It defines much of human history and, unsurprisingly, changes in technology accompany—and are often driven by—adaptations in the conduct of warfare. Battles are increasingly fought at a distance, progressing from the thrust of a spear to the click of a button, thousands of miles away, that launches a Hellfire missile from a Predator drone.

Already, decisions of war rely on fixed lines of code marching to the tune of predetermined algorithms, swaying the perceptions of military commanders and soldiers alike with their outputs. New advances in artificial intelligence and deep learning are pushing these limits further, from augmented command and control toward autonomous robots capable of their own decision-making.

But what has spurred the military development of these autonomous systems, and at what point does their advancement create dangers that outweigh the strategic opportunities they present?

In the abstract, the development of advanced artificial intelligence for autonomous military systems is intended—as part of the U.S. military's Third Offset strategy—to leverage strategic capabilities against near-peer adversaries and secure a technological advantage over potential enemies in 21st-century warfare. The level of autonomy within these systems is increasing, but the technology has yet to reach the stage of lethal autonomous weapons, where a human no longer has “meaningful control” over the application of lethal force because no human is selecting the target, making the decision to kill, or supervising the weapon’s operations.

The U.S. military has already begun developing semi-autonomous capabilities, as shown by the 103 Perdix micro-drones ejected from the flare dispensers of three F/A-18 Super Hornet fighter jets flying over California during an October test conducted by the Strategic Capabilities Office. The “fire and forget” robotic swarm relies on a shared, cognitive hive-mind, capable of autonomously swarming targets designated by a human operator. Such swarms could conduct reconnaissance, hunt enemy forces with small explosive charges, jam enemy communications, provide a self-healing communications network for remote operations behind enemy lines, and fool enemy radars while conducting persistent surveillance over a designated area. While larger drones like the Predator, Reaper, and Global Hawk have proved effective in targeted-killing campaigns against insurgents, they are prime targets for communications jamming and the anti-air missiles of advanced nation-states—interdiction tactics that are uneconomical against far cheaper and more numerous micro-drones. China has also begun developing its own swarms of larger, fixed-wing drones.
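
The Pentagon has not published Perdix's control software, so the following is only a toy sketch of the behavior described above: a decentralized swarm converging on one human-designated point, with every parameter an assumption made for illustration. The structural point survives the simplification, since the operator supplies a single target and the collective behavior emerges from local rules rather than individual commands.

```python
import math
import random

# A minimal sketch, not Perdix's actual software (which is unpublished).
# It models one property described above: a decentralized swarm that
# converges on a single human-designated point using only two local rules,
# attraction toward the shared target and short-range separation from
# neighbors. All constants are hypothetical.

TARGET = (100.0, 100.0)   # hypothetical operator-designated target point
SEPARATION = 5.0          # hypothetical minimum spacing between drones
STEP = 1.0                # hypothetical distance moved per control tick

def step(positions):
    """Advance every drone one tick toward the target while avoiding neighbors."""
    updated = []
    for i, (x, y) in enumerate(positions):
        # Attraction: unit vector pointing at the shared target.
        dx, dy = TARGET[0] - x, TARGET[1] - y
        dist = math.hypot(dx, dy) or 1.0
        vx, vy = dx / dist, dy / dist
        # Separation: push away from any neighbor closer than SEPARATION.
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            sx, sy = x - ox, y - oy
            d = math.hypot(sx, sy)
            if 0 < d < SEPARATION:
                vx += sx / d
                vy += sy / d
        norm = math.hypot(vx, vy) or 1.0
        updated.append((x + STEP * vx / norm, y + STEP * vy / norm))
    return updated

# 103 drones scattered at random, echoing the size of the October test.
swarm = [(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(103)]
for _ in range(200):
    swarm = step(swarm)

cx = sum(x for x, _ in swarm) / len(swarm)
cy = sum(y for _, y in swarm) / len(swarm)
print(f"swarm centroid after 200 ticks: ({cx:.1f}, {cy:.1f})")  # roughly (100, 100)
```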

Autonomous marine vehicles are likewise in development. The Office of Naval Research first demonstrated swarm boats in 2014, but a human had to tell the boats which vessels to swarm. Now the boats autonomously separate friend from foe, using images fed into the Control Architecture for Robotic Agent Command and Sensing (CARACaS) system. The Defense Advanced Research Projects Agency’s Sea Hunter is intended to eventually track enemy stealth submarines for up to 10,000 miles while using sonar and pattern recognition to autonomously navigate minefields.
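
The internals of CARACaS are likewise unpublished, but the decision it is described as making has a recognizable shape. The sketch below is a guess at that shape under stated assumptions: an incoming vessel's image embedding is compared against known-friendly signatures, and ambiguous matches are referred to a human operator rather than acted upon. The template names, thresholds, and tiny three-dimensional embeddings are all placeholders.

```python
import math

# A hedged sketch of a friend-or-foe gate; not the CARACaS pipeline, which
# is not public. Hypothetical embeddings stand in for image-derived features.

FRIENDLY_TEMPLATES = {                  # hypothetical signatures of friendly hulls
    "patrol_boat": [0.9, 0.1, 0.3],
    "supply_ship": [0.2, 0.8, 0.5],
}
FRIEND_THRESHOLD = 0.95                 # hypothetical high-confidence cutoff
UNKNOWN_THRESHOLD = 0.80                # hypothetical floor below which we call foe

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def classify(embedding):
    """Return 'friend', 'foe', or 'refer_to_operator' for a vessel embedding."""
    best = max(cosine(embedding, t) for t in FRIENDLY_TEMPLATES.values())
    if best >= FRIEND_THRESHOLD:
        return "friend"
    if best < UNKNOWN_THRESHOLD:
        return "foe"
    return "refer_to_operator"          # ambiguity stays with a human

print(classify([0.9, 0.1, 0.3]))        # friend (exact template match)
print(classify([0.1, 0.2, 0.9]))        # foe (resembles no friendly signature)
```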

Ground robotics is more difficult because of terrain, but the U.S. Army is currently planning semi-autonomous combat vehicles with leader-follower capability, GPS waypoint navigation, and obstacle detection and avoidance—all rigged with remote-controlled weapons. Russia, meanwhile, seeks to complement its special forces with an autonomously navigating miniature tank, designed to act as a mobile bomb against pre-selected targets when instructed.
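
The Army's leader-follower software is also unpublished, but the capability it names reduces to a simple control loop that can be sketched under stated assumptions: the follower consumes the breadcrumb trail of GPS waypoints dropped by the leader, steers toward the next one, and halts whenever its sensors report an obstacle ahead. The constants and coordinates here are invented for illustration.

```python
import math

# A minimal sketch of leader-follower waypoint navigation; the constants and
# the obstacle check are hypothetical, not the Army's actual system.

REACHED = 2.0   # hypothetical arrival radius, in meters
SPEED = 1.5     # hypothetical meters advanced per control tick

def follow(position, trail, obstacle_ahead):
    """One control tick: return (new_position, remaining_trail)."""
    if not trail or obstacle_ahead(position):
        return position, trail              # obstacle or done: stop and hold
    wx, wy = trail[0]
    dx, dy = wx - position[0], wy - position[1]
    dist = math.hypot(dx, dy)
    if dist <= REACHED:
        return position, trail[1:]          # waypoint reached; take the next
    step = min(SPEED, dist)
    return ((position[0] + step * dx / dist,
             position[1] + step * dy / dist), trail)

# Breadcrumbs dropped by a hypothetical leader vehicle, in local meters.
trail = [(0.0, 10.0), (5.0, 20.0), (5.0, 35.0)]
pos = (0.0, 0.0)
for _ in range(40):
    pos, trail = follow(pos, trail, obstacle_ahead=lambda p: False)
print(f"follower near ({pos[0]:.1f}, {pos[1]:.1f}); waypoints left: {len(trail)}")
```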

U.S. Defense Secretary Ashton Carter said in October that “when it comes to using autonomy in our weapons systems, we will always have a human being in decision-making about the use of force.” This echoes the Pentagon’s 2012 Directive 3000.09, which limits the military’s weapons to those “designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

However, some argue this policy will change. Doug Wise, the former Deputy Director of the Defense Intelligence Agency (DIA), argues that “these systems will not just be autonomous in terms of traditional flight, float or submersible operations, but artificially intelligent to the point of self-awareness, allowing them to instantaneously—without human intervention—adapt to new situations, threats, targets, and a multitude of tasks on the battlefield.” He maintains that the Third Offset will entail “a combatant commander with hundreds—if not thousands—of platforms all needing intelligence updates and all providing intelligence updates. The traditional human battlefield commander will be incapable of keeping pace.”

The prospect of lethal autonomous weapons systems has created justifiable concern internationally. Mary Wareham, the Advocacy Director at the Arms Division of Human Rights Watch and the Global Coordinator of the Campaign to Stop Killer Robots, argues that “retaining human control over use of force is a moral imperative and essential to promote compliance with international law, and ensure accountability.” Wareham goes on to suggest that “critics dismissing these concerns depend on speculative arguments about the future of technology and the false presumption that technical advances can address the many dangers posed by these future weapons.”

Such concerns have led some to call for a preemptive ban on the development of lethal autonomous weapons. Last month, nine House Democrats submitted a letter calling for a ban, and international progress was also made in raising awareness of the issue. Near the end of 2016, the Convention on Conventional Weapons formalized deliberations over autonomous weapons by establishing a Group of Governmental Experts that is expected to include more than 90 countries, various United Nations agencies, the International Committee of the Red Cross, and the Campaign to Stop Killer Robots.

It is likely that moral concerns over the sanctity of life, compliance with international law, and liability in the event of war crimes will be central to the discussions expected to take place this year. Countries must first, however, agree on a definition of what a lethal autonomous weapon actually is. Vague formulations of “meaningful” or “appropriate” human control over the use of force are insufficient.

Even advocates of lethal autonomous weapons agree it is too early to deploy them, as the technology is not yet selective enough to limit friendly and civilian casualties. Instead, they argue leaders should support the development of lethal autonomous weapons not only for their strategic capabilities, but also to strengthen their position in any future arms-control negotiations. At the same time, the development of lethal autonomous weapons by one country will undoubtedly push others to pursue them as well, instigating an arms race that could have a devastating impact on the conduct of war.

While Wareham believes “2017 could be a year for substantial progress in tackling these weapons systems”—asserting “a prohibition is now firmly within reach”—Wise disagrees. Instead, he foresees a future in which “the speed at which weapons systems will have to act when going from intelligence collection to kinetic action will no longer allow for determination by human beings.”

