LETHAL AUTONOMOUS WEAPONS
-- Maj Gen P K Mallick, VSM(Retd)
Introduction
Lethal autonomous weapons (LAWs) are a type of military robot designed to select and attack military targets (people, installations) without intervention by a human operator. LAWs are also called lethal autonomous weapon systems (LAWS), lethal autonomous robots (LAR), robotic weapons, or killer robots. LAWs may operate in the air, on land, on water, under water, or in space. The autonomy of current systems (as of 2016) is restricted in the sense that a human gives the final command to attack, though there are exceptions with certain "defensive" systems.
They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.
In May 2014, the first Meeting of Experts on Lethal Autonomous Weapons Systems was held at the United Nations in Geneva. The participants recognized the potential of autonomous weapon systems (AWS) to radically alter the nature of war, as well as the variety of ethical dilemmas such weapon systems raise. Worldwide concern has been growing about the idea of developing weapon systems that take human beings “out of the loop,” though the precise nature of the ethical challenges to developing such systems, and even their possible ethical benefits, have not yet been clearly identified.
Issues
The idea of fully autonomous weapons systems raises a host of intersecting philosophical, psychological and legal issues. For example, it sharply raises the question of whether moral decision making by human beings involves an intuitive, non-algorithmic capacity that is unlikely to be captured by even the most sophisticated of computers. Is this intuitive moral perceptiveness on the part of human beings ethically desirable? Does the automaticity of a series of actions make individual actions in the series easier to justify, as arguably is the case with the execution of threats in a mutually assured destruction scenario? Or should the legitimate exercise of deadly force always require “meaningful human control”? If the latter is correct, what should be the nature and extent of human oversight over an AWS?
Critics contend that “killer robots,” by their very nature, violate the ethics and laws of war. Robots cannot discriminate between combatants and civilians, because we cannot program a computer with a specification of what a civilian is. Nor is there any way for a robot to make the proportionality decisions required by International Humanitarian Law: it takes a specifically human form of judgment to decide whether a certain number of civilian casualties and a certain amount of damage to property are proportional to the military advantage gained.
Such debates have a philosophical dimension: robots cannot die, and so cannot understand the existential gravity of the decision to kill. We cannot hold robots accountable for their actions. Who then do we hold to account? The human commander? If the robot malfunctions or makes a terrible decision, who is to be blamed? The programmer? The manufacturer? The policymakers?
On the other side of this debate are those who argue that robots will make war less destructive, less risky and more discriminate. Human perception and judgment are inherently limited and biased, and war is far too complex for any human mind to grasp. Some of the worst atrocities in war are due to human weakness. Emotions like fear, anger, and hatred or mere exhaustion can easily cloud a soldier’s judgment on the battlefield. From this point of view, the objection that robots will never be able to think and act like humans is anthropocentric and misses the point.
The question for advocates of lethal autonomous weapons is not whether the technology can mimic human psychology, but whether we can design, program, and deploy robots to perform ethically as well, or better, than humans do under similar circumstances.
The focus on lethal machine autonomy obscures how autonomous technology concentrates immense firepower in the hands of a few human beings. The crucial issue here is not that of lethal machine autonomy, but of the capacity for humans to exert meaningful autonomy in the lethal human-machine interactions that will define future wars.
Lethal autonomous weapons will greatly expand the potential scope of violence, at the very moment when the complexity and speed of war have moved beyond the human ability to follow. This growing gap between the immense human capacity for violence and a limited capacity for judgment is perhaps the most dangerous implication of such technology.
Mapping the Development of Autonomy in Weapon Systems
The Stockholm International Peace Research Institute (SIPRI) recently published the report Mapping the Development of Autonomy in Weapon Systems, which presents the key findings and recommendations from a one-year mapping study on the development of autonomy in weapon systems.
What are the technological foundations of autonomy?
· Autonomy has many definitions and interpretations, but is generally understood to be the ability of a machine to perform an intended task without human intervention, using the interaction of its sensors and computer programming with the environment.
· Autonomy relies on a diverse range of technologies, but primarily on software. The feasibility of autonomy depends on the ability of software developers to formulate an intended task as a mathematical problem with a solution, and on the possibility of mapping or modelling the operating environment in advance (a minimal illustrative sketch follows this list).
· Autonomy can be created or improved by machine learning. The use of machine learning in weapon systems is still experimental, as it continues to pose fundamental problems regarding predictability.
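To make this concrete, the following is a minimal, purely illustrative Python sketch; the sensor names, thresholds and functions are invented assumptions and are not drawn from any real system. It shows how an intended task only becomes automatable once it has been reduced to explicit conditions over sensor inputs. A machine-learning variant would replace the hand-written rules with a learned function, which is exactly where the predictability concerns noted above arise.

```python
# Purely illustrative: an "intended task" reduced to a mathematical problem
# over sensor inputs. All names and thresholds are hypothetical.

def classify_contact(radar_cross_section: float, speed_mps: float) -> str:
    """Toy rule-based classifier mapping two sensor measurements to a label.

    The task ("flag fast, large contacts") is only automatable because it has
    been expressed as explicit numerical conditions modelled in advance.
    """
    if radar_cross_section > 5.0 and speed_mps > 250.0:
        return "probable-threat"
    return "unknown"


def control_loop(sensor_stream):
    """Autonomy in the report's sense: sense, compute, act, without a human."""
    for reading in sensor_stream:
        yield {"contact": reading["id"],
               "assessment": classify_contact(reading["rcs"], reading["speed"])}


if __name__ == "__main__":
    demo = [{"id": 1, "rcs": 7.2, "speed": 310.0},
            {"id": 2, "rcs": 0.4, "speed": 12.0}]
    for decision in control_loop(demo):
        print(decision)
```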
What is the state of autonomy in weapon systems?
· Autonomy is already used to support various capabilities in weapon systems, including mobility, targeting, intelligence, interoperability and health management.
· Automated target recognition (ATR) systems, the technology that enables weapon systems to acquire targets autonomously, have existed since the 1970s. ATR systems still have limited perceptual and decision-making intelligence, and their performance deteriorates rapidly as operating environments become more cluttered and weather conditions worsen.
· Existing weapon systems that can acquire and engage targets autonomously are mostly defensive systems. These are operated under human supervision and are intended to fire autonomously only in situations where the time of engagement is deemed too short for humans to respond (see the illustrative sketch after this list).
· Loitering weapons are the only ‘offensive’ type of weapon system that is known to be capable of acquiring and engaging targets autonomously. The loitering time and geographical areas of deployment, as well as the category of targets they can attack, are determined in advance by humans.
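The supervision logic described in the bullets above can be pictured with a small sketch, again in purely illustrative Python; the thresholds, signals and function names are invented and correspond to no real system. The idea is that the system fires without operator approval only when its target-recognition confidence is high and the engagement window is shorter than a human could plausibly react within; otherwise it holds for a human decision.

```python
# Purely illustrative sketch of human-supervised defensive autonomy.
# All values are hypothetical.

HUMAN_REACTION_WINDOW_S = 4.0   # assumed minimum time a human needs to decide
CONFIDENCE_THRESHOLD = 0.95     # assumed ATR confidence required to engage

def engagement_decision(atr_confidence: float,
                        time_to_impact_s: float,
                        operator_approval: bool) -> str:
    """Return the action a supervised defensive system would take."""
    if atr_confidence < CONFIDENCE_THRESHOLD:
        return "track-only"                  # recognition not confident enough
    if time_to_impact_s < HUMAN_REACTION_WINDOW_S:
        return "engage-autonomously"         # too fast for a human decision
    return "engage" if operator_approval else "hold-for-operator"

# A fast, high-confidence inbound threat is engaged autonomously,
# while a slower one waits for the operator.
print(engagement_decision(0.99, 2.5, operator_approval=False))   # engage-autonomously
print(engagement_decision(0.99, 30.0, operator_approval=False))  # hold-for-operator
```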
What are the drivers of, and obstacles to, the development of autonomy in weapon systems?
The main drivers are:
· Strategic. The United States recently cited autonomy as a cornerstone of its strategic capability calculations and military modernization plans. This seems to have triggered reactions from other major military powers, notably Russia and China.
· Operational. Military planners believe that autonomy enables weapon systems to achieve greater speed, accuracy, persistence, reach and coordination on the battlefield.
· Economic. Autonomy is believed to provide opportunities for reducing the operating costs of weapon systems, specifically through a more efficient use of manpower.
The main obstacles are:
· Technological. Autonomous systems need to be more adaptive to operate safely and reliably in complex, dynamic and adversarial environments; new validation and verification procedures must be developed for systems that are adaptive or capable of learning.
· Institutional resistance. Military personnel often lack trust in the safety and reliability of autonomous systems; some military professionals see the development of certain autonomous capabilities as a direct threat to their professional ethos or incompatible with the operational paradigms they are used to.
· Legal. International law includes a number of obligations that restrict the use of autonomous targeting capabilities. It also requires military command to maintain, in most circumstances, some form of human control or oversight over the weapon system’s behaviour.
· Normative. There are increasing normative pressures from civil society against the use of autonomy for targeting decisions, which makes the development of autonomous weapon systems a potentially politically sensitive issue for militaries and governments.
· Economic. There are limits to what can be afforded by national armed forces, and the defence acquisition systems in most arms-producing countries remain ill-suited to the development of autonomy.
Where are the relevant innovations taking place?
· At the basic science and technology level, advances in machine autonomy derive primarily from research efforts in three disciplines: artificial intelligence (AI), robotics and control theory.
· The United States is the country that has demonstrated the most visible, articulated and perhaps successful military research and development (R&D) efforts on autonomy. China and the majority of the nine other largest arms-producing countries have identified AI and robotics as important R&D areas. Several of these countries are tentatively following in the US’s footsteps and looking to conduct R&D projects focused on autonomy.
· The civilian industry leads innovation in autonomous technologies. The most influential players are major information technology companies such as Alphabet (Google), Amazon and Baidu, and large automotive manufacturers (e.g. Toyota) that have moved into the self-driving car business.
· Traditional arms producers are certainly involved in the development of autonomous technologies but the amount of resources that these companies can allocate to R&D is far less than that mobilized by large commercial entities in the civilian sector. However, the role of defence companies remains crucial, because commercial autonomous technologies can rarely be adopted by the military without modifications and companies in the civilian sector often have little interest in pursuing military contracts.
The changing character of war
Automated weapons are not merely new tools of war; they also change the very conditions of war itself. Innovations in robotics and artificial intelligence open up new possibilities, which will to some extent dictate the goals and strategies of future military operations. The dispersion of military power, made possible by autonomous technology, is already transforming military thinking. War is becoming less like a traditional conflict between clearly defined centers of power, and more like a global network of diffuse battlefields and highly mobile and dispersed firepower, further eroding the conventional distinction between “home front” and “battlefront.” The new swarm technology will contribute to this development, with small, fully autonomous drones dropping out of a “mothership” and returning hours later. Such technology promises to enhance military intelligence capacities, but once in existence there is nothing to stop the military from arming the drones. Imagine a swarm of drones, equipped with biometric data and orders to find and kill specific individuals, groups of individuals, or everyone in a designated area. Swarm technology, promoted by the industry as relatively inexpensive, could also fall into the hands of non-state actors.
Recent Developments
In August 2017, more than 100 of the world’s leading robotics and AI pioneers called on the UN to ban the development and use of killer robots. The open letter, signed by Tesla’s chief executive, Elon Musk, and Mustafa Suleyman, co-founder of Alphabet’s DeepMind AI unit, warned that an urgent ban was needed to prevent a “third revolution in warfare”, after gunpowder and nuclear arms. So far, 19 countries have called for a ban, including Argentina, Egypt and Pakistan.
Recently, academics, non-governmental organisations and representatives of over 80 governments gathered at the Palais des Nations for a decisive meeting on the future of LAWS. Organised under the Convention on Certain Conventional Weapons (CCW), the meeting was chaired by Amandeep Gill, permanent representative of India to the Conference on Disarmament. For countries that are hard at work nurturing the integration of technology into their domestic economies, the weaponisation of artificial intelligence represents yet another chasm that will require significant resources and immense R&D to overcome. Countries that are relatively ahead in the game are concerned with retaining their strategic advantage while not inadvertently kick-starting another global arms race. A loose coalition of technologists, academics and non-governmental organisations, gathered under the ominous-sounding ‘Campaign to Stop Killer Robots’, has instead cited the inadequacy of protections under international humanitarian law and the trigger-happy tendencies of technologically advanced nations to call for a pre-emptive ban on autonomous weapons.
Other countries, primarily ones that have developed and deployed weapons with semi-autonomous capabilities, have refused to endorse a ban. The US, which recently launched the ‘Sea Hunter’, an autonomous unmanned surface vessel capable of operating at sea for months on its own, clarified that it will continue to promote innovation while keeping safety at the forefront. Similarly, Germany, which has been fielding the automated NBS Mantis gun for forward base protection, called a ban premature. Russia echoed this position, warning against alarmist approaches that were “cerebral and detached from reality”.
Many AI experts gathered at the meeting seemed to share the view that the threat associated with uncontrollable LAWS is far more severe than the possible benefit of more accurate targeting that may reduce civilian casualties. One expert called LAWS the next weapons of mass destruction, owing to the ability of a single human operator to launch a disproportionately large number of lethal weapons.
A video depicting autonomous, explosives-carrying microdrones wreaking havoc was screened at a side event organised by the Campaign to Stop Killer Robots. The film portrays a brutal future: a military firm unveils a tiny drone that hunts and kills with ruthless efficiency, but when the technology falls into the wrong hands, no one is safe. Politicians are cut down in broad daylight. The machines descend on a lecture hall and spot activists, who are swiftly dispatched with an explosive to the head.
The short, disturbing film is the latest attempt by campaigners and concerned scientists to highlight the dangers of developing autonomous weapons that can find, track and fire on targets without human supervision. They warn that a preemptive ban on the technology is urgently needed to prevent terrible new weapons of mass destruction.
The video, produced by Stuart Russell and the Future of Life Institute, has been criticised by others in the scientific community as sensationalist; they argue that screening it at a gathering whose mandate is to separate fact from apocalyptic fiction is unhelpful.
Between these two ends of the spectrum, the CCW has managed to move the debate forward on issues relating to the use of autonomous weapons: there is broad agreement that a minimum amount of human control must be retained and that the use of these systems must be governed by IHL.
India, for its part, advocated balancing the lethality of these weapons with military necessity, adopting a wait-and-watch approach to how the conversation evolves.
The question of human control, which has been discussed at length both at the GGE and in the conversations leading up to it, has yielded at least one conclusion: at a bare minimum, humans must retain operational control over these weapons, for instance the ability to cancel an attack on realising that civilian lives may be endangered (a minimal illustrative sketch follows below). However, the particulars remain elusive, owing to the lack of uniformity and specificity in the language used. While many countries agree on the need for ‘meaningful human control,’ few have offered clarification on what ‘meaningful control’ entails. In an attempt to de-mystify these understandings, the US has offered ‘appropriate level of human judgement over the use of force’ as a more accurate framing of the issue.
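One way to picture this bare-minimum notion of operational control is the sketch below, in purely illustrative Python; the signals, timings and function names are invented assumptions, not taken from any doctrine or system. The engagement remains interruptible, by the operator or by a civilian-presence check, right up to the moment of release.

```python
# Purely illustrative: an engagement that stays cancellable until release.
# All names, timings and signals are hypothetical.

import time

def supervised_engagement(abort_requested, civilians_detected,
                          release_delay_s: float = 10.0) -> str:
    """Poll for an operator abort or a civilian-presence flag until release."""
    deadline = time.monotonic() + release_delay_s
    while time.monotonic() < deadline:
        if abort_requested():
            return "aborted-by-operator"
        if civilians_detected():
            return "aborted-civilian-risk"
        time.sleep(0.1)
    return "released"

# Example: an operator abort arriving before the deadline cancels the attack.
print(supervised_engagement(abort_requested=lambda: True,
                            civilians_detected=lambda: False,
                            release_delay_s=1.0))
```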
Nonetheless, many issues remained unresolved even at the conclusion of the GGE. Technical questions around the operational risks associated with LAWS remain unanswered. Will technologically sophisticated weapons be vulnerable to cyber-attacks that can hijack control? How will the deployment of LAWS change the strategic balance between nations? Are weapons review processes under Article 36 of Additional Protocol I of the Geneva Conventions adequate to ensure that LAWS comply with international humanitarian law? These and many other questions were highlighted by the Chair’s report and remain to be resolved by the next iteration of the GGE in 2018.
As the chair, Gill, put it, the distance between the attacker and the target has been increasing since the beginning of time. Have we finally arrived at a point where that distance is unacceptable?