ADAM ELKUS
NOVEMBER 11, 2015
Deputy Secretary of Defense Robert Work recently announced that the core of the United States’ new “offset strategy” to counter emerging operational and technological threats from Russia and China will be a “centaur” approach of teaming humans with machine intelligence of varying levels of autonomy. The offset strategy, often described as a “competitive strategy,” aims to convince a putative group of future opponents that the cost of opposing the United States is too high, and to do so by teaming humans and machines. The new offset, Work argues, is necessary for the military to continue supporting larger U.S. policy goals imperiled by military-technical developments in competing states such as Russia and China. How do the fundamental components of Work’s effort, competitive strategy and human–machine teaming, fit together? What obstacles will Work and his colleagues need to surmount to make the new offset strategically successful?
Competitive Strategies and Human–Machine Teaming
Competitive strategy is a concept borrowed from the business theories of Michael Porter and adapted to defense contexts by Andrew Marshall and his disciples. It originates in Marshall’s belief that, because the United States and the Soviet Union were engaged in a long-term peacetime struggle that could erupt into war at any time, the United States should identify key asymmetries, areas of comparative advantage, and weaknesses both in its own long-term defense planning and in that of its peacetime competitors. As noted by other contributors in this series, the original offset strategy during the late Cold War was presented as a form of competitive strategy.
In the 1970s, American planners saw increasingly competent Soviet conventional forces as an asymmetric threat under the umbrella of mutual nuclear deterrence. If the Soviets were able to launch a successful surprise attack with conventional forces, NATO would likely have to respond with tactical nuclear weapons, which would in turn undermine the principle of flexible response. This fear ushered in a new period of thinking about non-nuclear strategic strike capabilities, which came to be known in the United States as the second offset strategy and in the USSR as another revolution in military affairs.
Competitive strategy thus had multiple possible payoffs. First, it promised to revitalize the prospect of conventional deterrence and defense against Soviet forces. Second, it aimed to broaden the spectrum of options for accomplishing higher U.S. objectives: planners did not want to be solely dependent on nuclear weapons to stop a hypothetical Soviet advance. Most importantly, competitive strategy could accomplish these aims by convincing the Soviets that they could not keep up with the rapidly growing American techno-war machine. Today, the offset strategy is mostly remembered positively, but some believed at the time that offset-era surveillance and strike systems actually posed a threat to peace and could not be controlled. This concern grew out of the way the offset increased the speed and complexity of combat and challenged human control through pervasive automation. Such concerns were neither new nor limited to conventional war technologies. They grew out of earlier fears about human and machine decision-making in nuclear command and control, and they paralleled fears that DARPA efforts to build artificial intelligence for automating anti-missile defense and conventional command and control would lead to an unintended catastrophe.
To explain why American decision-makers would even consider such automation, it helps to unpack the concept of the “centaur.” Work’s reference to “centaurs” was not to the mythical beast, but rather to centaur chess, a style of chess played by human–machine teams: the machine carries out brute-force search while the human evaluates the results and applies higher-level reasoning and intuition to the decision. “Centaur” work has in some shape or form occupied U.S. strategic decision-makers since World War I. Why? The problem that American defense decision-makers face today (and not for the first time) is that victory on modern battlefields depends on limited and fragile humans operating complex sociotechnical systems that leave little slack for error.
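To make that division of labor concrete, the sketch below shows the centaur pattern on a toy game rather than chess: the machine half exhaustively searches and scores every legal move, and the human half (stubbed out here as a simple function) chooses among the machine’s candidates. The game, the function names, and the scoring convention are illustrative assumptions of mine, not a description of centaur chess software or of any fielded military system.

```python
# A toy "centaur": the machine does brute-force search, the human applies judgment.
# The game is a Nim variant (take 1-3 stones; taking the last stone wins),
# chosen only because it is small enough to search exhaustively.
from functools import lru_cache

def moves(heap):
    """Legal moves: take 1, 2, or 3 stones, if that many remain."""
    return [n for n in (1, 2, 3) if n <= heap]

@lru_cache(maxsize=None)
def value(heap):
    """Minimax score for the player to move: +1 forced win, -1 forced loss."""
    if heap == 0:
        return -1  # no stones left: the previous player took the last one and won
    return max(-value(heap - m) for m in moves(heap))

def machine_search(heap):
    """Machine half of the centaur: exhaustively score every legal move."""
    return sorted(((m, -value(heap - m)) for m in moves(heap)),
                  key=lambda pair: pair[1], reverse=True)

def human_judgment(scored_moves):
    """Human half (a stub): choose among the machine's top-ranked candidates.
    A real operator would weigh context that the scores cannot capture."""
    top = scored_moves[0][1]
    candidates = [m for m, score in scored_moves if score == top]
    return candidates[0]

def centaur_move(heap):
    return human_judgment(machine_search(heap))

if __name__ == "__main__":
    heap = 10
    print("machine's scored moves:", machine_search(heap))  # [(2, 1), (1, -1), (3, -1)]
    print("centaur plays:", centaur_move(heap))             # 2
```

The point of the sketch is the seam between the two halves: the machine’s output is a ranked menu rather than a decision, and human judgment is applied only at that boundary. In a military setting, everything hard about Work’s proposal lives in that seam.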
Full automation of such systems is out of the question. Research has revealed the “irony of automation”: automating tasks to make work easier often creates new, unintended problems for the humans left in the loop. Instead, teaming humans and machines together might achieve the best of both worlds. Understanding the best organization, fusion, and direction for human–machine systems has preoccupied the U.S. defense-industrial complex for many years. We owe much of modern computing, for example, to computing pioneer J.C.R. Licklider’s desire for “man–computer symbiosis.” In 1960, Licklider famously defined his mission as follows:
The main aims are 1) to let computers facilitate formulative thinking as they now facilitate the solution of formulated problems, and 2) to enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs. In the anticipated symbiotic partnership, men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking. Preliminary analyses indicate that the symbiotic partnership will perform intellectual operations much more effectively than man alone can perform them.
This paragraph is legendary in computer science: it paved the way both for the computer used to write this article and for the one you are reading it on. Its military origins are decidedly less well known. Licklider sounds very much like Work when laying out the technical problem to be solved: he cited the difficulty of planning a battle on a slow and inefficient computer as one of the motivating applications for his man–computer symbiosis concept.
Assessing the New Offset
In sum, Work is trying to do two things: react to emerging strategic asymmetries and change the strategic balance by investing in a new set of “centaur” capabilities. Since modern warfare strains, challenges, and sometimes breaks human command and control over the flow of battle, human–machine teaming will both compensate for existing human flaws and maximize the potential for new and effective operational concepts and doctrine.
The obstacles Work and others face are technical and psychological in nature. Achieving effective human–machine collaboration is difficult, especially with adaptive machine-intelligence systems. Further, human cognitive limitations will inevitably arise in the control and interaction interface linking human controllers to the machine systems they direct. Two key variables will be how much progress in these areas of research and technology is needed to make Work’s plan succeed, and how much risk the military is willing to tolerate when deploying volatile combinations of naturally stupid humans and artificially intelligent but narrow and often brittle computer systems on the front lines.
Other challenges are institutional and political in nature. Will the Department of Defense have the “adoption capacity” to implement an innovation such as human–machine teaming? We shall see, but history suggests plenty of reasons why Work’s effort could be crushed by bureaucratic inertia. Given growing opposition to machine decision-making in war, the Department of Defense also faces legal, ethical, and public relations barriers to implementing its desired program of innovation. These issues will also complicate efforts to attract talented technical researchers and engineers.
It is also not clear how the notional opponents will react. Because of the wide convergence between the technologies needed to implement the new offset strategy and commercial efforts in artificial intelligence, robotics, and machine learning, the United States no longer possesses an effective monopoly on technical expertise in intelligent computing. If Work is correct that the technologies in question are viable and game-changing, then the intended targets of the new offset’s peacetime strategy may respond in kind, if they are not doing so already.
Finally, one other important risk is simply that the “human” side of the “human–machine team” gets lost. To succeed, the effort must not only create new operational concepts rooted in human–machine teaming, but also allow existing understandings, practices, and doctrines to be represented in a machine and processed cooperatively with human teammates in the course of operations. This is neither a purely technical nor a purely intellectual project: both parts of the human–machine “centaur” will have to learn how the other “thinks” and acts if the whole is to be operational.
An Uncertain Offset
Time will ultimately tell whether the new offset achieves its strategic aims. Given the novelty of the technologies involved and the complexity of the challenges they pose, researchers and analysts in strategy should partner with technical experts in the computing sciences to study these issues and their implications and risks for war and peace. With the new offset, America is making a gamble with ambiguous long-term consequences for great-power war and peace. Whether or not Work manages to surmount the obstacles to a new offset, the results of his initiative will surely affect all of us.
Adam Elkus is a PhD student in Computational Social Science at George Mason University and a columnist at War on the Rocks. He has published articles on defense, international security, and technology at CTOVision, The Atlantic, the West Point Combating Terrorism Center’s Sentinel, and Foreign Policy.