By Mark Gilchrist
Military theorists should be circumspect when asserting that revolutionary changes in war are afoot based on the potential of emergent technology alone. Change is certainly part of the enduring nature of military competition, and it is hard to dispute that technological advancements are reshaping what might be possible in the conduct of warfare. However, advanced militaries have yet to master the integration complexities posed by the last generation of informational change, let alone positioned themselves to exploit the profound challenges posed by the next wave.[1] Existing integration challenges are likely to be magnified in the future due to the absence of cohesive strategy and nested operational concepts designed to guide the military application of emergent technology for the future fight. As this article seeks to highlight, absent a clear understanding of which military problems emergent technologies are required to solve, there is, perhaps, too much confidence in their ability to reshape the character of the next war by enabling decisive battlefield advantage. More troublingly, predictions about machine-dominated warfare risk obscuring the human cost implicit in the use of violence to achieve a political objective.
This article examines the integration challenge that continues to limit the military potential of available technology. It will then look specifically at why militaries should be cautious about the role artificial intelligence and autonomous systems are expected to play in future warfare. Artificial Intelligence has many potential applications; however, this article focuses on the areas that will ultimately determine how a military response is generated. The delivery of the right information, to the right decision-maker, at the right time, and the ability for all elements of the force to react to that decision is fundamental to success in battle. Therefore, command and control systems and the implications of artificial intelligence for the already stressed network architectures that enable them should be of paramount importance when considering how to generate lethal effect in the information age.
At the heart of this article is the notion that technological change must be understood from a historical perspective.[2] This ensures that the allure of technology’s potential does not distract from the crafting of policy, strategy, and operational concepts required to navigate the very human challenges facing the national security community.
THE CHALLENGE
Much contemporary professional discourse is dedicated to how artificial intelligence, big data, advanced robotic and autonomous systems, among others, will define the character of the next war: to an extent this proposition is true. Missing, however, is a clear conception of just how these advances will interact in any meaningful way –– particularly with regard to the challenges of effective integration with existing systems and network architectures, as well as the impact on military command and control. As Franz-Stefan Gady argues:
In today’s tech-crazed world, where many of us see technological solutions as a panacea to just about anything, defense analysts have a tendency to overestimate the impact of technological changes and new innovations on warfare.
The character of war is emergent, and best understood through the crucible of combat.[3] As Lawrence Freedman has recently reminded us, rarely have even the most brilliant analysts accurately predicted what the character of the next war will look like –– particularly when the interaction of new technologies actually nullified anticipated advantages. Military futurists should recall that the complex and largely unpredictable interplay between belligerents ensures war’s character is subtly adjusted based on the adaptation of each side to the objectives, actions, and capabilities of the other. As such, the current focus on technical capabilities must not detract from developing policy, strategy, and operational concepts that guide how an evolving military instrument can be used as an effective tool.
THE CRITICAL CHALLENGE ADVANCED MILITARIES FACE IS NOT PREDICTING HOW EMERGENT TECHNOLOGY WILL DELIVER DECISIVE ADVANTAGE. RATHER, IT IS THE RESHAPING OF LARGE MILITARY BUREAUCRACIES SO THAT THEY ARE BEST POSTURED TO INTEGRATE THE CURRENTLY UNKNOWABLE TECHNOLOGICAL POTENTIAL.
Former U.S. Deputy Secretary of Defense, Bob Work, stated on several occasions: “The Third-Offset is not about technology as much as the operational concepts and organization constructs that will shape the way we integrate and use the technology.”
This statement highlights the tendency to focus on technology (means) rather than the strategy, concepts (ways) and political objectives (ends) it must enable. The critical challenge advanced militaries face is not predicting how emergent technology will deliver decisive advantage. Rather, it is the reshaping of large military bureaucracies so that they are best postured to integrate the currently unknowable technological potential to enhance what the future fighting force can deliver in support of policy aims and objectives.
HOW CAN ARTIFICIALLY INTELLIGENT, DEEP LEARNING, AUTONOMOUS SYSTEMS WORK WITH (OR WITHOUT) HUMANS TO ACHIEVE THE MILITARY OBJECTIVES THAT CREATE THE CONDITIONS ULTIMATELY REQUIRED FOR A POLITICAL RESOLUTION OF CONFLICT?
Rather than focusing on advances to discrete capabilities, the organizational focus must be on how these technologies might work together in a coherent manner across a range of possible future warfare scenarios. Instead of blithely accepting that certain emerging technologies will come to dominate future war, advanced militaries must question: how can artificially intelligent, deep learning, autonomous systems work with (or without) humans to achieve the military objectives that create the conditions ultimately required for a political resolution of conflict? This question should lead to an understanding of possible strategic options and the development of supporting operational concepts. These in turn should drive agile capability development, avoiding the stove-piped procurement that has previously created unforeseen integration challenges.
Integration must be understood as the means by which the various combat systems within a joint force are linked together to accelerate the delivery of military effects. The greatest challenge to an integrated joint force is the focus on platforms that deliver effects, rather than the information and communications architecture that links these capabilities in a meaningful way. While integration between platforms and across systems remains a secondary focus for capability acquisition — rather than the core upon which all capability decisions are based — network architectures will struggle to maximise each platform’s contribution to a joint force’s combat power.
Of particular concern is the lack of an appropriate information/data backbone to enable a federated, cross-domain solution that prioritises the information exchanges required for effective command and control. Reliable and resilient network architectures are the critical requirement of a digitised, artificial intelligence-enabled future force tasked to coordinate the actions of autonomous systems. Yet, they are also likely to be its most vulnerable element. Indeed, in the congested and contested information environment in which it is assumed modern militaries will operate, the pervasive, seamless network solution required to link technological advances in any meaningful way is highly unlikely to persist.[4]
SUN TZU WARNED, “TACTICS WITHOUT STRATEGY IS THE NOISE BEFORE DEFEAT.” IN A SIMILAR VEIN, TECHNOLOGY WITHOUT INTEGRATION, OR A CONCEPTUAL UNDERPINNING, IS THE HYPE BEFORE THE LETDOWN.
This is not, however, a challenge unique to Western militaries. Russia and China have invested billions in upgrading existing capabilities, yet very few of these platforms were designed to seamlessly integrate artificial intelligence or partner with autonomous robotic systems. This means retro-fitting is required, making emergent technology an integration problem of similar magnitude to that faced by their Western competitors. Accordingly, it is perhaps overly pessimistic to believe Russia and China have the advantage in militarizing emergent technologies. Culturally, it is also likely to take significant time and effort for China and Russia to adapt entrenched command and control approaches to absorb the shock generated by the introduction of artificial intelligence and autonomous systems. To expect, therefore, that Russia and China will be more successful with the introduction of artificial intelligence and autonomous systems than the United States is perhaps to underestimate the impact of entrenched military culture the world over, and in turn to overestimate available capability.
Advanced militaries must be careful to avoid a situation where military technical determinism guides strategic thinking. Indeed, if the experience of the last twenty years has taught us nothing else, it is that technological superiority does not guarantee military success. Sun Tzu warned, “tactics without strategy is the noise before defeat.” In a similar vein, technology without integration, or a conceptual underpinning, is the hype before the letdown. Technology enables tactical actions to advance a strategy — but technology cannot replace tactics. Militaries cannot mitigate a lack of strategy nor support operational concepts purely through a reliance on the promise of technology, especially as technology is unlikely to be either fully integrated across the force or available at the critical time.
ARTIFICIAL INTELLIGENCE MAY NOT BE THE ANSWER
As Michael Horowitz notes: “The promise of [artificial intelligence] — including its ability to improve the speed and accuracy of everything from logistics and battlefield planning to human decision making — is driving militaries around the world to accelerate research and development.”[5] This has led some to suggest that artificial intelligence could potentially reduce the human cost of war. Artificial intelligence is, however, unlikely to craft strategy or design the operational concepts to guide its own employment. Artificial intelligence could, and likely will, influence the crafting of strategy and almost certainly the subordinate operational concepts that guide its execution. However, there is no a priori reason why any strategy or operational concept should succeed if applied in an incongruent context. Artificial intelligence could aid strategic success, as any other means might, but there is no reason to believe it a panacea for the specific problems of future warfare. It could just as easily prove a hindrance.
Rather than solving contemporary integration challenges, artificially intelligent, autonomous machines are almost certain to create new ones. Integration is a constant challenge in a perpetually changing technological context, and thus we should expect more rather than fewer integration challenges on the far side of attempts at integrating artificial intelligence into our force structures. In fact, artificial intelligence is likely to be among the greatest military integration challenges due to the additional complexities it creates for network architectures (particularly at the tactical level), which are already behind industry best practice. Researchers are also now recognising that it is not simply the artificially intelligent systems that matter in this bold new era of military competition. Rather, the critical nexus is the means by which artificially intelligent capabilities are integrated within military command and control structures, processes, and systems to maximise their latent potential. Command and control will be forced to adapt to maximise the potential of technological advances. However, there is currently neither the proven technology nor the organisational impetus to do so.
Furthermore, before seeking to hand over responsibilities to artificially intelligent machines, it is essential to understand the potential risks associated with the technology. Artificial intelligence relies on humans to assist in the process of becoming intelligent: from writing algorithms to labelling data, such tasks are as fallible to human mistakes, biases, and oversights as any other inherently human interaction. However, perhaps of greatest concern is the inability of machine-learning systems to explain the logic behind the conclusions they reach. Critically, the potential inability of humans to understand machine decision-making criteria for the use of force offers ethical challenges unique in the history of warfare.
MILITARY PROFESSIONALS SHOULD, THEREFORE, BE CAUTIOUS ABOUT THE IMPACT OF UNPROVEN TECHNOLOGY ON WAR’S POSSIBLE CHARACTER IN THE DIM FUZZINESS OF FIFTY YEARS HENCE, WHEN A FOCUS ON THE CONCEIVABLE CHARACTER OF THE NEXT WAR IS PERHAPS A MORE PRESSING CONCERN.
The risk of machine bias and the consequent errors in judgement resulting from overreliance on artificial intelligence should be much more concerning to us than their human cognitive equivalents. At least we are capable of recognising human frailty, of understanding individual and collective bias. It is much more difficult to achieve the same with machines whose deep learning may have been distorted from the outset. To assume that machines are above bias and misjudgement, or to presume that we can always alleviate the impact of it, is to invite surprise and potentially disaster. Indeed, evolving technologies may make the conduct of future battles more difficult to predict rather than less.
As a dual-use technology facing rapid commercial as well as military development, for every advance in artificial intelligence there is likely to be a corresponding defense or counter in the cyber domain which will undermine any “decisive” impact it can have. By necessity, artificial intelligence is driven and sustained by advanced computing power and network connectivity. Therefore, its influence is likely to be just as circumscribed by targeted cyber actions (particularly computer network attack, but also computer network exploitation) as any other networked capability. Given its reliance on connectivity and power, this vulnerability to cyber disruption has potential to critically undermine any command and control or combat system built on artificial intelligence’s persistent availability.
Similar to the flawed predictions of early 20th-century airpower theorists, there is a growing tendency to see future warfare as dominated by particular technologies. This reinforces the importance of separating science fiction from science fact. Despite some breakthrough advances in artificial intelligence development, Michael Anissimov estimates that it could take until 2060-2070 for it to reach the level of maturity required to satisfy many imagined military purposes. While this should not prevent militaries from taking heed of artificial intelligence’s potential applications, it also should not distract from resolving urgent force modernization challenges like network capacity and combat system integration.
UNTIL SYSTEMS, CONCEPTS, AND CAPABILITIES ARE TESTED IN COMBAT OVER A SUSTAINED PERIOD OF TIME AGAINST AN ACTUAL ADVERSARY THE CHARACTER OF FUTURE WARFARE IS MERELY A BEST GUESS. A FOCUS ON THE PROMISE OF UNPROVEN TECHNOLOGY ABSENT A CLEAR UNDERSTANDING OF HOW IT WILL EVENTUALLY ACHIEVE STATED POLITICAL OBJECTIVES IS FOLLY.
If there is to be another great power conflict, it is just as likely to occur in the next few decades as in the latter half of the 21st century. Military professionals should, therefore, be cautious about the impact of unproven technology on war’s possible character in the dim fuzziness of fifty years hence, when a focus on the conceivable character of the next war is perhaps a more pressing concern.
CONCLUSION
Advanced militaries are only just beginning to perceive some of the different factors that may, in time, come to shape the character of future wars. Despite their best efforts, no nation-state can ever be fully prepared for the character of the next war. As such, the potential military applications of emergent technology should not be viewed as a panacea for the chaos, friction, and chance that will continue to define war in the future. Regardless of the possibilities offered by technological advances, success in the next conflict is likely to be just as reliant on human genius (though no doubt supported by increasing human-machine teaming) as any conflict on which humanity has previously embarked.[6]
Until systems, concepts, and capabilities are tested in combat over a sustained period of time against an actual adversary, the character of future warfare is merely a best guess. It is important that advanced militaries seek to harness conceivable technological advantages. This must, however, be part of a balanced approach that recognises the primacy of policy and strategy, and the criticality of concepts that address integration first rather than last. Indeed, a focus on the promise of unproven technology absent a clear understanding of how it will eventually achieve stated political objectives is folly. Worse still is to believe that such technology promises decisive battlefield advantage when, in fact, it offers adversaries significant vulnerabilities to exploit.
Mark Gilchrist is a serving Australian Army Officer. The views offered here are his own and do not reflect any official positions.
This article appeared originally at Strategy Bridge.
NOTES:
[1] In the context of this article “advanced militaries” is taken to mean the United States, China, and Russia. However, the issues identified are just as relevant to all militaries grappling with the challenges of integrating emergent technology alongside legacy equipment and capabilities.
[2] As the American economist Robert Gordon argues, the period from 1870 – 1970 saw the greatest cumulative technological advances in human history, spurred in large part by the influence of the internal combustion engine. Since 1970, Gordon contends that the rate of change at a macro level has decelerated and been “channelled into a narrow sphere… having to do with entertainment, communications and the processing of information.” While these latter areas are of critical importance to military forces they do not change the nature of war, nor do they play the most critical part in defining its character. This is not the first time that rapid advances in communications have forced reconsideration of how information can and should be processed for greatest military advantage. The use of the telegraph in the U.S. Civil War and the emergence of wireless telephony in the First World War are but two examples of informational changes that continued to build on the speed of information passage pioneered by their predecessors in the armies of Rome or the Mongol Empire.
[3] Combat is tangible. Theorists can talk about the possible character of war, but the physical interaction –– combat –– is where the rubber hits the road, where one can see how different approaches, capabilities, and the like actually come together to define the character of a conflict. Change in the character of war is also best understood through the combat involved, as it should reflect changes in policy, ethical considerations, strategy, and so on. Everything is theoretical until combat occurs; only then does it become reality.
[4] Militaries operate in austere environments where increasingly they will rely on, or compete with, civilian information infrastructure that is vulnerable to disruption. In order to achieve the high bandwidth coverage required to make the most of modern (let alone future) combat systems, military forces will be increasingly reliant on airborne or satellite communications links as redundancy for terrestrial systems. The direct targeting of terrestrial, airborne, and satellite links (which must be assumed will occur) will likely unhinge the exquisite network architectures upon which most military technology relies, greatly diminishing potential lethality.
[5] In her lecture to the Royal Australian Air Force’s 2018 Air Power Conference – Air Power in a Disruptive World, Professor Genevieve Bell offers an apt description of artificial intelligence. She states: “AI is a constellation of technologies that runs from data to machine learning and sensing and algorithms, to include I would actually argue both ethics and the data that fuels that whole cycle. And artificially intelligent technologies are with us now and functioning, but they are the beginning of a much longer transformation that we need to be paying attention to. So, every time I say AI imagine that it is shorthand for something much more complicated.”
[6] This continues the journey started by man’s partnership with horses, refined over the ages with soldiers operating mechanised vehicles, and now manned vehicles partnered with unmanned systems. All, however, are reliant on the human ability to make sense of chaos, exploit the fog of war, and create decision from disorder.