Dennis Murphy
In June 2021, the Automatic Identification System (AIS), the transponder network designed to track and monitor the locations of ships at sea, showed two NATO warships close to Russian-occupied territory. The usefulness of such a tool is obvious to anyone who operates in platform-heavy domains of conflict, or who wishes to divine the strategic and operational inclinations of adversaries. There was only one problem: the ships’ locations had been falsified.[1] This was not the first such incident, and it will certainly not be the last. In July, Mark Harris warned that phantom ships are rapidly becoming the “latest weapons in the global information war.”[2]
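To make the problem concrete, consider one crude way an analyst might sanity-check a reported track. The sketch below is only an illustration, not a description of any fielded system: it assumes each report is a (latitude, longitude, unix-time) tuple and flags a vessel whose consecutive reports imply a physically implausible speed; the 40-knot threshold is an arbitrary assumption for the example.

```python
import math

def implied_speed_knots(report_a, report_b):
    """Great-circle distance between two position reports divided by the
    elapsed time, in knots. Each report is (latitude, longitude, unix_time)."""
    lat1, lon1, t1 = report_a
    lat2, lon2, t2 = report_b
    earth_radius_nm = 3440.065  # mean Earth radius in nautical miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    distance_nm = 2 * earth_radius_nm * math.asin(math.sqrt(a))
    hours = (t2 - t1) / 3600.0
    return distance_nm / hours

def track_looks_falsified(reports, max_plausible_knots=40.0):
    """Flag a track whose consecutive reports imply an impossible speed."""
    return any(
        implied_speed_knots(a, b) > max_plausible_knots
        for a, b in zip(reports, reports[1:])
    )
```

A check this crude will catch only clumsy falsification; the harder problem, as the rest of this essay argues, is what happens when the volume of data makes even crude checks impossible to apply by hand.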
Misinformation in naval warfare, or in military statecraft more broadly, is far from new. Leading an adversary to believe you possess more vessels than you do, or that the bulk of your forces are deployed elsewhere, was standard practice in pre-modern naval campaigns. Though satellite tracking has made the locations of surface vessels largely transparent, submarines continue the tradition of stealth. Under the sea, the fog of war remains as thick today as it ever was.
Unfortunately, a troubling mindset, all too common among modern practitioners of strategy and operations, is increasingly incompatible with the fog of war. In this view, the ideal future of warfare is one in which command and control is increasingly centralized, local commanders enjoy near-total battlespace awareness, and all-knowing algorithms all but erase the fog of war. From the grand strategic to the tactical, every decision will be informed by near-total domain awareness.
Such a mindset is understandable. Increased coordination, battlespace awareness, and reliance on algorithms have been central to the modern American way of war. Recent efforts surrounding Joint All-Domain Command and Control (JADC2) are merely the latest in a long line of innovations comprising this trend.[3]
White Noise (Jorge Stolfi/Wikimedia)
There’s just one major flaw: noise.
The Infinite Nature of Facts
We often forget that the potential points of information one can state about a particular thing are nearly infinite. While this may seem strange at first, the problem is well known in other disciplines. There is a famous paradox in the mapping of coastlines and rivers: with every increase in granularity, the measured length of the coastline grows, and as the measuring stick shrinks, it grows without apparent bound. At some point we must simply declare that we have an acceptable amount of information about a particular environment. The coastline of Tonga is not actually infinite, nor is it remotely comparable to that of the United States. We know this to be true, yet we do not seem to recognize that the same paradox emerges when discussing nearly any kind of data.
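The effect is easy to reproduce in miniature. The short sketch below uses the Koch curve, the textbook fractal analogue of a coastline, purely as an illustration: every time the measuring segment shrinks to a third of its previous size, the measured length grows by a factor of four-thirds.

```python
# The coastline paradox in miniature: the Koch curve's measured length grows
# by a factor of 4/3 every time the measuring stick shrinks to a third of its
# previous size, so "the length" depends entirely on the granularity chosen.
for step in range(9):
    ruler_length = (1 / 3) ** step       # size of the measuring segment
    measured_length = (4 / 3) ** step    # total length found at that scale
    print(f"ruler = {ruler_length:.5f}   measured length = {measured_length:.3f}")
```

The curve itself never changes; only the granularity of the measurement does, and the answer grows with it.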
As our tools become capable of picking up more information, we will see exponential growth in the number of facts derivable from them. For satellite imagery, this means a combinatorially explosive number of pixels over a given area; for other sensors, it means ever finer variation in radio telemetry. Regardless of the instrument employed, the pattern holds. At a certain point, sharpening our awareness of precise fluctuations on the surface of the world’s oceans, or of the grains of sand on a beach, becomes useless, even if we could reliably know and understand everything we are seeing. Worse, because understanding everything we see is impossible, we are increasingly forced to rely on algorithms to sort information into usable forms, and algorithms trained on monstrously large quantities of data will falter when employed in low-information conditions. This is a disaster waiting to happen.
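Some back-of-the-envelope arithmetic suggests the scale involved. The sketch below assumes roughly 361 million square kilometers of ocean surface and a few illustrative ground resolutions; neither figure describes any particular sensor or program.

```python
# Back-of-the-envelope: pixels in a single snapshot of the world's oceans
# (~361 million square kilometres) at a few illustrative ground resolutions.
OCEAN_AREA_M2 = 361e6 * 1e6  # square kilometres converted to square metres

for gsd_m in (10.0, 1.0, 0.5):  # ground sample distance: metres per pixel edge
    pixels = OCEAN_AREA_M2 / gsd_m ** 2
    print(f"{gsd_m:>4} m resolution -> {pixels:.2e} pixels per snapshot")
```

Every halving of the ground sample distance quadruples the number of pixels to be stored, screened, and understood.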
In our quest to preempt strategic surprise, we are fostering a system that must sort through truly unwieldy quantities of information. At a modest scale, human beings can verify most of the information that comes in, but as the volume grows, so does the manpower required to understand it. Given the limited number of strategic thinkers and analysts who can be employed, tools enabled by artificial intelligence will be required. And with every additional level of granularity acquired comes a greater reliance upon the tools we use to create order out of this chaos of our own making. This creates two vulnerabilities from within the system and fosters another major vulnerability from outside it.
Calculators Are Not Wise
The first vulnerability lies within the algorithms themselves. Too often, a romantic techno-fetishism permeates those who work with technology: the belief that the algorithm can do whatever they want and will work as intended. This is, of course, rubbish. Most algorithms are only as good as the programmers who create them. At the heart of every artificial intelligence tool is a calculator. The calculator has no awareness of its actions; it does exactly what it is asked to do, and it will do so correctly. A major issue arises when there are no error signals, when the algorithm is superficially correct but catastrophically wrong at a systemic level. When dealing with simple units of analysis, or when algorithms are constantly checked against the real world, catastrophic errors are relatively easy to detect. This is not true when algorithms deal with increasingly complex data.
When there is too much information to sort through, programmers lose the ability to understand their machines. As we increase the power of our tools, we necessarily forfeit some fraction of our ability to double-check them. This is not a new problem; we have known about it for years.[4] Given the volume of information some practitioners want our systems to digest and understand, we will likely not realize something has gone wrong until we fail to engage an adversary effectively, or until some error surfaces months later.
It is an irony for the ages: as our confidence in our understanding of the twenty-first-century strategic landscape grows, our real understanding will necessarily decrease. The armies of the future will make life-and-death decisions based on algorithms we do not, and cannot, fully understand. As we are increasingly pressured to reduce the presence of humans in conflict, from unmanned aircraft and vessels to an increased reliance on special forces, we will steadily lose the ability to check in real time the validity of the artificial construction of reality our algorithms generate. If we ever fully make such a transition, we will be like the blind men of ancient fable who believed themselves all-seeing.
Your Discriminators Can (and Will) Be Tricked
There is a never-ending war being waged among the algorithms involved in machine learning, a conflict fought between generators and discriminators. The goal of the generator is to create something that can fool the discriminator; the goal of the discriminator is to detect the generator’s fakes. This process, the heart of what are called generative adversarial networks, is integral to machine learning, and it is what makes sophisticated spoofing possible. As the generator gets better at faking results, the discriminator must get better at detecting them. This is how artificial intelligence can generate realistic images of fake persons.[5] It is easy to think of ways this can be used to foster widespread disinformation campaigns, and I wrote some sketches about the possibility in 2019. It has practical implications for the battlefield as well.
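A minimal sketch of that loop, assuming PyTorch and toy fully connected models, might look like the following; the architectures, dimensions, and learning rates are placeholders for illustration, nothing like an operational system.

```python
import torch
import torch.nn as nn

# Toy generator-versus-discriminator training loop. The fully connected
# models and dimensions below are illustrative placeholders only.
latent_dim, data_dim = 64, 256
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def adversarial_step(real_batch):
    n = real_batch.size(0)
    real_label, fake_label = torch.ones(n, 1), torch.zeros(n, 1)

    # 1. The discriminator learns to separate real data from generated fakes.
    fakes = generator(torch.randn(n, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_batch), real_label) + \
             loss_fn(discriminator(fakes), fake_label)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. The generator learns to make the discriminator call its fakes "real".
    g_loss = loss_fn(discriminator(generator(torch.randn(n, latent_dim))), real_label)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Each side improves only because the other does; the arms race is not a side effect of the method, it is the method.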
Machine learning requires the ability to recognize novel information. When you teach an algorithm to detect an object and correctly classify it, you are training a discriminator. If you want to be able to tell that an F-35 is, indeed, an F-35, you must provide your algorithm with an enormous amount of training data on the F-35, then test it against new variations of data containing an F-35. As your discriminator improves, it gets better at picking out instances of an F-35. In a world where everything is what it appears to be, this is where one might be tempted to stop: if something is an F-35, your discriminator should eventually get good enough to detect that it is an F-35.
F-35 (Liz Kaszynski/Lockheed Martin)
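A minimal sketch of that kind of supervised training, again assuming PyTorch, is below; the tiny convolutional network and the two-class setup (target aircraft present or not) are stand-ins for illustration, not a real recognition system.

```python
import torch
import torch.nn as nn

# Toy image classifier and one supervised training step. The tiny network and
# the two-class setup (target present / not present) are illustrative stand-ins.
classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),
)
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

def train_step(images, labels):
    """images: (N, 3, H, W) tensor; labels: (N,) tensor, 1 = target present."""
    loss = nn.functional.cross_entropy(classifier(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```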
When you want to see whether you can generate fake data that tricks your discriminator into thinking an F-35 is present where it is not, you employ a generator. The generator, trained against the discriminator, learns the patterns the discriminator relies on and exploits them to trick it. And just as one can trick the algorithm into seeing something that is not there, an adversary can use the same method to camouflage its own forces. Because the amount of information these algorithms ingest is mind-blowingly large, some hideously simple patterns may emerge that break the discriminator. Such a thing would never pass visual inspection, but visual inspection is rendered impossible by the sheer amount of data one would need to sort through. You will not know that such an exploit exists until you are either incredibly lucky or something blows up.
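One well-documented version of this trick in the research literature is the fast gradient sign method: nudge every pixel a tiny step in the direction that most confuses the model, keeping the change too small for a human to notice. A minimal sketch, assuming a PyTorch classifier like the one above, follows; the epsilon value is an illustrative assumption.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, images, labels, epsilon=0.01):
    """Fast gradient sign method: shift each pixel a small step in the
    direction that most increases the classifier's error, producing an image
    that looks unchanged to a human but can flip the model's answer."""
    images = images.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    return (images + epsilon * images.grad.sign()).detach()
```

Against a model fed millions of images, perturbations like these are exactly the hideously simple patterns no human will ever have the chance to inspect.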
False positives for ships at sea can result either from your algorithms failing all on their own or from adversarial intent. Unless you know you are operating in a competitive environment, you will have no way of knowing for sure whether your algorithms were tricked or merely stupid. As command-and-control systems become increasingly centralized, either error will be magnified. If a command believes there is an adversary fleet just off the coast of Alaska, it may devote air and sea resources to oppose it, naturally compromising our strategic and operational effectiveness elsewhere. Given events in the Black and North Seas, this is a vulnerability we should be especially on guard against. Strategic plans predicated on flawed or manipulated intelligence, surveillance, and reconnaissance capabilities are doomed to fail.
Every time we react to something we think we see, we provide the adversary with information about the vulnerabilities within our systems. Unless our systems are compromised and the adversary can infiltrate our communications and our artificial intelligence tools, an opposing state should have only a limited ability to determine the efficacy of its spoofing attempts. If we begin to see complex spoofing efforts that lead us to believe platforms are present where they are not, we should not overreact. Rather, we should quietly analyze what caused the error and correct for it, maintaining heightened secrecy at every step.
If our systems ever do become compromised, however, the result would be utterly disastrous. Should we ever complete the centralization of our command and control, its compromise, even for a moment, would hand an adversary the locations, capabilities, missions, and personnel of our forces to the same degree our own commanders know them. That, coupled with hypersonic weapons, could be the precursor to an imminent military defeat. This, however, is a topic for another day.
Abundant Information Is Habit Forming
When one’s decision-making paradigm is centered on possessing large amounts of information, its ability to function in low-information environments becomes compromised. A JADC2 model of warfare is incompatible with information-blackout conditions. If our commanding officers are trained to act as if they always know the locations of both adversary and allied forces, it is worth asking whether that training will serve them well in an environment where both friend and foe are uncertain. This is a necessary drawback of any system that seeks to banish the fog of war: what do you do when that system fails?
So much of our modern world is connected to the internet of things that it is often difficult to imagine an environment in which the internet of things falls apart. Relaying complicated information through satellite communications could be rendered impossible shortly after the onset of hostilities with a near-peer adversary. While a more localized form of command and control could be established around a particular fleet, this too can be disrupted by sophisticated and novel jamming technologies. Small drones that are difficult to target may be employed, for instance, to degrade the fidelity of communications and shipboard sensors.
To function in such an environment, warfighters would necessarily revert to older, slower, and less reliable means of communication and coordination. Leaders would need to be comfortable making high-stakes decisions with little reliable information. The cloud is not permanent. Systems are vulnerable,[6] and the technology that underpins them is often unreliable.[7] The kind of omniscience some practitioners desire is possible only under the most idealized conditions, against a non-peer adversary. With that in mind, we can begin to understand this mindset as a product of the Gulf War and the Global War on Terror. If we do not find a way to discard this liability, then our first conflict with a near-peer adversary will be yet another casualty of the twenty years of war we fought in Afghanistan.
Problems at the operational level will be magnified at the strategic level as policy planners continue to develop long-term plans based upon flawed or compromised models. How many disasters in human history have come about because leadership implemented strategies based upon groundless certainties? As we pursue whole-of-government efforts to develop grand strategic plans, those efforts will be doomed to fail if we predicate them on always knowing nearly everything about the balance of power across all the DIME-FIL domains (diplomatic, information, military, economic, financial, intelligence, and law enforcement).
Sorting Through the Noise
It will be necessary for the modern strategist and military commander, regardless of the domain in which they operate, to be comfortable with the inherent fallibility of their tools. While sophisticated algorithms can generate convincingly granular models of reality, commanders should always maintain a healthy skepticism of the information they receive. There is a natural trade-off between the volume of data and the ability of one’s staff to verify what they are reporting. Information sorted by artificial intelligence at scale largely operates beyond its programmers’ awareness, and is thus well beyond the ability of any individual commander to understand completely.
This imperfect awareness necessarily means the models used will generate false positives. Tricking the system is a natural way an adversary will attempt to influence the battlespace of the future, whether by spoofing or by camouflaging its own forces. And even if such threats could be mitigated, no system is invulnerable. Disruptive technologies could very well be employed to dramatically reduce the efficacy of our tools, perhaps to the point where they become inoperable. The modern military commander must therefore also be comfortable operating in low-information environments.
Contemporary strategists are at risk of losing sight of the real world when they uncritically embrace the fruits of the information revolution without understanding the fragility and vulnerabilities of the artificial intelligence infrastructure that underpins it.
It is unfortunate that this is necessary not because of a technological shortfall we could remedy, but because the problem is inherent in the very systems and paradigms we are employing to navigate the future of warfare. This does not mean we should abandon our approach. It does mean we must ensure our approach to modern warfare is resilient.
We are at our most vulnerable when we believe unquestioningly both in the efficacy of our tools, and in the permanence of their presence.