Abhijnan Rej
Effective war-fighting demands that militaries be able to peek into the future. Small wonder, then, that victory in battle is so often linked with clairvoyance.
Let me explain. Suppose you are leading a convoy in battle and expect to meet resistance at some point soon. If you could see precisely where and when that resistance will materialise, you could call in an airstrike to vastly diminish the enemy's forces, thereby increasing your chances of victory when you finally meet them.
While modern satellites and sensors linked to battle units provide such a capability, first demonstrated to striking effect by the US military in the 1991 Gulf War, the quest for it has been around as long as wars themselves, which is to say forever. Watchtowers on castles manned by sentries, for example, were also sensors, albeit highly imperfect ones. They sought to render the battlefield "transparent", to use modern terminology, in the face of enemy cavalry charges.
At the heart of this quest for battlefield transparency lies intelligence, the first key attribute of warfare. Our colloquial understanding of the word and its use in the context of war can appear disconnected, but the two are not. If "intelligence refers to an individual's or entity's ability to make sense of the environment", as security-studies scholar Robert Jervis defined it, then intelligent behaviour in war and in everyday life is essentially the same thing. It is, to continue the Jervis quote, the consequent ability "to understand the capabilities and intentions of others and the vulnerabilities and opportunities that result". The demands of modern warfare require that militaries augment this ability with a wide array of technologies.
The goal of intelligent warfare is very simple: see the enemy and prepare (that is, observe and orient) and feed this information to the war fighters (who then decide what to do and finally act through the deployment of firepower). This cycle, endlessly repeated across many weapon systems, is the famous OODA loop pioneered by John Boyd, the maverick American fighter pilot, military theorist, and progenitor of the F-16 jet, beginning in 1987. It is an elegant reimagination of war. As one scholar of Boyd's theory put it, "war can be construed of as a collision of organisations going through their respective OODA loops".
In short, the faster you can complete these loops flawlessly (while your enemy tries to stop you from doing so as it runs its own OODA loops), the better off you are in battle. Modern militaries seek this advantage by gathering as much information as possible about the enemy's forces and disposition through space-based satellites, electronic and acoustic sensors and, increasingly, unmanned drones. Put simply, the basic idea is to have a rich 'information web' in the form of battlefield networks that link war fighters with machines that help identify the targets ahead of them. A good network, mediated by fast communication channels, shrinks time as it were, bringing future enemy action closer.
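To make the cycle concrete, here is a deliberately toy sketch of an OODA loop rendered as software. It is an illustration only: every function name and data structure below is a hypothetical placeholder and does not correspond to any real military system.

```python
# A toy rendering of the OODA cycle as a software loop. Purely illustrative:
# all names and stubbed data here are hypothetical placeholders.
import time

def observe():
    """Observe: gather raw inputs (stubbed; a real system would read sensors)."""
    return {"contacts": [], "timestamp": time.time()}

def orient(observations, picture):
    """Orient: fuse new observations into the running picture of the battlefield."""
    picture.update(observations)
    return picture

def decide(picture):
    """Decide: choose an action from the current picture (trivially stubbed)."""
    return "engage" if picture.get("contacts") else "hold"

def act(action):
    """Act: carry out the decision (here, just report it)."""
    print(f"Action: {action}")

def ooda_loop(cycles=3):
    picture = {}
    for _ in range(cycles):
        picture = orient(observe(), picture)  # Observe and Orient
        act(decide(picture))                  # Decide and Act

if __name__ == "__main__":
    ooda_loop()
```

The point of the sketch is simply that the loop is a repeated cycle: whoever completes it faster, while disrupting the opponent's, gains the advantage.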
The modern search for the decisive advantage that secret information about enemy forces can bring came to the fore during the Cold War, driven by the fear of nuclear annihilation at the hands of the enemy. In the mid-1950s, the United States Central Intelligence Agency's U-2 spy planes flew over large swathes of Soviet territory to assess enemy capabilities; its Corona satellite programme, launched around the same time, marked the beginning of space-based reconnaissance. Both were among the most closely guarded secrets of the early Cold War.
But the United States also used other, more exotic, methods to keep an eye on Soviet facilities and gain the upper hand should war break out. For example, it sought to detect enemy radar installations by looking for the faint radio waves they bounced off the moon.
The problem with relying on (sophisticated) cameras alone as sensors, as was the case with the U-2 planes and the Corona satellites, is that one is at the mercy of weather conditions such as cloud cover over the area of interest. Contemporary airborne or space-based radars, which build composite images of the ground using pulses of radio waves, overcome this problem: in general, radar performance does not depend on the weather, despite a famous claim to the contrary. That said, these 'synthetic aperture radars' (SAR) are often unable to pick up the very fine details that optical cameras can.
Nor is the use of sensors limited to land warfare. Increasingly, underwater 'nets' of sensors are being conceived to detect enemy ships. It is speculated that China has already made considerable progress in this direction by deploying underwater gliders that can transmit their detections to other military units in real time. The People's Liberation Army has also sought to use space-based LIDARs (radar-like instruments that use pulsed lasers instead of radio waves) to detect submarines 1,600 feet below the surface.
Means of detection are, of course, only a small (but significant) part of battlefield transparency. A large part of one's ability to wage intelligent war depends on integrating the acquired information with battle units and weapon systems for final decision and action. But remember, the first thing your enemy is likely to do is prevent you from doing so, by jamming electronic communications or even targeting your communications satellite with a missile of the kind India tested last March. In a future war, major militaries will operate in contested environments where a major goal of the adversary will be to disrupt the flow of information.
Artificial intelligence (AI) may eventually come to the rescue of OODA loops, but in a manner whose political and ethical costs are still unknown. Note that AI too obeys the definition Jervis set for intelligence; the holy grail is the design of all-purpose computers that can learn about their environment on their own and make decisions autonomously based on circumstances.
Such computers are still some way off. What we do have is a narrower form of AI in which algorithms deployed on large computers learn certain tasks by teaching themselves from human-supplied data. These machine-learning algorithms have made stupendous progress in recent years. In 2016, Google's AlphaGo, a machine-learning programme, defeated a world champion at Go, a notoriously difficult East Asian board game, setting a new benchmark for AI.
Programmes like AlphaGo are modelled on how networks of neurons in the human brain, and in particular in the part of the brain responsible for processing visual images, are arranged and known to function. It is therefore no surprise that the problem of image recognition has served as a benchmark of sorts for such programmes.
Recall that militaries are naturally interested not only in gathering images of adversary forces but also in recognising what they see in them, a challenge with often-grainy SAR images, for example. (In fact, the simplest machine-learning algorithm modelled on neurons, the Perceptron, was invented by Frank Rosenblatt in 1958 with US Navy funding.) Machine-learning programmes have so far made their breakthroughs mostly with optical images: last year, in a demonstration by the private defence giant Lockheed Martin, one such algorithm scanned the entire American state of Pennsylvania and correctly identified all petroleum fracking sites. But radar images are not out of reach.
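To give a flavour of how simple the ancestor of today's systems was, here is a minimal sketch of Rosenblatt's perceptron learning rule. The two-dimensional toy data is invented purely for illustration and has nothing to do with any real imagery.

```python
# A minimal sketch of Rosenblatt's perceptron learning rule.
# The toy data below is invented purely for illustration.
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Learn weights w and bias b so that sign(w.x + b) matches the labels y (+1/-1)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Update only on misclassified points, nudging the decision
            # boundary towards the correct side.
            if yi * (np.dot(w, xi) + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Two linearly separable clusters of points standing in for two classes of object.
X = np.array([[2.0, 1.0], [1.5, 2.0], [3.0, 3.0],
              [-1.0, -2.0], [-2.0, -1.5], [-3.0, -1.0]])
y = np.array([1, 1, 1, -1, -1, -1])

w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))  # for separable data this should reproduce y
```

Modern image-recognition networks stack millions of such units in layers, but the underlying idea of adjusting weights from labelled examples is the same.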
Should AI programmes become able to process images across all wavelengths, one way to bypass the 'contested environment' problem is to let weapons armed with them observe, orient, decide, and act without any need for humans. In a seminal book on lethal autonomous weapons, American defence strategist Paul Scharre describes this as taking humans out of the OODA loop. As he notes, while the United States officially does not subscribe to the idea of weapons deciding what to hit, its research agencies have continued to make significant progress on automated target recognition.
Other forces have not been as circumspect about deploying weapon systems in which humans play no significant role in the OODA loop. The Russian military has repeatedly claimed that it can deploy AI-based nuclear weapons, claims that have been interpreted to include nuclear-armed cruise missiles travelling at more than five times the speed of sound.
How can India potentially leverage such intelligent weapons? Consider a nuclear counterforce strike against Pakistan, in which New Delhi destroys Rawalpindi's nuclear weapons before they can be used against Indian targets. While India's plans to do so are a subject of considerable analytical debate, one can, perhaps wildly, speculate about the following scenario.
Given Pakistan's mountainous topography, including the Northern Highlands and the Balochistan plateau, it is quite likely to conceal its nuclear weapons there, inside cave-like structures or hardened silos, at sites that are otherwise very hard to recognise. Machine-learning programmes dedicated to recognising such sites in satellite surveillance data could allow India to identify many more of them than is currently possible. This ability, coupled with precision-strike missiles, would vastly improve India's counterforce posture, should it officially adopt one.
All this is not to say that the era of omniscient intelligent weapons is firmly upon us. Machine-learning algorithms for pattern recognition remain works in progress in many cases and are far from fool-proof. (One widely discussed programme, for example, could be tricked into mistaking a turtle for a rifle.) But if current trends in the evolution of machine learning continue, a whole new era of intelligent warfare may not be far off.