Gregory C. Allen
In March, WIRED ran a story with the headline “Russia's Killer Drone in Ukraine Raises Fears About AI in Warfare,” with the subtitle, “The maker of the lethal drone claims that it can identify targets using artificial intelligence.” The story focused on the KUB-BLA, a small kamikaze drone aircraft that smashes itself into enemy targets and detonates an onboard explosive. The KUB-BLA is made by ZALA Aero, a subsidiary of the Russian weapons manufacturer Kalashnikov (best known as the maker of the AK-47), which itself is partly owned by Rostec, a part of Russia’s government-owned defense-industrial complex.
The WIRED story understandably attracted a lot of attention, but those who only read the sensational headline missed the article’s critical caveat: “It is unclear if the drone may have been operated in this [an AI-enabled autonomous] way in Ukraine.” Other outlets re-reported the WIRED story, but irresponsibly did so without the caveat.
WIRED’s assessment that Kalashnikov claims the KUB-BLA “boasts the ability to identify targets using artificial intelligence” is based on two main pieces of evidence: a Kalashnikov press release about ZALA Aero’s “Artificial Intelligence Visual Identification (AIVI)” capabilities for its unmanned aircraft, and the original Kalashnikov press release announcing the KUB-BLA in 2019.
However, these two pieces of evidence are less than they seem.
The Russian-language AIVI press release never mentions the KUB-BLA or military applications. Instead, it describes a ZALA Aero machine-learning AI drone product line that is marketed to industrial and agricultural sectors. Incorporating modern machine-learning AI into military applications is significantly more difficult than incorporating it into industrial or agricultural ones. Modern machine-learning AI using deep neural networks offers the opportunity for incredible gains in performance, but that performance depends on having lots of training data during development. Moreover, that training data needs to closely resemble operational conditions.
In general, it is much easier to get such training data from commercial customers than from an enemy military, especially if friendly weapons systems and sensors do not often come within range of enemy ones. The most mature military AI applications are ones like satellite reconnaissance: even in peacetime, satellites get to take a lot of pictures of Russian and Chinese military forces, and those pictures can be digitally labeled by human experts to turn them into training data. Training data is what machine-learning AI systems learn from: the combination of a learning algorithm and labeled training data is how an AI system learns to recognize what is in an image. But training data is generally application-specific. Satellite image recognition training data only helps build satellite image recognition AI. One cannot magically use labeled satellite imagery to train an AI for a robotic drone's targeting computer (at least not with today's technology).
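To make this concrete, the sketch below shows, in Python with PyTorch, what "learning from labeled satellite images" looks like in practice. The dataset directory, the labels it implies, and the training details are hypothetical illustrations, not details of any actual military program:

```python
# Minimal sketch (hypothetical setup): fine-tuning a pretrained image
# classifier on human-labeled satellite imagery. The folder path and the
# class labels it implies are invented for illustration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Human experts sort satellite images into folders by what they show;
# those folder names become the only classes the model can ever learn.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_data = datasets.ImageFolder("labeled_satellite_images/", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Reuse a generic pretrained backbone, but rebuild the final layer for
# exactly these satellite-image classes and nothing else.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# Everything the resulting model "knows" comes from overhead satellite views.
# A loitering munition sees low-altitude, oblique video, inputs far outside
# this training distribution, so the labeled satellite data cannot simply be
# repurposed for its targeting computer.
```

The point of the sketch is the dependency, not the algorithm: the learning loop is generic, but the labeled, in-domain imagery is the scarce ingredient, which is why commercial and reconnaissance applications mature faster than battlefield targeting ones.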
Getting enough of the right sort of training data to incorporate modern AI into, say, a robotic tank's targeting computer is a much tougher technical challenge. It is not impossible in principle, but in practice there are far fewer opportunities to collect the right sort of training data.
This is not to say Russia has not tried. In the past, Rostec and Kalashnikov executives have not been shy about their attempts to develop weapons that successfully combine modern AI and combat autonomy, so it would be odd if they had succeeded in doing so with the KUB-BLA and not disclosed it in their marketing materials. Kalashnikov has been heavily promoting the KUB-BLA for both Russian and international customers.
What does Kalashnikov say about the KUB-BLA specifically? The 2019 KUB-BLA announcement states that the system has two means of delivering the drone and its explosive warhead to target coordinates: “The target coordinates are specified manually or acquired from [the sensor] payload targeting image.”
That vague latter description is what led many to assume the KUB-BLA was using AI. However, "payload targeting image" is consistent with how many other precision-guided munitions and loitering drone munitions work, including ones that do not use any advanced AI capabilities. A Rostec executive specifically described the KUB-BLA as a Russian "domestic analogue" to the Israeli-built Orbiter 1K drone, which looks nearly identical. The Orbiter 1K comes with a ground control station where human operators monitor the video coming from the drone's sensor and select targets directly from the video feed.
In other words, a human has already selected the target prior to the drone attacking it, and the drone is only autonomously maintaining target lock and navigating to the target, not autonomously selecting and deciding to engage targets. Autonomy over decisions to "select and engage targets" is the specific standard in U.S. Department of Defense policy for what qualifies as an "autonomous weapon system." Fire-and-forget munitions—which is the standard term used to describe not only the Orbiter 1K but also heat-seeking missiles like the Javelin and Stinger—do not qualify as autonomous weapons. Those heat-seeking missiles do not use modern deep neural network machine learning, but they do use thermal image processing algorithms that were once considered state of the art. They, and many more systems like them, have been in use for decades by dozens of militaries around the world.
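To illustrate that division of labor in code, here is a deliberately simplified, hypothetical control loop written in Python purely for illustration; it is not based on any published KUB-BLA or Orbiter 1K software, and every name in it is invented. The human operator selects the target once from the video feed; everything the machine does afterward is tracking and navigation:

```python
# Hypothetical sketch of "human selects, machine tracks." Not drawn from any
# real weapon's software; invented names and toy geometry for illustration.

def human_selects_target(objects_in_view):
    """A ground operator reviews the objects visible in the video feed and
    picks one. The decision to engage stays with the human."""
    print("Operator reviewing:", [obj["name"] for obj in objects_in_view])
    return objects_in_view[0]  # e.g., the operator clicks the first object

def autonomous_tracking(target, position, step=1.0):
    """Onboard autonomy: keep steering toward the already-selected target.
    No new targets are ever chosen or engaged by the machine."""
    dx = target["x"] - position["x"]
    dy = target["y"] - position["y"]
    dist = (dx**2 + dy**2) ** 0.5
    if dist < step:
        return target["x"], target["y"], True  # reached the locked target
    return (position["x"] + step * dx / dist,
            position["y"] + step * dy / dist, False)

# Toy mission: the operator selects once; the drone then only tracks.
objects_in_view = [{"name": "vehicle A", "x": 10.0, "y": 4.0},
                   {"name": "vehicle B", "x": -3.0, "y": 7.0}]
locked = human_selects_target(objects_in_view)
position = {"x": 0.0, "y": 0.0}
reached = False
while not reached:
    position["x"], position["y"], reached = autonomous_tracking(locked, position)
print("Drone reached the operator-selected target:", locked["name"])
```

An autonomous weapon in the Defense Department's sense would replace the human selection step with an onboard algorithm that chooses and engages targets on its own; that is the line the Orbiter 1K and similar fire-and-forget systems do not cross.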
It is possible that the KUB-BLA received some kind of lethal AI-targeting upgrade prior to being used in Ukraine, but that is doubtful. Neither WIRED nor anyone else has provided evidence that this is the case. If the weapon did have those capabilities, it is unlikely that Kalashnikov would fail to mention them. The company has bragged about KUB-BLA’s recent use in Ukraine, and in the past Kalashnikov has openly talked about seeking to develop a “fully automated combat module” based on AI deep neural network technology.
In sum, there is little reason to believe that Russia is using AI-enabled autonomous weapons in Ukraine, yet. That is the good news. The bad news is that, if Russia’s unlawful war in Ukraine drags on, Russia has the intent and likely has the means to deploy autonomous weapons, with or without advanced AI.
Regarding means, a recent report by the Russian news outlet RIA Novosti quotes an unnamed Russian military source whose remarks are worth reproducing (via Microsoft's automatic translation) at length:
Russian reconnaissance and reconnaissance-strike UAVs will receive a digital catalog with electronic [optical and infrared] images of military equipment adopted in NATO countries. This will allow them to automatically identify it on the battlefield and create a map of the location of enemy positions directly onboard the device, which will be broadcast to the command post. . . . It is formed due to neural network training algorithms, which makes it possible to accurately determine the samples of equipment in a wide variety of environmental conditions, including with a short exposure (the technique is visible for several seconds or less), as well as when only part of the sample falls into the field of view of the drone—when, for example, only part of any combat vehicle is visible from cover.
As mentioned above, collecting adequate training data remains a significant hurdle for many military AI development projects. While the invasion of Ukraine has been a disaster in many ways for the Russian military, NATO has provided weapons and equipment to Ukraine that offer Russia its best opportunity yet to collect operational training data for new AI models and more diverse military AI applications. The anonymous quote suggests that Russia's military is taking this opportunity seriously.
Of course, domestic opposition to the war has caused an exodus of tech workers from Russia, and the sanctions levied against Russia have left it with major shortages of the semiconductor chips needed to make advanced AI systems. These are major challenges, but advanced AI is not required to endow weapons with lethal autonomous capabilities; all that is required is a willingness to delegate decisions and freedom of action to military machines. The Israeli-built Harpy autonomous weapon, which can loiter in the air over a battlefield for hours while searching for enemy radar emissions to attack, dates back to the late 1980s. In addition to the KUB-BLA, Kalashnikov makes another drone called the Lancet, which the aforementioned Rostec executive describes as a Russian analogue to the more modern Harpy-2 (aka Harop). Kalashnikov claims the following about the Lancet:
[The Lancet] is a smart multipurpose weapon, capable of autonomously finding and hitting a target. The weapon system consists of precision strike component, reconnaissance, navigation and communications modules. It creates its own navigation field and does not require ground or sea-based infrastructure.
Kalashnikov is clearly marketing the Lancet as a capable autonomous weapon but also one that can be remotely controlled depending on user preference. The Russian military has already used the Lancet for combat operations in Syria in its remotely controlled mode, but observers have not yet confirmed that the Lancet is being used in Ukraine. Once it shows up, Russia will likely be tempted to turn on Lancet’s autonomous weapon functionality—that is, if the system’s performance matches Kalashnikov’s advertising.
Remotely piloted drones have demonstrated an effectiveness in the war in Ukraine that exceeded most analysts' prewar expectations, when drones were often viewed as useful in counterinsurgency operations but not in a high-end conflict against a technologically sophisticated adversary like Russia. In military technology competition, however, each successful move leads to a countermove. Many analysts see the weak point of current drone weapons as their reliance on a high-bandwidth communications link to their human remote controllers. If the war in Ukraine drags on for many more months or years, expect both sides to more widely deploy jammers and other electronic warfare systems to counter drones. Elimination or reduction of remote piloting options will naturally lead Russia to seek greater autonomy in its drone weapons.
Regarding intent, Russia has consistently played the role of stubborn obstructionist through years of UN expert discussions about developing new international norms or codes of conduct for the development and use of autonomous weapons. Most countries around the world are being cautious with the introduction of military AI. Much of the AI ethics and autonomous weapons debate has focused on whether using AI increases the risk of technical accidents and unintentional harm to civilians. But Russia's unprovoked war in Ukraine is a tragic reminder that, while unintentional harm to civilians is a real problem, there is also the unsolved problem of intentional harm to civilians. The Russian military has routinely attacked not only residential neighborhoods but also hospitals and humanitarian organizations.
Finally, Russia’s human soldiers in Ukraine have suffered heavy losses and reportedly deserted in large numbers. Faced with such frustrations, there’s little reason to doubt Russian president Vladimir Putin would use lethal autonomous weapons if he thought it would provide a military edge.
The United States and its allies need to start thinking about how to ensure that he does not.