Alex Barker
Introduction
The People’s Republic of China (PRC) has identified artificial intelligence (AI) as a developmental priority for both its economy and national security. The State Council’s National AI Development Plan, released in July 2017, calls AI “the new focus of world competition,” and the 14th Five-Year Plan, adopted in March 2021, promotes the “deep integration of internet, big data and artificial intelligence in industries” (State Council, July 8, 2017; Xinhua, November 3, 2020). China’s emphasis on AI can be considered a “whole of government” approach, which has important ramifications for the People’s Liberation Army (PLA). Specifically, China’s “military-civil fusion” (军民融合, jun min ronghe) strategy is intended to facilitate transfers of technology and expertise between the commercial and military sectors, including in the field of AI.
This article examines writings by PLA-affiliated authors and private sector researchers leveraging open-source research—much of which is developed in the U.S.—to improve China’s automatic military target recognition capabilities. Sources were drawn from the Chinese National Knowledge Infrastructure (CNKI) and ultimately focused on 16 research papers. Where possible, articles were chosen based on their number of citations, though the slow pace of the academic publishing cycle means that many valuable recent articles have yet to be cited.
The PLA’s Interest in AI
Since the early 2000s, PLA doctrine has focused on enabling “informationized” (信息化, xinxi hua) warfare, a model of network-centric operations derived from the U.S. military. Over the past five years, PLA writings have increasingly described military AI, or “intelligentization” (智能化, zhineng hua), as “the development and inheritance of military informationization” and the likely form of future warfare.[1] In his 2017 report to the 19th Party Congress, Xi Jinping called on the military to “prepare for war” by, among other things, “accelerating the process of military intelligentization” and “improving the ability for joint operations based on internet and information systems.”[2]
One of the most active fields of PLA AI research is applying machine learning to computer vision—teaching computers how to interpret the visual world (CSET, March 2021). Machine learning uses advanced pattern recognition in which an algorithm draws inferences from large datasets, improving its ability to do so with increased exposure to data. Deep learning (深度学习, shendu xuexi) is a form of machine learning in which a program uses layered algorithms, called a neural network, to learn how to draw conclusions from datasets without relying on hand-engineered features. For image recognition, software engineers train such programs on a set of labeled sample images that include the target object.
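To make this concrete, the sketch below shows what supervised training of a small convolutional network on labeled sample images looks like in practice. It is a minimal, generic illustration in Python/PyTorch; the placeholder images, labels, and network layout are illustrative assumptions and are not drawn from any of the PLA papers discussed here.

```python
# Minimal sketch of supervised deep learning for image recognition.
# Illustrative only: the two-class labels and the random tensors standing in
# for sample images are placeholders, not data from any cited paper.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """A toy convolutional network: stacked conv layers learn visual
    features directly from pixels, replacing hand-engineered features."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder "sample images": 64x64 RGB tensors with 0/1 labels.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

for step in range(3):  # a few illustrative training steps
    logits = model(images)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss={loss.item():.3f}")
```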
According to a rough count of publicly available Chinese journal publications, papers on deep learning for military image recognition, usually called “military target recognition” (军事目标识别, junshi mubiao shibie), increased by an average of 20 percent from 2016 to 2019.[3] Although many of these papers assess the current state of research and do not themselves add substantively to the field, their volume reflects the high level of attention the field receives in PLA circles.
PLA officers and academic researchers alike are bullish on deep learning’s value for military applications. According to researchers at the PLA Army Academy of Armored Forces, deep learning offers advantages over traditional machine learning because it avoids the need for manual extraction of target features in the training dataset, an onerous chore when accurate programs demand tens of thousands of image samples.[4]
Many PLA researchers explicitly state that their research goal is to aid the development of intelligent precision guided munitions (PGMs). Authors repeatedly stress the need to “install ‘eyes’ and a ‘brain’ in weapons,” or “to give weapons a human-like ability to recognize military targets.”[5] In addition to developing guidance systems, some researchers also state that computer vision could be useful in processing satellite or other reconnaissance imagery. The PLA believes it is not alone in weaponizing AI. Researchers frequently compare their own results to perceived U.S. progress in AI. One highly cited paper from an author at a state-owned enterprise in the technology sector argues that the U.S. “sees advanced missiles and AI as key to dealing with anti-access, area-denial threats.”[6]
Does AI Image Recognition Work?
Image recognition for military applications requires a higher degree of accuracy in target detection than most civilian image processing applications, but early research is promising.[7] While the deep learning tests described in the literature can correctly identify military equipment more than 85 percent of the time, PLA authors stress the need for further development to improve accuracy.
PLA research on deep learning applications for image recognition broadly falls into two categories: object detection and classification. Object detection algorithms can find objects that exist in an image but cannot properly classify them without the aid of another trained neural network. PRC literature emphasizes several open-source object detection platforms, all of which were originally developed for civilian applications. In the last two years, Single Shot Detector 300 (SSD300) and “You Only Look Once” (YOLO) have become commonly cited algorithms. Both can detect bounding boxes for multiple objects within a single image frame and can process at high framerates, which is necessary for real-time applications (Jonathan Hui, March 13, 2018).[8] In PLA tests, these algorithms can detect military objects with greater than 80 percent accuracy, though the accuracy for small targets or “dense” targets—objects clustered closely together—is lower. Older PLA research also mentions other algorithms, including variations on Convolutional Neural Networks (CNNs) such as Faster R-CNN or R-FCN. Many of these other algorithms suffer from an inability to process images in real time but could be of value for tasks where a high framerate is not a requirement (towardsdatascience.com, August 3, 2018).
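For readers unfamiliar with these tools, the sketch below shows how such pretrained open-source detectors are typically invoked, using the reference implementations of SSD300 and Faster R-CNN in the torchvision library (assuming a recent version, roughly 0.13 or later) as stand-ins; the cited papers do not specify which codebases their authors actually used.

```python
# Sketch of invoking pretrained open-source detectors of the kind cited in
# the PLA literature (SSD300, Faster R-CNN). torchvision's reference
# implementations are used here as stand-ins.
import torch
from torchvision.models import detection

# Pretrained on the civilian COCO dataset; military use would require
# re-training on domain-specific imagery (see the sampling discussion below).
ssd = detection.ssd300_vgg16(weights="DEFAULT").eval()
frcnn = detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = torch.rand(3, 300, 300)  # placeholder for a single video frame

with torch.no_grad():
    for name, model in [("SSD300", ssd), ("Faster R-CNN", frcnn)]:
        (result,) = model([image])     # dict of boxes, labels, scores
        keep = result["scores"] > 0.5  # discard low-confidence detections
        print(name, result["boxes"][keep].shape[0], "objects above threshold")
```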
PLA researchers must design their own sets of training data to develop object classifiers and correctly categorize objects like tanks and fighter aircraft in both live video and still imagery. To do this, they use a variety of algorithms including CNNs and supervised learning processes such as Support Vector Machines (SVMs). One paper attempting to develop image processing for cruise missile ground target detection restricts the training database to aerial views of military vehicles.[9] Another research group attempting to parse “long range reconnaissance data” uses satellite and aerial imagery to train a neural network to correctly identify U.S. military aircraft in parked positions on runways alongside civilian aircraft, achieving an accuracy of roughly 92 percent.[10] A not-insignificant body of research focuses entirely on detecting naval targets, a field described as being an excellent fit for image recognition studies due to the challenges associated with infrared and other sensors in maritime conditions.[11]
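As a minimal illustration of the classification step, the sketch below applies a Support Vector Machine to synthetic feature vectors. In the cited research the features would instead be extracted from aerial or satellite imagery (for example, by a CNN backbone), and the two classes shown here are hypothetical.

```python
# Minimal sketch of the classification step using a Support Vector Machine,
# one of the supervised methods mentioned above. The feature vectors are
# synthetic placeholders, not data from any cited paper.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
num_samples, feature_dim = 200, 128

# Two hypothetical classes, e.g. "tank" vs. "civilian vehicle".
features = rng.normal(size=(num_samples, feature_dim))
labels = rng.integers(0, 2, size=num_samples)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```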
Finally, some PLA researchers are attempting to classify objects by “threat.” A dissertation from 2020 uses a CNN to sort image objects by type, distance, mobility and “attack” factors, creating a threat categorization that weights certain types of military platforms over others depending on situational guidelines. This algorithm correctly identified threats with an average accuracy in the low 90-percent range.[12]
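The short sketch below illustrates the general idea of weighting targets by such factors. The platform categories, weights, and thresholds are invented for clarity and do not reproduce the dissertation’s actual model, which relies on a CNN rather than a hand-written scoring rule.

```python
# Hypothetical illustration of threat weighting across the factors the
# dissertation describes (type, distance, mobility, "attack" capability).
# All categories and weights below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Detection:
    platform: str        # e.g. "tank", "truck", "SAM launcher"
    distance_km: float
    mobile: bool
    armed: bool

# Situational weights: platform types of greater concern score higher.
PLATFORM_WEIGHTS = {"SAM launcher": 1.0, "tank": 0.8, "truck": 0.3}

def threat_score(d: Detection) -> float:
    score = PLATFORM_WEIGHTS.get(d.platform, 0.1)
    score += 0.5 if d.armed else 0.0
    score += 0.2 if d.mobile else 0.0
    score += max(0.0, 0.3 - 0.01 * d.distance_km)  # closer targets weigh more
    return score

targets = [
    Detection("tank", distance_km=5, mobile=True, armed=True),
    Detection("truck", distance_km=2, mobile=True, armed=False),
]
for t in sorted(targets, key=threat_score, reverse=True):
    print(f"{t.platform}: {threat_score(t):.2f}")
```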
Challenges for Future Development
Throughout the literature, authors are quick to moderate expectations, often noting that combat conditions introduce challenges that may not be reflected in their tests. Deep learning is still considered an “immature” technology by many researchers, though one with strong potential for further development. Research in the field is expensive, however, due to the high cost of acquiring the graphics processing units (GPUs) necessary for computer vision computation.
Another issue is the PLA’s reliance on outside sources for image recognition algorithms. Both SSD300 and YOLO are open source, as are most CNN algorithms referenced by researchers. A widely used SSD300 codebase is developed and maintained by the NVIDIA Corporation, while YOLO was originally developed by Joseph Redmon, then a computer science graduate student at the University of Washington in Seattle.
At least some authors note that they have difficulty acquiring enough samples to develop satisfactory models for target recognition, especially when it comes to image classification. Analysts at the PLA’s Naval Research Academy note that sample quality is also a problem, given that deep learning models must be robust enough to function under “uncertain information conditions.”[13] Images of military equipment, vehicles, or personnel, especially in realistic settings, are far less available on the internet than images of comparable civilian objects. Many CNNs already have difficulty recognizing small or long-range targets, a problem that is exacerbated by the scarcity of these types of military images.[14]
Building a database of sample images is particularly challenging for “supervised learning”—in which developers must label the object of interest in every sample for the algorithm—the approach taken in most of the research surveyed here. An MA student in Military Engineering using YOLO v3 notes that a military internal training database of images does not provide “enough quantity and types of targets,” necessitating supplementary images gathered from open sources and painstakingly annotated. The same author suggests that additional work be carried out on unsupervised learning for target recognition, underscoring the PLA’s strong interest in deep learning techniques that reduce the labeling burden.[15]
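One standard mitigation for small labeled datasets—a generic technique, not one attributed to the papers cited here—is to generate additional training views of each annotated image through random transformations, as in the brief sketch below.

```python
# Generic data-augmentation sketch: a common response to the sample shortage
# described above is to create extra training views of each annotated image
# via random transformations. This is a standard technique, not a method
# attributed to the PLA papers cited in this article.
from PIL import Image
import torchvision.transforms as T

augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomRotation(degrees=10),
    T.ColorJitter(brightness=0.2, contrast=0.2),
    T.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
    T.ToTensor(),
])

# Placeholder image; in practice this would be an annotated sample.
sample = Image.new("RGB", (256, 256))
extra_views = [augment(sample) for _ in range(5)]
print(len(extra_views), "augmented views of one labeled sample")
```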
Conclusion
The PLA sees significant military applications for AI and is actively developing “intelligentized” weapons designed to detect and attack U.S. aircraft, ships, and armored vehicles. While this article focuses on the PLA’s development of computer vision, Chinese researchers are also employing deep learning in adjacent fields such as in-flight course correction for cruise missiles.
The PLA’s progress in adapting deep learning for image recognition is an example of the need for U.S. policymakers and developers alike to consider how open-source technology can be utilized by opponents. Indeed, development of YOLO was partially funded by the Office of Naval Research, a detail that may have drawn the attention of Chinese researchers interested in emulating U.S. advancements in AI. Reliance upon open architecture is a fundamental tenet of the American innovation ecosystem and should not be restricted simply to compete with the PRC. Rather, the defense community should see this openness as an advantage. Knowledge of which algorithms the PLA will rely upon allows the United States to identify possible weaknesses, exploit opportunities for data poisoning, and train against these algorithms in exercises.
Moreover, the challenges reported in gathering sufficient samples for image classification underscore the need to consider how imagery of military systems in the public domain will assist the PLA. While the PLA will undoubtedly train neural networks on its own classified imagery, commercial satellite and UAV imagery will likely assist this effort, whether it is first acquired by the PRC’s military or by its private sector. Algorithms can be made accurate by training on civilian imagery and then turned over to the PLA for further development. A strong case is emerging for restricting the export of such imagery in the same way as critical dual-use technologies.
The state of publicly accessible literature suggests that significant obstacles remain before the PLA will be capable of using AI as a guidance system for PGMs. But the direction of research is disturbing: tests already indicate a degree of accuracy high enough to aid imagery analysts and to ensure continued PLA attempts at weaponization. An emerging near-term threat is the PLA’s potential to field “fire and forget” PGMs that can independently assess targets, especially if linked with the ability to loiter until targets become available. This type of stand-off munition would enhance the PLA’s already formidable combat capabilities in the Western Pacific.
Finally, this study of open-source literature suggests that the conversations surrounding AI’s use in the military are far different in China than in the United States. The PLA appears to adopt an approach of rapidly “failing forward” in attempts to exploit AI regardless of its maturity, in part motivated by a perception that the U.S. military still retains a large technological lead over it. Even if Americans are unwilling to use AI in weapons, policymakers and engineers should not expect that the same legal and ethical limitations on AI that apply in the U.S. will guide its development in the PRC.