MEGHA PARDHI
China recently submitted a position paper on regulating the military applications of artificial intelligence to the sixth review conference of the United Nations Convention on Certain Conventional Weapons (CCW).
The takeaway from this position paper is that countries should debate, discuss, and perhaps eschew the weaponization of AI. By initiating a discussion on regulating military applications of AI, Beijing wants to project itself as a responsible international player.
This proposal is Beijing’s formal acknowledgment of AI as a technology capable of transforming the international security paradigm. Many countries, including the US and China, are trying to leverage the advantages of AI in military applications. According to some reports, China might even be ahead of the US in integrating AI applications for military purposes.
Why propose regulations?
China’s position paper contains standard jargon such as “ethics,” “governance,” “world peace and development,” “multilateralism,” and “openness and inclusiveness.” These keywords are deployed to portray China as a responsible country.
Additionally, for technological security, the position paper emphasizes the centrality of human intervention and data security, along with a restriction on the military use of AI data.
However, for a dual-use technology like AI, a clear distinction between the civilian and military applications of data might be difficult to draw. For example, civilian data can be used to train an AI model, and that trained model can then be used for military purposes.
The move to propose regulations on the weaponization of AI could also mean the People’s Liberation Army has achieved a desired level of sophistication in AI, although that is unlikely. Or it could mean the PLA plans to achieve a certain level of sophistication by the time discussions on AI reach a consensus.
More likely, Beijing is talking about regulation out of fear that it cannot catch up with others, or because it lacks confidence in its own capabilities. In any case, formulating a few commonly agreeable rules on the weaponization of AI would be prudent.
Beijing knows that even if the debate on the weaponization of AI begins now, it will take significant time to produce regulations, since countries’ positions will differ.
China’s position paper also contains many caveats that make it difficult to formulate effective regulations. To begin with, AI is not a single technology. The term “artificial intelligence” is used collectively for a range of specialized applications and algorithms in which a machine performs tasks that typically require human intelligence.
Many such applications overlap with civilian ones. Your customer-care chatbot is an AI, and the algorithm that recommends which movie you should watch is also an AI.
The line between strictly military applications and civilian ones is also blurred. For example, the face-detection technology used at security checkpoints could also be used to eliminate key leaders during a war.
Hence blanket regulation on AI is not possible, and the caveats in China’s proposal might in effect render it meaningless.
By initiating discussion on military applications of AI, China has taken the lead in shaping the debate on the security implications of this critical technology. The attempt reflects Beijing’s view of itself as an important norm-setting actor.
However, AI is central to the PLA’s vision of future warfare, and hence any talk of regulation by Beijing should be taken with a grain of salt. For strategic security, China calls on major countries to be “prudent and responsible” in developing and applying AI in the military, and to refrain from seeking absolute military advantage.
PLA leaders place AI applications at the core of intelligentized warfare. For example, General Lin Jianchao, former director of the PLA’s General Staff Department Office, highlighted in 2016 that AI could have revolutionary implications for the military.
AI continues to be central in the literature surrounding military modernization and the future of warfare. Chinese scholars and PLA officers consider AI a starting point to build strategic offensive and defensive capabilities necessary to win future wars.
Terminology of nuclear weapons
Two terms used in the proposal echo the language of nuclear weapons. The first is “proliferation.”
China’s arms-control ambassador to the United Nations, Li Song, said the oversight proposed by Beijing is necessary to cut the risk of military AI proliferation. The term “proliferation” is used mostly in connection with nuclear weapons and other weapons of mass destruction; applying it to AI suggests that China views AI as a weapon of mass destruction. If the discussion proceeds on those terms, a few countries could come to dictate global norms on the weaponization of AI, just as they do for nuclear weapons.
The second points to a “no first use” policy. Beijing urged in the position paper that countries remember that military applications of AI should never be used as tools to start a war or pursue hegemony. These words echo the “no first use” (NFU) doctrine.
Most current applications of AI are in decision-making, battlefield simulations, improving precision, reducing reaction time, and so on. These applications cannot typically start a war. However, the possibility of miscalculation or a misfire from a fully autonomous system can never be ruled out.
China’s 2018 position paper at the CCW attempted to define the application of AI in lethal autonomous weapons systems (LAWS). In some such cases the NFU doctrine might be applicable: a LAWS might evolve through interaction with its environment, learning autonomously and expanding its functions and capabilities beyond human expectations.
Still, the nuances of defining the use of AI in military applications are very complex. Hence the promise of NFU might not be credible.