GIACOMO PERSI PAOLI & YASMIN AFINA
Until January 2023, multilateral discussions on AI in the military domain were confined to the remit of the Group of Governmental Experts (GGE) of the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) on emerging technologies in the area of lethal autonomous weapons systems (LAWS). In this context, AI has been discussed as a technology that could enable advanced levels of autonomy in weapons systems. These technological advances bring to the fore a host of legal and policy challenges, both pre-existing and novel, including compliance with international humanitarian law and international human rights law, ethical considerations, and wider policy questions.
A key instrument of international humanitarian law, the CCW was designed to ban or restrict the use of specific types of weapons that may be deemed excessively injurious or to have indiscriminate effects.4 As such, most of the discussions related to AI occurring within the general framework of the CCW, and the specific context of the GGE on LAWS, have focused on the use of such systems in military targeting, with an emphasis on legal compliance.
The use of AI as an enabler of more advanced and sophisticated levels of autonomy in weapons systems is certainly an important issue, but it represents only a small portion of the range of possible military applications of this technology.5 The potentially transformative effect of AI on all aspects of society, including national security and defence, has become a mainstream topic of discussion among policymakers and the general public alike, particularly following the public release of ChatGPT in late 2022.