Ingvild Bode
Introduction
AI technologies appear to be proliferating in military decision-making processes around targeting. At first, the incorporation of AI in the military domain was predominantly examined in relation to weapon systems, frequently referred to as autonomous weapon systems (AWS), which can identify, track and attack targets without further human intervention (International Committee of the Red Cross [ICRC] 2021). Militaries worldwide already employ weapon systems, including some loitering munitions, that incorporate AI technologies to facilitate target recognition, generally depending on computer vision techniques (Boulanin and Verbruggen 2017; Bode and Watts 2023). Although usually operated with human approval, loitering munitions appear capable of dynamically applying force without human intervention. Indeed, various reports from Russia’s war in Ukraine indicate that the Ukrainian army uses loitering munitions that release force without human approval in the terminal stage of operation (Hambling 2023, 2024). These developments underline longstanding and growing concerns that the role humans play in use-of-force decision making is diminishing when AI-based systems are involved.
However, weapon systems represent just one of numerous areas of AI application in the military setting. AI technologies are typically considered to enable the effective and rapid analysis of vast quantities of data, making them an appealing choice for a range of military decision-making tasks carrying varying levels of risk, such as logistics, recruitment, intelligence and targeting (Grand-Clément 2023). In the military domain, such systems are commonly referred to as AI-based decision support systems (DSS), which “assist decision-makers situated at different levels in the chain of command to solve semi-structured and unstructured decision tasks” (Susnea 2012, 132–33; Nadibaidze, Bode and Zhang 2024).