Branka Marijan
Introduction
In international discussions on the responsible military use of AI, transparency is frequently emphasized. Transparency is also a central concern across AI ethics principles in civilian contexts (Jobin, Ienca and Vayena 2019). However, the conceptualization of transparency varies considerably. For some governments, transparency entails states disclosing information about the testing, evaluation and functioning of various systems. For others, it means that military AI systems must be sufficiently transparent to the militaries using them, so that commanders understand how the systems operate and can intervene when they produce errors or unpredictable outputs. In this sense, transparency is generally understood as “the understandability and predictability of systems” (Endsley, Bolte and Jones 2003, 146; National Academies of Sciences, Engineering, and Medicine 2022). The challenge is that these varying interpretations will become even more consequential as states begin operationalizing responsible AI principles, which will be central to ensuring the responsible use of AI and autonomous systems by military forces.
In contemporary conflict zones such as Ukraine and Gaza, commitments to ensuring that military commanders understand AI systems are already being tested by the nature of the technology, the use of off-the-shelf technologies and the lack of clear guidelines on the extent to which such understanding is required. There is also a broader lack of disclosure about the types and sophistication of AI-enabled systems being used and how they function. Notably, the AI target generation and decision support systems used by the Israel Defense Forces (IDF) in Gaza have raised concerns as investigative reports publicized their use, prompting further questions about how they function (Abraham 2024; Davies, McKernan and Sabbagh 2023).