Zoe Stanley-Lockman
Since the U.S. Department of Defense adopted its five safe and ethical principles for AI in February 2020, the focus has shifted toward operationalizing them. Notably, implementation efforts led by the Joint Artificial Intelligence Center (JAIC) coalesce around “responsible AI” (RAI) as the framework for DOD, including for collaboration efforts with allies and partners.1
With a DOD RAI Strategy and Implementation Pathway in the making, the first step to leading global RAI in the military domain is understanding how other countries address such issues themselves. This report examines how key U.S. allies perceive AI ethics for defense.
Defense collaboration in AI builds on the broader U.S. strategic consensus that allies and partners offer comparative advantages relative to China and Russia, which often act alone, and that securing AI leadership is critical to maintaining the U.S. strategic position and technological edge. Partnering with other democratic countries therefore has implications for successfully achieving these strategic goals. Yet the military aspects of responsible AI that go beyond debates on autonomous weapons systems are currently under-discussed.
Responsible and ethical military AI between allies is important because policy alignment can improve interoperability in doctrine, procedures, legal frameworks, and technical implementation measures. Agreeing not only on human centricity for militaries adopting technology, but also on the ways that accountability and ethical principles enter into the design, development, deployment, and diffusion of AI helps reinforce strategic democratic advantages. Conversely, ethical gaps between allied militaries could have dangerous consequences that imperil both political cohesion and coalition success. More specifically, if allies do not agree on their responsibilities and risk analyses around military AI, then gaps could emerge in political willingness to share risk in coalition operations and in authorization to operate alongside one another.

Even though the United States is the only country to have adopted ethical principles for defense, key allies are formulating their own frameworks to account for ethical risks along the AI lifecycle. This report explores these various documents, which have thus far been understudied, at least in tandem. Overall, the analysis highlights both convergences in ethical approaches to military AI and burgeoning differences that could turn into political or operational liabilities.
The key takeaways are as follows:
DOD remains the leader in developing an approach to ethical AI for defense. This first-mover position situates the JAIC well to lead international engagements on responsible military AI.
Allies' views on ethical and responsible AI in defense fall on a spectrum from articulated (France, Australia) to emerging (the United Kingdom, Canada) to nascent (Germany, the Netherlands). These are flexible categories that reflect the availability of public documents.
Multilateral institutions also influence how countries perceive and implement AI ethics in defense. NATO and JAIC’s AI Partnership for Defense (PfD) are important venues pursuing responsible military AI agendas, while the European Union and Five Eyes have relevant, but relatively less defined, roles.
Areas of convergence among allies’ views of ethics in military AI include the need to comply with existing ethical and legal frameworks, maintain human centricity, identify ethical risks in the design phase, and implement technical measures over the course of the AI lifecycle to mitigate that risk.
There are fewer areas of divergence, which primarily pertain to the ways that allies import select civilian components of AI accountability and trust into their defense frameworks. These should be tracked to ensure they do not imperil future political cohesion and coalition success.
Pathways for leveraging shared views, and for minimizing the risk that divergences cause problems, include using multilateral formats to align positions on ethics, safety, security, and norms.
In analyzing allies’ approaches to responsible military AI, this issue brief identifies opportunities where DOD can encourage coherence by helping allied ministries formulate their views, and simultaneously learn from other approaches to responsible military AI as part of its own RAI implementation efforts.