2 April 2023

National Security AI and the Hurdles to International Regulation

Ashley Deeks

States are increasingly turning to artificial intelligence systems to enhance their national security decision-making. The real risk that states will deploy unlawful or unreliable national security AI (NSAI) makes international regulation seem appealing, but approaches built on nuclear analogies are deeply flawed. Instead, as I argue in this paper, regulation of NSAI is more likely to follow the path of hostile cyber operations (HCOs).

Efforts to develop new cyber norms teach us that reaching global agreement about which types and uses of NSAI are acceptable will be very difficult absent an international crisis. Modest transnational work can still be done in other ways, including in discussions among close allies. However, much of the work of establishing norms for the use of NSAI will, at least in the near term, take place domestically. In fact, for both HCOs and NSAI, there is likely to be a reduced emphasis on securing binding agreement about legal norms; instead, small groups of like-minded states will simply focus on developing their tools in a way that comports with their own values, while using levers such as espionage, covert action, sanctions, and criminal prosecution to slow and contest their adversaries' perceived misuse of those tools.
