Sydney J. Freedberg Jr.
A landmark National Security Memorandum recently signed by President Joe Biden requires human oversight, safety testing and other safeguards for many military and intelligence applications of artificial intelligence. The memo also launches a sweeping review of how the Pentagon and intelligence agencies acquire AI, with recommendations for regulatory changes and other reforms due back next year.
However, neither the memo itself nor the accompanying Risk Management Framework [PDF] imposes significant new restrictions on AI-controlled drones, munitions and other “autonomous weapons,” the chief concern of many arms control activists around the world. Instead, the RMF largely defers on that issue to existing Pentagon policy, DoD Directive 3000.09 [PDF], which was extensively revised last year to restrict, but not prohibit, autonomous weapons (some of which already exist in the form of computer-controlled anti-aircraft and missile defenses). The new policy documents, by contrast, focus on AI used to analyze information and make decisions — including about the use of lethal force.
That said, the memo does mention “a classified annex” that “addresses additional sensitive national security issues, including countering adversary use of AI that poses risks to United States national security.” The published documents do not specify what kind of “adversary use” the annex covers, or what other “sensitive” issues it might address.