Li Ang Zhang, Gavin S. Hartnett, Jair Aguirre
A large body of academic literature describes myriad attack vectors and suggests that most of the U.S. Department of Defense's (DoD's) artificial intelligence (AI) systems are in constant peril. However, RAND researchers investigated adversarial attacks designed to hide objects from detection algorithms (causing algorithmic false negatives) and found that many such attacks are operationally infeasible to design and deploy because of high knowledge requirements and impractical attack vectors. As the researchers discuss in this report, tried-and-true nonadversarial techniques can be less expensive, more practical, and often more effective. Thus, adversarial attacks against AI pose less risk to DoD applications than academic research currently implies. Nevertheless, well-designed AI systems, together with sound mitigation strategies, can further reduce the risk of such attacks.