2 August 2024

NIST releases new tool to check AI models’ security

Anirban Ghoshal

The US Department of Commerce's National Institute of Standards and Technology (NIST) has released a new open source software package, Dioptra, that lets developers test which types of adversarial attacks would degrade an AI model's performance.

“Testing the effects of adversarial attacks on machine learning models is one of the goals of Dioptra, a new software package aimed at helping AI developers and customers determine how well their AI software stands up to a variety of adversarial attacks,” NIST said in a statement.

The software package, available as a free download, can also help AI developers quantify how much a model's performance drops under attack, so they can learn how often and under what circumstances the system would fail, NIST explained.
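To illustrate the kind of measurement described above, the sketch below computes a toy model's accuracy before and after an adversarial perturbation and reports the drop. The model (a fixed linear classifier) and the attack (an FGSM-style perturbation) are illustrative stand-ins chosen for this example, not Dioptra's actual API or workflow.

```python
# Illustrative sketch: quantify the accuracy drop a model suffers under an
# adversarial attack. Toy linear classifier + FGSM-style perturbation;
# this is NOT Dioptra's API, just the general idea of a robustness metric.
import numpy as np

rng = np.random.default_rng(0)
w = np.array([1.5, -2.0, 0.5])  # weights of a toy linear classifier

# Synthetic inputs labeled by the model itself, so clean accuracy is 100%.
X = rng.normal(size=(1000, 3))
y = np.where(X @ w >= 0, 1, -1)

def accuracy(X, y):
    preds = np.where(X @ w >= 0, 1, -1)
    return float(np.mean(preds == y))

# FGSM-style attack on a linear model: nudge each input against its label
# along the sign of the score gradient (for a linear model, sign(w)).
eps = 0.5
X_adv = X - eps * y[:, None] * np.sign(w)

clean_acc = accuracy(X, y)
adv_acc = accuracy(X_adv, y)
print(f"clean accuracy:       {clean_acc:.3f}")
print(f"adversarial accuracy: {adv_acc:.3f}")
print(f"performance drop:     {clean_acc - adv_acc:.3f}")
```

In practice, a tool like Dioptra runs many such attack configurations against a real model and records the resulting degradation, which is what lets developers see under what circumstances the system fails.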

The release of Dioptra is linked to President Biden's executive order on AI, signed in 2023, which directed NIST to help with model testing.

Along with the new software package, NIST has also released several documents promoting AI safety and standards in line with the executive order.
