29 March 2025

Crosswalk Analysis for Artificial Intelligence Frameworks

Heather West, Alice Hubbard & Samara Friedman

Introduction

Artificial Intelligence (AI) has become a priority for countries and organizations across the world, including ensuring its safety and security. Standards, frameworks, and guidelines for AI safety and security have evolved rapidly within standards organizations, offering guidance and best practices for evaluating the risks of developing and using different AI models and systems. These frameworks owe their maturity, in part, to their grounding in software risk management frameworks from the cybersecurity space. Some frameworks are macro-level, focusing on governance, policy, and frontier AI risks. Others are much more granular, micro-level frameworks that emphasize implementing AI governance within specific organizations through technical standards, process controls, and risk management practices. The two are complementary: the first sets the vision, and the second enables practical action. But with different organizations working on both macro- and micro-level frameworks, alignment between them is needed. This paper crosswalks several of the existing frameworks and makes recommendations on how future ones should align.

In this analysis, we review six AI security and risk management frameworks that approach the safety and security of AI in different ways:
1. The Bletchley Declaration 
2. The White House Voluntary Commitments and Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence