21 November 2021

Managing the Cybersecurity Vulnerabilities of Artificial Intelligence

Jim Dempsey

Last week, Andy Grotto and I published a new working paper on policy responses to the risk that artificial intelligence (AI) systems, especially those dependent on machine learning (ML), can be vulnerable to intentional attack. As the National Security Commission on Artificial Intelligence found, “While we are on the front edge of this phenomenon, commercial firms and researchers have documented attacks that involve evasion, data poisoning, model replication, and exploiting traditional software flaws to deceive, manipulate, compromise, and render AI systems ineffective.”

The demonstrations of vulnerability are remarkable: In the speech recognition domain, research has shown it is possible to generate audio that sounds like speech to ML algorithms but not to humans. There are multiple examples of tricking image recognition systems into misidentifying objects using perturbations that are imperceptible to humans, including in safety-critical contexts (such as road signs). One team of researchers fooled three different deep neural networks by changing just one pixel per image. Attacks can succeed even when an adversary has no access to either the model or the data used to train it. Perhaps scariest of all: An exploit developed against one AI model may work across multiple models.
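To give a flavor of how such evasion attacks work, here is a minimal sketch of a gradient-based perturbation (the fast gradient sign method), written in Python and assuming PyTorch. The toy classifier, the epsilon value, and the random stand-in image are illustrative assumptions, not the systems or experiments described in the research above.

```python
# Minimal sketch of a gradient-based evasion attack (FGSM-style), assuming PyTorch.
# The tiny classifier and random "image" are stand-ins for illustration only.
import torch
import torch.nn as nn

# Toy image classifier standing in for a real model (e.g., a road-sign recognizer).
# Any differentiable model is attacked the same way.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()
loss_fn = nn.CrossEntropyLoss()

def fgsm_perturb(image, true_label, epsilon=0.01):
    """Return a copy of `image` nudged in the direction that increases the model's loss.

    epsilon bounds the per-pixel change, keeping the perturbation visually
    imperceptible while often being enough to flip the model's prediction.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), true_label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

x = torch.rand(1, 3, 32, 32)   # stand-in for a real photo
y = torch.tensor([3])          # its correct class index
x_adv = fgsm_perturb(x, y)
print(model(x).argmax().item(), model(x_adv).argmax().item())  # predictions may differ
```

The point of the sketch is simply that the attacker needs nothing more than gradients and a tiny perturbation budget; no code is "broken" in the traditional sense, yet the system's output can be steered.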

As AI becomes woven into commercial and governmental functions, the consequences of the technology’s fragility are momentous. As Lt. Gen. Mary O’Brien, the Air Force’s deputy chief of staff for intelligence, surveillance, reconnaissance and cyber effects operations, said recently, “if our adversary injects uncertainty into any part of that [AI-based] process, we’re kind of dead in the water on what we wanted the AI to do for us.”

Research is underway to develop more robust AI systems, but there is no silver bullet. The effort to build more resilient AI-based systems involves many strategies, both technological and political, and may require deciding not to deploy AI at all in a highly risky context.

In assembling a toolkit to deal with AI vulnerabilities, insights and approaches may be derived from the field of cybersecurity. Indeed, vulnerabilities in AI-enabled information systems are, in key ways, a subset of cyber vulnerabilities. After all, AI models are software programs.

Consequently, policies and programs to improve cybersecurity should expressly address the unique vulnerabilities of AI-based systems; policies and structures for AI governance should expressly include a cybersecurity component.

As a start, the set of cybersecurity practices related to vulnerability disclosure and management can contribute to AI security. Vulnerability disclosure refers to the techniques and policies under which researchers (including independent security researchers) discover cybersecurity vulnerabilities in products and report them to the product developers or vendors, and under which those developers or vendors receive and act on such reports. Disclosure is the first step in vulnerability management: a process of prioritized analysis, verification, and remediation or mitigation.
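To make the disclosure-to-remediation pipeline concrete, here is a minimal sketch, in Python, of what a report intake and triage record might look like if AI-specific weakness classes were handled alongside traditional software flaws. Every field, class name, and category here is a hypothetical illustration, not an existing standard or any particular vendor's program.

```python
# Hypothetical sketch of a vulnerability-report intake record, showing how an
# ordinary disclosure/triage pipeline could carry AI-specific weakness classes
# (evasion, data poisoning, model replication) alongside traditional flaws.
# All names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class WeaknessClass(Enum):
    TRADITIONAL_SOFTWARE_FLAW = "traditional software flaw"
    EVASION = "evasion (adversarial input)"
    DATA_POISONING = "data poisoning"
    MODEL_REPLICATION = "model replication / extraction"

@dataclass
class VulnerabilityReport:
    reporter: str                      # e.g., an independent security researcher
    product: str                       # affected system, component, or model
    weakness: WeaknessClass
    description: str
    reported_on: date = field(default_factory=date.today)
    verified: bool = False             # set during verification
    severity: int = 0                  # 0 (unrated) to 10, assigned in analysis
    remediation: str = ""              # patch, retraining, mitigation, etc.

def triage(reports):
    """Prioritized analysis: verified reports first, then by severity."""
    return sorted(reports, key=lambda r: (not r.verified, -r.severity))
```

Nothing about the structure changes when the weakness happens to live in a model rather than in hand-written code, which is precisely the argument for folding AI into the existing process rather than inventing a parallel one.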

While initially controversial, vulnerability disclosure programs are now widespread in the private sector; within the federal government, the Cybersecurity and Infrastructure Security Agency (CISA) has issued a binding directive making them mandatory. In the cybersecurity field at large, there is a vibrant—and at times turbulent—ecosystem of white and gray hat hackers; bug bounty program service providers; responsible disclosure frameworks and initiatives; software and hardware vendors; academic researchers; and government initiatives aimed at vulnerability disclosure and management. AI/ML-based systems should be mainstreamed as part of that ecosystem.

In considering how to fit AI security into vulnerability management and broader cybersecurity policies, programs and initiatives, there is a dilemma: On the one hand, AI vulnerability should already fit within these practices and policies. As Grotto, Gregory Falco and Iliana Maifeld-Carucci argued in comments on the risk management framework for AI drafted by the National Institute of Standards and Technology (NIST), AI issues should not be siloed off into separate policy verticals. AI risks should be seen as extensions of risks associated with non-AI digital technologies unless proven otherwise, and measures to address AI-related challenges should be framed as extensions of work to manage other digital risks.

On the other hand, for too long AI has been treated as falling outside existing legal frameworks. If AI is not specifically called out in vulnerability disclosure and management initiatives and other cybersecurity activities, many may not realize that it is included.

To overcome this dilemma, we argue that AI should be assumed to be encompassed in existing vulnerability disclosure policies and developing cybersecurity measures, but we also recommend, in the short run at least, that existing cybersecurity policies and initiatives be amended or interpreted to specifically encompass the vulnerabilities of AI-based systems and their components. Ultimately, policymakers and IT developers alike will see AI models as another type of software, subject as all software is to vulnerabilities and deserving of co-equal attention in cybersecurity efforts. Until we get there, however, some explicit acknowledgement of AI in cybersecurity policies and initiatives is warranted.

In the urgent federal effort to improve cybersecurity, there are many moving pieces relevant to AI. For example, CISA could state that its binding directive on vulnerability disclosure encompasses AI-based systems. President Biden’s executive order on improving the nation’s cybersecurity directs NIST to develop guidance for the federal government’s software supply chain and specifically says such guidance shall include standards or criteria regarding vulnerability disclosure. That guidance, too, should reference AI, as should the contract language that will be developed under section 4(n) of the executive order for government procurements of software. Likewise, efforts to develop essential elements for a Software Bill of Materials (SBOM), on which the National Telecommunications and Information Administration (NTIA) took the first step in July, should evolve to address AI systems. And the Office of Management and Budget (OMB) should follow through on the December 2020 executive order issued by former President Trump on promoting the use of trustworthy artificial intelligence in the federal government, which required agencies to identify and assess their uses of AI and to supersede, disengage or deactivate any existing applications of AI that are not secure and reliable.

AI is late to the cybersecurity party, but hopefully lost ground can be made up quickly.
