15 September 2024

Four Fallacies of AI Cybersecurity

Chad Heitzenrater

As with many emerging technologies, the cybersecurity of AI systems has largely been treated as an afterthought. This lack of attention, coupled with growing recognition of both the potential and the perils of AI, has opened the door to a variety of AI cybersecurity models, many of which have emerged from outside the cybersecurity community. Absent active engagement with that community, AI practitioners now stand to relearn lessons that software and security engineering spent decades accumulating.

To date, most AI cybersecurity efforts do not reflect the accumulated knowledge and modern practice of cybersecurity; instead, they tend toward concepts that have been shown time and again not to deliver the desired security outcomes. I'll use the term “fallacies” to describe four such categories of thought:

Cybersecurity is linear. The history of cybersecurity is littered with attempts to define standards of action. From the Orange Book to the Common Criteria, pre-2010s security literature was dominated by attempts to define cybersecurity as an ever-growing set of steps intended to counter an ever-growing cyber threat. It never really worked: reducing security to a checklist makes compliance the goal, and setting compliance as a goal breeds complacency and undermines responsibility.
