Zachary Szewczyk
In 2019, Rudy Guyonneau and Arnaud Le Dez captured a common fear in a Cyber Defense Review article titled “Artificial Intelligence in Digital Warfare.” “The question of AI now tends to manifest under the guise of a mythicized omniscience and therefore, of a mythicized omnipotence,” they wrote. “This can lead to paralysis of people fearful of having to fight against some super-enemy endowed with such an intelligence that it would leave us bereft of solutions.” With the release of ChatGPT in 2022, it looked like that fear had come true. In reality, though, AI’s use as an offensive tool has evolved incrementally and has yet to produce this super-enemy. Much of AI’s real value today lies in the defense.
As Microsoft and OpenAI recently explained, threat actors today are using AI in interesting but far from invincible ways. The companies identified five threat groups from four countries using AI. At first, the groups used large language models for research, translation, tool development, and writing phishing emails. Later, Microsoft observed the tools suggesting next steps after a system had been compromised. Although some argue that modern models could take on more, that claim seems premature. In stark contrast to fears that AI would unleash a wave of robot hackers on the world, these actors used it for mundane tasks. Defensive cyber forces, on the other hand, could use AI technology that exists today to meaningfully improve cyber defenses in four key ways: accelerating the pace of analysis, improving warning intelligence, developing training programs more efficiently, and delivering more realistic training scenarios.