Filippa Lentzos, Jez Littlewood, Hailey Wingo & Alberto Muti
“Can chatbots help you build a bioweapon?” a headline in Foreign Policy asked. “ChatGPT could make bioterrorism horrifyingly easy,” a Vox article warned. “A.I. may save us or construct viruses to kill us,” a New York Times opinion piece argued. A glance at many headlines around artificial intelligence (AI) and bioweapons leaves the impression of a technology that is putting sophisticated biological weapons within reach of any malicious actor intent on causing harm with disease.
Like other scientific and technological developments before it, AI is dual use: It has the potential to deliver a range of positive outcomes as well as to support nefarious activity by malign actors. And, as with developments ranging from genetic engineering to gene synthesis technologies, AI in its current configurations is unlikely to result in the worst-case scenario suggested by these and other headlines: an increase in the use of biological weapons in the next few years.
Bioweapons use and bioterrorism have historically been extremely rare. This is not a reason to ignore AI or to be sanguine about the risks it poses, but managing those risks is rarely aided by hype.