14 October 2024

BEYOND BELIEF: THE IMPERATIVE TO DEVELOP EMPOWERED MILITARY AI

Andrew A. Hill & Stephen Gerras 

An empowered military AI (EMAI) that independently makes lethal decisions is scary. Ancient mythology is full of stories of creators being destroyed by their creations, as when the Olympian Gods overthrew the Titans. Killer machines are a mainstay of science fiction. Long before Michael Crichton’s Westworld and James Cameron’s Terminator, Samuel Butler’s 1872 novel Erewhon described an isolated civilization that had banned complex machines out of a fear that technology would someday supplant humankind. Butler quotes one Erewhonian philosopher, “I fear none of the existing machines; what I fear is the extraordinary rapidity with which they are becoming something very different to what they are at present.”

Advanced AI seems to tap into some primal human fears. Risk expert David Ropeik identifies thirteen “fear factors” that make humans more afraid of something, and advanced AI exhibits eight of them: lack of control, trust, and choice; the fact that it is man-made; its uncertainty; its potential for catastrophe; its novelty; and the personal risk it poses to us in potentially taking our jobs (or our lives). These factors make empowered AI particularly frightening, encouraging denial of its possible implications. We want humans to perform better than machines, and we do not want machines to make life-or-death choices; but these are normative arguments, and wishful thinking should not masquerade as technological reality.
