14 February 2025

What Google’s return to defense AI means

PATRICK TUCKER

Google has discarded its self-imposed ban on using AI in weapons, a step that drew both praise and criticism, added a major new entrant to a hot field, and underscored that the Pentagon, not any single company, must act as the primary regulator of how the U.S. military uses AI in combat.

On Tuesday, Google defended its decision to strip its AI-ethics principles of a 2018 prohibition against using AI in ways that might cause harm.

“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” the company’s statement reads.

The move is a long-overdue correction to an overcorrection, one person familiar with the company’s decision-making process told Defense One.

That “overcorrection” was Google’s 2018 decision not to renew its contract to work on the Air Force’s Maven project. At the time, Maven was the Pentagon’s flagship AI effort: a tool that vastly reduced the time needed to find useful intelligence in hours and hours of drone-video footage. Within defense circles, the program wasn’t controversial at all. Military officials who described the program consistently said Maven’s primary purpose was to enable human operators, especially those performing time-sensitive tasks under the enormous cognitive burden of making sense of large volumes of data. Many praised the effort as pointing the way toward other AI-powered decision aids.