
22 October 2024

A new military-industrial complex: How tech bros are hyping AI’s role in war

Paul Lushenko & Keith Carter

Since the emergence of generative artificial intelligence, scholars have speculated about the technology's implications for the character, if not the nature, of war. The promise of AI on battlefields and in war rooms has beguiled them; they characterize the technology as "game-changing," "revolutionary," and "perilous," especially given the potential for a great power war involving the United States and China or Russia. In such a war, where adversaries possess roughly equal military capabilities, scholars claim that AI is the sine qua non of victory. This assessment is predicated on AI's presumed implications for the "sensor-to-shooter" timeline, the interval between acquiring and prosecuting a target. By adopting AI, or so the argument goes, militaries can compress the sensor-to-shooter timeline and maintain lethal overmatch against peer adversaries.

Although understandable, this line of reasoning may mislead military modernization, readiness, and operations. While experts caution that militaries are confronting a "eureka" or "Oppenheimer" moment, harkening back to the development of the atomic bomb during World War II, this characterization distorts both the merits and the limits of AI for warfighting. It encourages policymakers and defense officials to follow what can be called a "primrose path of AI-enabled warfare," which is codified in the US military's "third offset" strategy. This vision of AI-enabled warfare is fueled by sweeping prognostications about, and the over-determination of, emerging capabilities enhanced with some form of AI, rather than rigorous empirical analysis of their implications across all levels of war: tactical, operational, and strategic.
