7 September 2024

The Danger of AI in War: It Doesn’t Care About Self-Preservation

Nishank Motwani

Recent wargames using artificial-intelligence models from OpenAI, Meta and Anthropic revealed a troubling trend: AI models are more likely than humans to escalate conflicts to kinetic, even nuclear, war.

This outcome highlights a fundamental difference in the nature of war between humans and AI. For humans, war is a means of imposing will in order to survive; for AI, the calculus of risk and reward is entirely different because, as the pioneering scientist Geoffrey Hinton noted, ‘we’re biological systems, and these are digital systems.’

Regardless of how much control humans exercise over AI systems, we cannot stop the widening divergence between their behaviour and ours, because AI neural networks are moving towards autonomy and are increasingly hard to explain.

To put it bluntly, whereas human wargames and war itself entail the deliberate use of force to compel an enemy to do our will, AI is not bound by the core human instinct of self-preservation. The human desire for survival opens the door to diplomacy and conflict resolution, but whether and to what extent AI models can be trusted to handle the nuances of negotiation in ways that align with human values is unknown.
