24 August 2024

Artificial intelligence at war

Peter Layton

There’s a global arms race under way to work out how best to use artificial intelligence for military purposes, and the Gaza and Ukraine wars are accelerating it. These conflicts may hold lessons for Australia and others in the region as they prepare for a possible AI-fuelled ‘hyperwar’ closer to home, given that China envisages fighting wars using automated decision-making under the rubric of what it calls ‘intelligentization’.

The Gaza war has shown that the use of AI in tactical targeting can drive military strategy by encouraging decision-making bias. At the start of the conflict, an Israel Defense Forces AI system called Lavender reportedly identified 37,000 people as linked to Hamas. Its function quickly shifted from gathering long-term intelligence to rapidly identifying individual operatives for targeting. Foot soldiers were easier to locate and attack quickly than senior commanders, so they came to dominate the attack schedule.

Lavender created a simplified digital model of the battlefield, enabling dramatically faster targeting and much higher rates of attack than in earlier conflicts. Human analysts did review Lavender’s recommendations before authorising strikes, but they quickly came to trust the system as reliable, often spending only about 20 seconds on a recommendation before approving it.
