23 July 2024

Can China and the US Find Common Ground on Military Use of AI?

Mathew Jie Sheng Yeo and Hyeyoon Jeong

The day when artificial intelligence (AI) makes important military decisions is no longer a distant prospect; it has arrived. In the ongoing Israel-Hamas conflict, the Israeli military has used the AI-based targeting system “Lavender” to deadly effect. Lavender generates kill lists and identifies targets, leaving human operators to approve the resulting strikes.

To be sure, it can be argued that a human remains in the loop with Lavender, but the operator cannot maintain full control. Is the operator fully cognizant of developments on the ground? Has the operator considered all options and possibilities? Was the decision influenced by automation bias? These questions cast doubt on the efficacy of human oversight of a weapon system, much less in combat situations. Indeed, with a mere 20 seconds to approve each strike, Israeli operators often acted as little more than a “rubber stamp,” relying heavily on the AI’s identifications with minimal review.

As Lavender demonstrates, questions such as the degree of machine autonomy humans will tolerate, the risks involved, and the reliability of these systems pose real challenges for a world increasingly embracing AI and other emerging technologies. Such cases underscore the imperative of deploying AI responsibly to prevent catastrophic outcomes and guard against potential risks.
