Last Thursday, the US State Department outlined a new vision for developing, testing, and verifying military systems—including weapons—that make use of AI.
The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy represents an attempt by the US to guide the development of military AI at a crucial time for the technology. The document does not legally bind the US military, but the hope is that allied nations will agree to its principles, creating a kind of global standard for building AI systems responsibly.
Among other things, the declaration states that military AI needs to be developed in accordance with international law, that nations should be transparent about the principles underlying their technology, and that high standards should be implemented for verifying the performance of AI systems. It also says that humans alone should make decisions about the use of nuclear weapons.
When it comes to autonomous weapons systems, US military leaders have often offered assurances that a human will remain “in the loop” for decisions about the use of deadly force. But the official policy, first issued by the DOD in 2012 and updated this year, does not require this to be the case.
Attempts to forge an international ban on autonomous weapons have so far come to naught. The International Committee of the Red Cross and campaign groups like Stop Killer Robots have pushed for an agreement at the United Nations, but some major powers—the US, Russia, Israel, South Korea, and Australia—have proven unwilling to commit.
One reason is that many within the Pentagon see increased use of AI across the military, including in systems other than weapons, as both vital and inevitable. They argue that a ban would slow US progress and handicap its technology relative to adversaries such as China and Russia. The war in Ukraine has shown how quickly autonomy, in the form of cheap, disposable drones made more capable by machine learning algorithms that help them perceive and act, can provide an edge in a conflict.
Earlier this month, I wrote about onetime Google CEO Eric Schmidt’s personal mission to amp up Pentagon AI to ensure the US does not fall behind China. It was just one story to emerge from months spent reporting on efforts to adopt AI in critical military systems, and how that is becoming central to US military strategy—even if many of the technologies involved remain nascent and untested in any crisis.
Lauren Kahn, a research fellow at the Council on Foreign Relations, welcomed the new US declaration as a potential building block for more responsible use of military AI around the world.
A few nations already have weapons that operate without direct human control in limited circumstances, such as missile defenses that need to respond at superhuman speed to be effective. Greater use of AI might mean more scenarios where systems act autonomously, for example when drones are operating out of communications range or in swarms too complex for any human to manage.
Some proclamations about the need for AI in weapons, especially from companies developing the technology, still seem a little far-fetched. There have been reports of fully autonomous weapons being used in recent conflicts and of AI assisting in targeted military strikes, but these have not been verified, and in truth many soldiers may be wary of systems that rely on algorithms that are far from infallible.
And yet if autonomous weapons cannot be banned, then their development will continue. That will make it vital to ensure that the AI involved behaves as expected, even if the engineering required to fully enact intentions like those in the new US declaration has yet to be perfected.