WILL KNIGHT
After Palmer Luckey founded Anduril in 2017, he promised it would be a new kind of defense contractor, inspired by hacker ingenuity and Silicon Valley speed.
The company’s latest product, a jet-powered, AI-controlled combat drone called Roadrunner, is inspired by the grim reality of modern conflict, especially in Ukraine, where large numbers of cheap, agile suicide drones have proven highly deadly over the past year.
“The problem we saw emerging was this very low-cost, very high-quantity, increasingly sophisticated and advanced aerial threat,” says Christian Brose, chief strategy officer at Anduril.
This kind of aerial threat has come to define the conflict in Ukraine, where Ukrainian and Russian forces are locked in an arms race involving large numbers of cheap drones capable of loitering autonomously before attacking a target with an explosive payload. These systems, which include US-made Switchblades on the Ukrainian side, can evade jamming and ground defenses and often must be shot down by a fighter jet or a missile that costs many times more than the drone it destroys.
Roadrunner is a modular, twin-jet aircraft roughly the size of a patio heater that can operate at high (subsonic) speeds, can take off and land vertically, and can return to base if it isn’t needed, according to Anduril. The version designed to target drones or even missiles can loiter autonomously looking for threats.
Brose says the system can already operate with a high degree of autonomy, and it is designed so that the software can be upgraded with new capabilities. But the system requires a human operator to make decisions on the use of deadly force. “Our driving belief is that there has to be human agency for identifying and classifying a threat, and there has to be human accountability for any action that gets taken against that threat,” he says.
Samuel Bendett, an expert on the military use of drones at the Center for a New American Security, a think tank, says Roadrunner could be used in Ukraine to intercept Iranian-made Shahed drones, which have become an effective way for Russian forces to strike stationary Ukrainian targets.
Bendett says both Russian and Ukrainian forces are now using drones in a complete “kill chain,” with disposable consumer drones being used for target acquisition and then either short- or long-range suicide drones being used to attack. “There is a lot of experimentation taking place in Ukraine, on both sides,” Bendett says. “And I’m assuming that a lot of US [military] innovations are going to be built with Ukraine in mind.”
This experimentation has included use of naval drones as well as artificial intelligence for targeting and drone control. Last month, New Scientist reported that Ukrainian forces may be using a drone that employs AI to target and attack human targets without human control—a “lethal autonomous weapon”—but Bendett says he hasn’t been able to confirm this.
The war in Ukraine, the rising importance of AI and autonomy, and the growing relevance of consumer technology to military operations have prompted many nations to rethink their military strategies and funding.
Several years ago, the Pentagon recognized AI as a potentially game-changing military technology, and it has since sought to embrace it as a way to counter an increasingly capable Chinese military.
In an effort to sidestep a procurement system that favors expensive and exquisite systems that take many years to develop, the US Department of Defense has launched several initiatives aimed at experimenting with low-cost, rapidly developed AI-powered systems incorporating technology from nontraditional defense firms.
In September, the Pentagon announced the Replicator Initiative, with the mission of fielding “autonomous systems at scale of multiple thousands, in multiple domains, within the next 18-to-24 months,” to counter China’s conventional military advantage. The Pentagon has yet to pick the contractors that will be involved with the program.
While militaries race to adopt new technologies incorporating more AI, there are concerns that the shift could prove destabilizing. The US and 30 other nations issued a declaration earlier this month calling for guardrails on the use of military AI. It stops short of prohibiting the development or use of lethal autonomous weapons, but it recommends rules around engineering principles and transparency to prevent unintended escalation in conflict.