The Pentagon is planning to field thousands of artificial intelligence-enabled autonomous vehicles by 2026 in a bid to keep pace with the Chinese military.
The plan, dubbed Replicator, will seek to "galvanize progress in the too-slow shift of U.S. military innovation to leverage platforms that are small, smart, cheap and many," Deputy Secretary of Defense Kathleen Hicks said, according to a report by The Associated Press.
While the report offers few details, including how the program will be funded and how quickly the Pentagon can actually accelerate development of the new vehicles, Replicator reflects an ongoing shift in how the U.S. views the future of warfare, especially as China forges ahead with AI programs of its own.
Phil Siegel, the founder of the Center for Advanced Preparedness and Threat Response Simulation (CAPTRS), believes the rapid push toward AI weapons resembles a nuclear arms race.
[Photo caption: The American and Chinese flags wave at Genting Snow Park ahead of the 2022 Winter Olympics.]
"It seems the endpoint here is like nuclear weapons, where the top powers will eventually have sophisticated autonomous lethal weaponry and will have to agree that they won’t be used or, at the very least, when they are able to be used without a clear escalation," Siegel told Fox News Digital.
Replicator is just one of many AI-focused projects being developed by the Pentagon, and many experts believe it is only a matter of time until the U.S. possesses fully autonomous lethal weapons. Defense officials have continued to insist that such weapons will have a human element of control, something some experts believe is an important consideration in their development.
"Autonomous AI weapons are inevitable at this stage in the game. China is plowing ahead with them, so we must as well," Samuel Mangold-Lenett, a staff editor at The Federalist, told Fox News Digital. "The Guardian reported this past May that in a virtual test run by the U.S. military, an Air Force drone controlled by AI went rogue. Reportedly, in this simulation, the AI opted to kill its human operator because the human would interfere with its programmed objective."
Mangold-Lenett added that, according to the report, no one was harmed and that defense officials later described the exercise as a "thought experiment" rather than a true simulation, though he argued the episode still showed the value of approaching the technology cautiously.
"We need to ensure that humans remain in control of ‘autonomous’ weapons systems at all times and make sure they aren't reliant on or vulnerable to adversarial communications infrastructure like the expansive Chinese 5G network," Mangold-Lenett said.
According to The Associated Press, the Pentagon has 800 AI-related unclassified projects, many of which are still in testing. But Replicator's timeline is seen as potentially "overly ambitious," the report notes, something that could be intended to keep rivals such as China guessing.
[Photo caption: A Chinese navy fleet departs for Russia.]
Aiden Buzzetti, president of the Bull Moose Project, argued such a development was a good thing, noting the size of the Chinese military compared with that of the U.S.
"One of the major benefits of autonomous weapons for the United States is its ability to serve as a force multiplier. The Chinese military is a force to be reckoned with — it has more men, more ships and has a closer supply chain than American forces in the Pacific," Buzzetti told Fox News Digital. "If we're able to design and implement AI tools efficiently, American military forces will have better real-time information, less bureaucratic stalling and more capabilities to match with numerically superior forces."
But Buzzetti also noted the dangers of "autonomous" designs, arguing humans will not want "to completely lose control of the machines we're building."
"Programs that can decide for themselves who to target and kill always leaves room for error," Buzzetti said. "The major test here will be to create something reliable enough to be effective in a military role without the potential for making mistakes that injure our own service members or civilians."
Despite the appearance of a new and dangerous arms race, Pioneer Development Group Chief Analytics Officer Christopher Alexander stressed that current AI tools designed for defense have largely focused on "augmenting human beings who are doing routine administrative or analytical tasks."
[Photo caption: The XQ-58A Valkyrie demonstrates the separation of the ALTIUS-600 small unmanned aircraft system in a test at the U.S. Army Yuma Proving Ground test range in Arizona, March 26, 2021. This test was the first time the weapons bay doors were opened in flight.]
"There are very few current programs that involve lethal weapon systems, and there is always a human in the loop making the moral decision," Alexander told Fox News Digital. "AI’s key ability to support DOD stems from how it improves decision-making. From reducing the work needed under time constraints to having more clarity as the AI uses more data to reduce the fog of war, AI allows faster, clearer decisions that can end conflicts faster and with fewer civilian casualties."