
19 September 2021

US Is Only Nation with Ethical Standards for AI Weapons. Should We Be Afraid?

DAVID H. FREEDMAN 

On August 29th, three days after a suicide bomber killed 13 American service members and 160 civilians at Kabul airport, U.S. military intelligence was tracking what was thought to be another potentially devastating attack: a car driving toward the airport carrying "packages" that looked suspiciously like explosives. The plan was to lock in on the car by video with one of the Air Force's Reaper drones and destroy it with a Hellfire missile at a moment when there were no innocent civilians nearby. Sure enough, the car came to a stop at a quiet spot.

The tactical commander, most likely working at Creech Air Force Base in Nevada, had received the green light from General Kenneth F. McKenzie Jr., the head of U.S. Central Command in Tampa, Florida. Since video feeds have to ricochet among military commanders spread out around the world, they are often delayed by several seconds. In this case, that lag may have been time enough for a handful of civilians to approach the target vehicle, according to the U.S. military. The blast killed as many as 10 Afghan civilians, including seven children, and raised an international outcry. Doubts have surfaced over whether the car even posed a threat in the first place.

As military strategists ponder how to prevent future threats from ISIS, al Qaeda and other groups that could arise in Taliban-controlled Afghanistan—or any other distant location, for that matter—they are searching for a better way of attacking from afar. That search is leading in a disturbing direction: letting the machines decide when, and perhaps whom, to kill.

The autonomous system, Origin, prepares for a practice run during the Project Convergence capstone event at Yuma Proving Ground, Arizona, Aug. 11 – Sept. 18, 2020. SPC. CARLOS CUEBAS FANTAUZZI/U.S. ARMY

In coming years, Reapers and other U.S. drones will be equipped with advanced artificial intelligence technology. That raises a startling scenario: Military drones squirreled away in tiny, unmanned bases in or near Afghanistan, ready to take off, scan the territory, instantly analyze the images they take in, identify and target terrorist activity, ensure the target is clear of civilians, fire a missile, confirm the kill and return to base—all with little or no human intervention.

The motivation to equip Reaper drones with artificial intelligence (AI) is not primarily humanitarian, of course. The true purpose of AI weaponry is to achieve overwhelming military advantage—and in this respect, AI is highly promising. At a time when the U.S. has pulled its troops from Afghanistan and is reluctant to commit them to other conflicts around the world, the ability to attack from a distance with unmanned weapons is becoming a key element of U.S. military strategy. Artificial intelligence, by endowing machines with the ability to make battlefield decisions on their own, makes this strategy viable.

Integrating AI technology into weapons systems opens the door to making them smaller and cheaper than manned versions and capable of reacting faster and hitting targets more accurately, without risking the lives of soldiers. Plans are being laid to include AI not only in autonomous Reapers but in a whole arsenal of weaponry, ranging from fighter jets to submarines to missiles, all of which will be able to strike at terrorists and enemy forces entirely under their own control, with humans optional.

Relatives and neighbors of the Ahmadi family gathered around the incinerated husk of a vehicle targeted and hit earlier Sunday afternoon by an American drone strike, in Kabul, Afghanistan, Monday, Aug. 30, 2021. MARCUS YAM/LOS ANGELES TIMES/GETTY

Nations aren't in the habit of showcasing their most advanced technology, but judging from what's come to light in various reports, AI-equipped weapons are coming online fast. Progress (if you can call it that) toward ever-more capable autonomous military machines has accelerated in recent years, thanks both to huge strides in the field of AI and to enormous investments by Russia, China, the U.S. and other countries eager to get an AI-powered edge in military might—or at least to not fall too far behind their rivals.

Russia has robotic tanks and missiles that can pick their own targets. China has unmanned mobile rocket launchers and submarines and other AI weapons under development. Turkey, Israel and Iran are pursuing AI weapons. The U.S., meanwhile, has already deployed autonomous sub-hunting ships and tank-seeking missiles—and much more is in the works. The Pentagon is currently spending more than $1 billion a year on AI—and that counts only spending disclosed in publicly released budgets. About 10 percent of the Pentagon's budget is cloaked in secrecy, and hundreds of billions more are buried in the budgets of other agencies.

Scientists, policy analysts and human rights advocates have raised concerns about the coming AI arsenals. Some say such weapons are vulnerable to errors and hackers that could threaten innocent people. Others worry that letting machines initiate deadly attacks on their own is unethical and poses an unacceptable moral risk. Still others fear that the rise of AI weapons gives rogue nations and terrorist organizations the ability to punch above their weight, shaking up the global balance of power, leading to more confrontations (potentially involving nuclear weapons) and wars.

These objections have done nothing to slow the AI arms race. U.S. military leaders seem less concerned with such drawbacks than with keeping up with China and Russia. "AI in warfare is already happening," says Robert Work, a former U.S. deputy secretary of defense and co-chair of the National Security Commission on AI. "All the major global military competitors are exploring what more can be done with it, including the U.S."

Regardless of who wins the race, the contours of military force—who has it and how they use it—are about to change radically.

Airman German Kudel, 386th Expeditionary Aircraft Maintenance Squadron avionics specialist, conducts routine pre-flight program checks at Ali Al Salem Air Base, Kuwait, June 10, 2020. SGT. ALEXANDRE MONTES/U.S. AIR FORCE
The Leap to AI

Missile-equipped drones have been a mainstay of U.S. anti-terrorist and other military combat for two decades, but they leave much collateral damage—between 900 and 2,200 civilians have been killed in U.S. drone strikes over the past twenty years, 300 or more of them children, according to the London-based Bureau of Investigative Journalism. They're also prone to delays in video transmission that have almost certainly led to missed strikes because a brief window closed before a team could give the remote pilot a green light.

An AI-equipped drone, by contrast, could spot, validate and fire at a target in a few hundredths of a second, greatly expanding the military's ability to strike from afar. That capability could enable more U.S. strikes, anywhere in the world, such as the January 2020 assassination of Iranian general Qasem Soleimani during a visit to Iraq. It could also give the U.S. more effective means to execute surgical but deadly responses to affronts such as the Syrian government's past chemical-weapon attacks on its own people—without sending a single U.S. soldier into the country.

Boosting the accuracy and timing of drone strikes could also reduce the U.S. military's heavy reliance on conventional aircraft strikes. According to the British independent monitoring group Airwars, those U.S. strikes have cost the lives of as many as 50,000 civilians since 2001 and put human pilots, and their $100-million aircraft, at risk.

Weapons that seek out targets without human control are not entirely new. In World War II, the U.S. fielded torpedoes that could listen for German U-boats and pursue them. Since then, militaries have deployed hundreds of different types of guns, guided missiles and drones capable of aiming themselves and locking in on targets.

What's different about AI-driven weapons systems is the nature and power of the weapon's decision-making software. Until recently, any computer programs that were baked into a weapon's control system had to be written by human programmers, providing step-by-step directions for accomplishing simple, narrow tasks in specific situations. Today, AI software enlists "machine learning" algorithms that actually write their own code after being exposed to thousands of examples of what a successfully completed task looks like, be it recognizing an enemy tank or keeping a self-driving vehicle away from trees.
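To make the distinction concrete, here is a minimal sketch in Python of the learning step described above, using the open-source scikit-learn library and randomly generated stand-in data (the features, labels and model are invented for illustration and have nothing to do with any fielded targeting system). The point is that the resulting "program" is a set of numerical weights the algorithm tunes for itself from labeled examples, not rules a human wrote:

# Illustrative only: a model learns its own decision rule from labeled examples.
# The data below is random placeholder data standing in for image features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 64))                # stand-in feature vectors
y = (X[:, :8].sum(axis=1) > 0).astype(int)     # an arbitrary hidden pattern to learn

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "programming" step: the network adjusts thousands of internal weights
# until its outputs match the example labels. No human writes those weights.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("accuracy on unseen examples:", model.score(X_test, y_test))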

A U.S. Air Force MQ-9 Reaper assigned to the 556th Test and Evaluation Squadron armed with an AIM-9X Block 2 missile sits on the ramp at Creech Air Force Base, Nevada, Sept. 3, 2020. SENIOR AIRMAN HALEY STEVENS/U.S. AIR FORCE

The resulting code looks nothing like conventional computer programming, but its capabilities go far beyond it. Conventional autonomous weapons have to either be placed near or pointed at isolated or easily recognizable enemy targets, lest they lock in on the wrong object. But AI weapons can in principle simply be turned loose to watch out for or hunt down almost any type of target, deciding on their own which to attack and when. They can track targets that might vary in appearance or behavior, switch targets and navigate unfamiliar terrain in bad weather, while recognizing and avoiding friendly troops or civilians.

The potential military advantages are enormous. "AI can reduce the cost and risk of any mission, and get up close to the adversary while keeping warfighters out of harm's way," says Tim Barton, chief technology officer at the Dynetics Group, which is developing unmanned air systems for the U.S. Department of Defense. "And they can take in and go through information at light speed. Humans can't do it fast enough anymore."

Robotic killing isn't some futuristic possibility: it is already here. That Rubicon was crossed last year in Libya, when an AI-equipped combat drone operating entirely outside of human control killed militia rebels fighting government soldiers. Essentially a real-life version of the "hunter-killer" drones depicted in the Terminator 3 film, the Turkish-made Kargu-2 "lethal autonomous weapons system" flew over the battlefield, recognized the fleeing rebels, dove at them and set off an explosive charge, according to a March United Nations report that became public in May.

Russia, China, Iran and Turkey have all demonstrated AI weapons. So far, the deadly Libyan attack is the only publicly reported instance of such a weapon striking on the battlefield. Still, there's plenty of evidence that such attacks will become more frequent.

Russia's military has been open about working furiously to take advantage of AI capabilities. According to Russia's state press agency Tass, the country is developing an AI-guided missile that can pick its target in mid-flight; a self-targeting machine gun; autonomous vehicles for land, air, sea and underwater surveillance and combat; a robotic tank bristling with guns, missiles and flamethrowers; and AI-based radar stations, among other projects. Russia's Military-Industrial Commission, the country's top military decision-making body, has declared its intention to turn nearly a third of its firepower over to AI by 2030.

Team members of the 'Underwater Defence' (SAS), the special operation unit of the Turkish Navy, take part in military training using unmanned aerial vehicles (UAVs) in the Beykoz district of Istanbul, Turkey, on September 13, 2018. MUHAMMED ENES YILDIRIM/ANADOLU AGENCY/GETTY

China has been more circumspect about details, probably to minimize concern over its many business dealings with AI companies in Silicon Valley and elsewhere in the U.S. and around the world. But few observers doubt that the country's vast investment in AI science and technology will spill over into weapons. Some of China's top military and defense-industry leaders have publicly said as much, predicting that lethal "intelligentized" weapons will be common by 2025, and will soon help close the gap between China's military and those of the U.S., Europe and Russia.

Iran has demonstrated fully autonomous suicide drones, and its generals have promised to have them and possibly other AI weapons under development, including missiles and robots, ready for deployment by 2024. Iran has already unleashed drone strikes on Saudi Arabia, Israel, and U.S. forces in Iraq, and crippled an Israeli-owned oil tanker, killing two crew members, in a drone attack off the coast of Oman in late July. There's no evidence any of the strikes were aided by AI, but few experts doubt Iran will enlist AI in future attacks as soon as it's able, likely before polishing the technology and building in safeguards.

U.S. allies are also jumping into the fray. The U.K. has deployed small self-targeting missiles, tested an autonomous-vehicle-mounted machine gun and demonstrated AI-controlled missile-defense systems on its ships. Israel, meanwhile, continues to beef up its heavily employed and highly effective "Iron Dome" air-defense missile system with more and more AI-aided capabilities. So capable is the technology that the U.S. Army has installed Israeli Iron Dome batteries for border defense in New Mexico and Texas.

The U.S. hasn't stood still, of course. In 2018 the Pentagon formed the Joint Artificial Intelligence Center to spur and coordinate AI development and integration throughout the military. One big reason is Russia and China's ongoing development of "hypersonic" cruise missiles that can travel at more than five times the speed of sound. At those speeds, humans may not be able to react quickly enough to initiate defensive measures or launch counterstrike missiles in a "use it or lose it" situation. Speaking at a 2019 conference of defense experts, U.S. Missile Defense Agency Director Vice Admiral Jon Hill put it this way: "With the kind of speeds that we're dealing with today, that kind of reaction time that we have to have today, there's no other answer other than to leverage artificial intelligence."

U.S. Marines with 1st Battalion, 2d Marine Regiment (1/2), 2d Marine Division, operate an Expeditionary Modular Autonomous Vehicle (EMAV) during a training event on Camp Lejeune, N.C., June 24, 2021. LANCE CPL. EMMA L. GRAY/U.S. MARINE CORPS

The Pentagon has several programs under way. One involves guided, jet-powered cannon shells that can be fired in the general direction of the enemy in order to seek out targets while avoiding allies. The Navy, meanwhile, has taken delivery of two autonomous ships for a variety of missions, including finding enemy submarines, and is developing unmanned submarines. And in December the Air Force demonstrated turning the navigation and radar systems of a U-2 spy plane over to AI control.

On August 3, even as the Taliban was beginning to seize control of Afghanistan on the heels of departing American forces, Colonel Mike Jiru, a Materiel Command program executive officer for the Air Force, told Air Force Magazine that the military is planning a number of upgrades to the Reaper, the U.S.'s workhorse military drone. The upgrades include the ability to take off and land autonomously, and the addition of powerful computers specifically intended to run artificial intelligence software.

"We're on a pathway where leaders don't fundamentally question whether we should militarize AI," says Ingvild Bode, an associate professor with the Centre for War Studies at the University of Southern Denmark.
Here Come the Drone Swarms

Small autonomous drones are likely to have the most immediate impact. That's because they're relatively cheap and easy to produce in big numbers, don't require a lot of support or infrastructure, and aren't likely to wreak massive havoc if something goes wrong. Most important, thanks to AI they're capable of providing a massive advantage in almost any type of conflict or engagement, including reprisals against terrorists, asymmetric warfare or all-out conflict between nations.

A single small autonomous drone can fly off to scout out terrorist or other enemy positions and beam back invaluable images and other data, often without being spotted. Like the Kargu-2, it can drop an explosive payload on enemy targets. Such offensive drones can serve as "loitering munitions," simply flying around a battlefield or terrorist territory until their AI identifies an appropriate target and goes in for the kill. Larger AI-equipped autonomous drones, such as Israel's Harpy, can find a radar station or other substantial target on their own and destroy it by diving into it with an explosive warhead. Virtually every country with a large military is exploring AI-enabled drone weaponry.

The real game-changer will be arrays, or swarms, of autonomous drones that can blanket an area with enough cameras or other types of sensors to spot and analyze almost any enemy activity. Coordinating the flights of an entire network of drones is beyond the capabilities of human controllers, but perfectly doable with AI.

Sea Hunter, an entirely new class of unmanned sea surface vehicle developed in partnership between the Office of Naval Research (ONR) and the Defense Advanced Research Projects Agency (DARPA), recently completed an autonomous sail from San Diego to Hawaii and back—the first ship ever to do so autonomously. U.S. NAVY

Keeping the swarm coordinated isn't even the hardest part. The bigger challenge is making use of the vast stream of images and other data they send back. "The real value of AI is gathering and integrating the data coming in from large quantities of sensors, and eliminating the information that isn't going to be of interest to military operators," says Chris Brose, chief strategy officer for Anduril, an Irvine, California, company that makes AI- and drone-based systems, among other AI-based defense technologies.
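As a rough illustration of the triage Brose describes, consider the Python sketch below. Every sensor name, label and threshold is invented for the example; it is meant only to show the shape of the problem, in which software sifts thousands of automated detections and passes along the handful that match operators' criteria.

# Toy illustration of sensor-data triage: many sensors report detections,
# and software keeps only the few worth a human's attention.
# All names, thresholds and coordinates here are fabricated.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor_id: str      # which drone or sensor reported it
    label: str          # what the onboard classifier thinks it saw
    confidence: float   # classifier confidence, 0.0 to 1.0
    position: tuple     # (latitude, longitude), made-up values

LABELS_OF_INTEREST = {"vehicle", "launcher"}   # hypothetical operator criteria
MIN_CONFIDENCE = 0.8

def triage(detections):
    """Discard low-confidence or irrelevant reports; return the rest,
    highest-confidence first, so operators see the shortest useful list."""
    kept = [d for d in detections
            if d.label in LABELS_OF_INTEREST and d.confidence >= MIN_CONFIDENCE]
    return sorted(kept, key=lambda d: d.confidence, reverse=True)

raw = [
    Detection("drone-03", "vehicle", 0.93, (34.52, 69.16)),
    Detection("drone-11", "tree",    0.99, (34.53, 69.18)),
    Detection("drone-07", "vehicle", 0.41, (34.50, 69.14)),
]
for d in triage(raw):
    print(d)

Real systems would fuse radar, infrared and video tracks rather than simple labels, but the principle of discarding everything operators don't need is the same.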

The Pentagon's Project Maven, a four-year-old program, aims to use AI to spot and track enemy activity from video feeds. Google was contributing its own extensive AI development resources to the project until employees pressured the company in 2018 to withdraw over concerns about militarizing AI. (The Department of Defense has denied that Project Maven is focused on offensive military applications, but that claim is widely discounted.)

Beyond merely spotting enemy activity, the next step is to apply AI to "battlefield management"—that is, to cut through the fog of war and help military commanders understand everything going on in a combat situation, and then decide what to do about it. That might include moving troops, selecting targets and bringing in air support and reinforcements based on up-to-the-second information streaming in from drone swarms, satellites, and a range of sensors in and around the combat.

"There are so many things vying for the attention of the soldier in warfare," says Mike Heibel, program director for Northrop Grumman's air defense team, which is working on battlefield-management AI for the U.S. military. "AI has the capabilities to pick out threats and send 3D information to a cannon." Northrop Grumman has already demonstrated a mobile system that does exactly that.

Members of the Islamic Revolutionary Guard Corps conduct a military drill with ballistic missiles and unmanned air vehicles at Great Salt Desert, in the middle of the Iranian Plateau, on January 15, 2021 in Iran. SEPAHNEWS/ANADOLU AGENCY/GETTY

U.S. work on AI-enhanced battlefield management is advancing on several fronts. The U.S. National Geospatial Intelligence Agency claims it has already turned AI loose on 12 million satellite images in order to spot an enemy missile launch. The Army has experimentally fielded an AI-based system called Prometheus that extracts enemy activity from real-time imaging, determines on its own which of the activities meet commanders' criteria for high-priority targets and feeds those positions to artillery weapons to automatically aim them.
The Black Box Problem

The more the military embraces AI, the louder the chorus of objections from experts and advocates. One big concern is that AI-guided weapons will mistakenly target civilians or friendly forces or cause more unnecessary casualties than human operators would.

Such concerns are well founded. AI systems can in theory be hacked by outsiders, just as any software can. The safeguards may be more robust than those of commercial systems, but the stakes are much higher when the result of a cyber breach is a powerful weapon gone wild. In 2011 a virus infected the ground-control systems used to fly U.S. drones over the Middle East—a warning that software-reliant weapons are vulnerable.

Protesters hold up an image of Qassem Soleimani, an Iranian commander, during a demonstration following the U.S. airstrike in Iraq which killed him, in Tehran, Iran, on Friday, Jan. 3, 2020. ALI MOHAMMADI/BLOOMBERG/GETTY

Even if the military can keep its AI systems safe from hackers, it may still not be able to ensure that AI software always behaves as intended. That's due to what's known as the "black box" problem: Because machine-learning algorithms write their own complex, hard-to-analyze code, human software experts can't always predict what an AI system will do in unexpected situations. Testing reduces but doesn't eliminate the chances of ugly surprises—it can't cover the essentially infinite number of unique conditions that an AI-controlled weapon might confront in the chaos of conflict.

Self-driving cars, which are controlled by AI programs roughly similar to those employed in military applications, provide a useful analog. In 2018, a driverless Uber hit and killed a pedestrian in Tempe, Arizona. The pedestrian had been walking a bike across the road outside a crosswalk—a scenario that had simply never come up in testing. "AI can get it wrong in ways that are entirely alien to humans," says the University of Southern Denmark's Bode. "We can't test the ability of a system to differentiate between civilians and combatants in all situations."

It gets worse. Adversaries can take advantage of known weaknesses in AI systems. They could alter the appearance of uniforms, buildings and weapons or change their behavior in ways that trip up the algorithms. Driverless cars have been purposely fooled into errors by stickers placed on traffic signs, phony road markings and lights shined onto their sensors. "Can you make an airliner full of passengers look like an enemy target and cause an AI weapons system to behave badly?" Dynetics' Barton asks. In combat, he adds, the stakes for getting it right are far higher. "We have to bake in that protection from the beginning, not bolt it on later."
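The sticker attacks on driverless cars belong to a well-studied class of tricks known as adversarial examples. The sketch below, written in Python with the open-source PyTorch library, shows the basic mechanics on a toy, untrained model and a random stand-in "image"; it illustrates the general weakness, not any particular weapon system.

# Illustrative only: a tiny, deliberately crafted change to an input can be
# enough to shift a classifier's decision (the "fast gradient sign method").
# The model and "image" here are random placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2))  # toy 2-class model
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in "image"
true_label = torch.tensor([0])

# Compute the gradient of the loss with respect to the input pixels.
loss = loss_fn(model(image), true_label)
loss.backward()

# Nudge every pixel a tiny amount in the direction that increases the loss;
# a change this small may be invisible to a person yet flip the prediction.
epsilon = 0.25
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())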

Even if military AI systems work exactly as intended, is it ethical to give machines the authority to destroy and kill? Work, the former deputy secretary of defense, insists the U.S. military is strictly committed to keeping a human decision-maker in the "kill chain" so that no weapon will pick a target and fire on its own without an OK. But other nations may not be as careful, he says. "As far as we know, the U.S. military is the only one that has established ethical principles for AI."

Twenty-two nations have asked the United Nations to ban automated weapons capable of operating outside human oversight, but so far no agreements have been signed. Human Rights Watch and other advocacy groups have called for similar bans to no avail. If Russia, China and others give AI weapons the authority to choose targets, the U.S. may face a choice: go along or operate at a military disadvantage.

This picture taken 26 December 2011 shows the Pentagon building in Washington, DC. AFP/GETTY

That sets up a race to the bottom in which the least ethical or most careless adversary—the one most aggressive about fielding AI-enabled weaponry, regardless of reliability and safeguards—forces others to follow suit. Nuclear weapons could be placed under the control of flawed AI systems that watch for signs that someone else's AI nukes are about to launch. AI is "increasing the risk of inadvertent or accidental escalation caused by misperception or miscalculation," says James Johnson, a foreign-policy researcher at Ireland's Dublin City University and author of Artificial Intelligence and the Future of Warfare (Manchester University Press, September 2021).

Both the U.S. and Russia have repeatedly refused to allow the United Nations' Convention on Certain Conventional Weapons (CCW), the main international body for weapons agreements, to ban lethal AI-controlled weapons. Meetings to discuss revisiting the CCW are planned for December, but there's little optimism an agreement will be reached; among the most powerful nations, only China has expressed support for such a treaty. NATO nations have discussed the possibility of an agreement, but nothing definite has emerged. If the U.S. is negotiating AI weapons separately with other countries, there's little public word of it.

Even if diplomatic efforts led to limits on the use of AI, verifying adherence would be far more difficult than, say, inspecting nuclear missile silos. Military leaders in a hostile, competitive world are not known for their ability to resist advanced weaponry, regardless of consequences.
