THE AMERICAN MILITARY is desperately trying to get a leg up in the field of artificial intelligence, which top officials are convinced will deliver victory in future warfare. But internal Pentagon documents and interviews with senior officials make clear that the Defense Department is reeling from being spurned by a tech giant and struggling to develop a plan that might work in a new sort of battle—for hearts and minds in Silicon Valley.
The battle began with an unexpected loss. In June, Google announced it was pulling out of a Pentagon program—the much-discussed Project Maven—that used the tech giant’s artificial intelligence software. Thousands of the company’s employees had signed a petition two months earlier calling for an end to its work on the project, an effort to create algorithms that could help intelligence analysts pick out military targets from video footage.
Inside the Pentagon, Google’s withdrawal brought a combination of frustration and distress—even anger—that has percolated ever since, according to five sources familiar with internal discussions on Maven, the military’s first big effort to utilize AI in warfare.
This article was produced in partnership with the Center for Public Integrity, a nonprofit, nonpartisan news organization.
“We have stumbled unprepared into a contest over the strategic narrative,” said an internal Pentagon memo circulated to roughly 50 defense officials on June 28. The memo depicted a department caught flat-footed and newly at risk of alienating experts critical to the military’s artificial intelligence development plans.
“We will not compete effectively against our adversaries if we do not win the ‘hearts and minds’ of the key supporters,” it warned.
Maven was actually far from complete and cost only about $70 million in 2017, a molecule of water in the Pentagon’s oceanic $600 billion budget that year. But Google’s announcement exemplified a larger public relations and scientific challenge the department is still wrestling with. It has responded so far by trying to create a new public image for its AI work and by seeking a review of the department’s AI policy by an advisory board of top executives from tech companies.
The reason for the Pentagon’s anxiety is clear: It wants a smooth path to use artificial intelligence in weaponry of the future, a desire already backed by the promise of several billion dollars to try to ensure such systems are trusted and accepted by military commanders, plus billions more in expenditures on the technologies themselves.
THE EXACT ROLE that AI will wind up playing in warfare remains unclear. Many weapons with AI will not involve decision-making by machine algorithms, but the potential for them to do so will exist. As a Pentagon strategy document said in August: “Technologies underpinning unmanned systems would make it possible to develop and deploy autonomous systems that could independently select and attack targets with lethal force.”
Developing artificial intelligence, officials say, is unlike creating other military technologies. While the military can easily turn to big defense contractors for cutting-edge work on fighter jets and bombs, the heart of innovation in AI and machine learning resides among the non-defense tech giants of Silicon Valley. Without their help, officials worry, they could lose an escalating global arms race in which AI will play an increasingly important role, something top officials say they are unwilling to accept.
“If you decide not to work on Maven, you’re not actually having a discussion on if artificial intelligence or machine learning are going to be used for military operations,” Chris Lynch, a former tech entrepreneur who now runs the Pentagon’s Defense Digital Service, said in an interview. AI is coming to warfare, he says, so the question is, which American technologists are going to engineer it?
Lynch, who recruits technical experts to spend several years working on Pentagon problems before returning to the private sector, said that AI technology is too important, and that the agency will proceed even if it has to rely on lesser experts. But without the help of the industry’s best minds, Lynch added, “we’re going to pay somebody who is far less capable to go build a far less capable product that may put young men and women in dangerous positions, and there may be mistakes because of it.”
Google isn’t likely to shift gears soon. Less than a week after announcing in June that it would not seek to renew the Maven contract, Google released a set of AI principles specifying that the company would not use AI for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”
Some defense officials have complained since then that Google was being unpatriotic, noting that the company was still pursuing work with the Chinese government, the top US competitor in artificial intelligence technology.
“I have a hard time with companies that are working very hard to engage in the market inside of China, and engaging in projects where intellectual property is shared with the Chinese, which is synonymous with sharing it with the Chinese military, and then don't want to work for the US military,” General Joe Dunford, chairman of the Joint Chiefs of Staff, said at a conference in November.
In December testimony before Congress, Google CEO Sundar Pichai acknowledged that the company had experimented with Project Dragonfly, a program aimed at modeling what government-censored search results would look like in China. However, Pichai testified that Google currently “has no plans to launch in China.”
Project Maven’s aim was to simplify work for intelligence analysts by tagging object types in video footage from drones and other platforms, helping analysts gather information and narrow their focus on potential targets, according to sources familiar with the partly classified program. But the algorithms did not select the targets or order strikes, a longtime fear of those worried about the intersection of advanced computing and new forms of lethal violence.
Many at Google nonetheless saw the program in alarming terms.
“They immediately heard drones and then they thought machine learning and automatic target recognition, and I think it escalated for them pretty quickly about enabling targeted killing, enabling targeted warfare,” said a former Google employee familiar with the internal discussions.
Google is just one of the tech giants that the Pentagon has sought to enlist in its effort to inject AI into modern warfare technology. Among the others: Microsoft and Amazon. After Google’s announcement in June, more than a dozen large defense firms approached defense officials, offering to take over the work, according to current and former Pentagon officials.
But Silicon Valley activists also say the industry cannot easily ignore the ethical qualms of tech workers. “There’s a division between those who answer to shareholders, who want to get access to Defense Department contracts worth multimillions of dollars, and the rank and file who have to build the things and who feel morally complicit for things they don’t agree with,” the former Google employee said.