By Cade Metz
MOUNTAIN VIEW, Calif. — In a May memo to President Trump, Defense Secretary Jim Mattis implored him to create a national strategy for artificial intelligence. Mr. Mattis argued that the United States was not keeping pace with the ambitious plans of China and other countries. With a final flourish, he quoted a recent magazine article by Henry A. Kissinger, the former secretary of state, and called for a presidential commission capable of “inspiring a whole of country effort that will ensure the U.S. is a leader not just in matters of defense but in the broader ‘transformation of the human condition.’” Mr. Mattis included a copy of Mr. Kissinger’s article with his four-paragraph note.
Mr. Mattis’s memo, which has not been reported before and was viewed by The New York Times, reflected a growing sense of urgency among defense officials about artificial intelligence. The consultants and planners who try to forecast threats think A.I. could be the next technological game changer in warfare.
The Chinese government has raised the stakes with its own national strategy. Academic and commercial organizations in China have been open about working closely with the military on A.I. projects. They call it “military-civil fusion.”
It is not clear what impact, if any, Mr. Mattis’s memo had. Though the White House announced in May — about three weeks before he sent his note — that it would establish a panel of government officials to study A.I. issues, critics say the administration still has not done enough to set federal policy. Officials with the Office of Science and Technology Policy, which would most likely take a leadership role in setting an agenda for A.I., said that A.I. is a national research and development priority and that it is part of the president’s national security and defense strategies.
Nonetheless, the Pentagon appears to be pushing ahead on its own, looking for ways to strengthen its ties with A.I. researchers, particularly in Silicon Valley, where there is considerable wariness about working with the military and intelligence agencies.
In late June, the Pentagon announced the creation of the Joint Artificial Intelligence Center, or JAIC. Defense officials have not said how many people will be dedicated to the new program or where it will be based when it starts next month. It could have several offices around the country.
The Defense Department wants to shift $75 million of its annual budget into the new office and a total of $1.7 billion over five years, according to a person familiar with the matter who was not allowed to speak about it publicly.
Known as “the Jake,” the center is billed as a way of coordinating dozens of A.I. projects across the Defense Department. They include Project Maven, an effort to build technology that identifies people and objects in video captured by drones, a project that has come to symbolize the ideological gap between the government and Silicon Valley.
Around the time Mr. Mattis wrote his memo to Mr. Trump, thousands of Google employees were protesting their company’s involvement in Project Maven. After the protests became public, Google withdrew from the project.
The protests might have been a surprise to Pentagon officials, since big tech companies have been defense contractors for as long as there has been a Silicon Valley. And there is some irony in any industry reluctance to work with the military on A.I., given that research competitions sponsored by an arm of the Defense Department, called Darpa, jump-started work on the technology that goes into the autonomous vehicles many tech companies are now trying to commercialize.
But in the eyes of some researchers, creating robotic vehicles and developing robotic weapons are very different. And they fear that autonomous weapons pose an unusual threat to humans.
“This is a unique moment, with so much activism coming out of Silicon Valley,” said Elsa Kania, an adjunct fellow at the Center for a New American Security, a think tank that explores policy related to national security and defense. “Some of it is informed by the political situation, but it also reflects deep concern over the militarization of these technologies as well as their application to surveillance.”
The Joint Artificial Intelligence Center, officials hope, will help close that gap.
“One of our greatest national strengths is the innovation and talent found in our private sector and academic institutions, enabled by free and open society,” Brendan McCord, a former Navy submarine officer and an A.I. start-up veteran who will lead the center, said during a public meeting in Silicon Valley last month. “The JAIC will help evolve our partnerships with industry, academia, allies.”
The center, he added, will work with “traditional and nontraditional innovators alike,” meaning longtime government contractors like Lockheed Martin as well as newer Silicon Valley companies. The Pentagon has worked with more than 20 companies on Project Maven so far, but it hopes to expand this work and overcome the reluctance among workers.
This summer, a Pentagon researcher worked alongside a small but influential Silicon Valley artificial intelligence lab, Fast.ai, on a public effort to build technology capable of accelerating the development of A.I. systems.
Autonomous systems are based on algorithms that can learn to do things like recognize objects by analyzing vast amounts of data. The Fast.ai project would improve the speed of that A.I. “training.”
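The article does not describe the project's methods, but as a rough illustration of what speeding up that kind of “training” can involve, the sketch below runs a toy image classifier with mixed-precision arithmetic, one common trick for shortening training time on modern GPUs. The model, data and settings are hypothetical stand-ins, not anything from Fast.ai or the Pentagon.

```python
# Minimal, hypothetical sketch of mixed-precision training in PyTorch.
# It is a generic example of one way to accelerate A.I. "training,"
# not the actual Fast.ai or Defense Department code.
import torch
import torch.nn as nn
from torch.cuda.amp import autocast, GradScaler

device = "cuda" if torch.cuda.is_available() else "cpu"

# Tiny stand-in classifier and fake data; a real project would load
# labeled images and a much larger network.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
scaler = GradScaler(enabled=(device == "cuda"))

images = torch.randn(64, 3, 32, 32, device=device)   # fake image batch
labels = torch.randint(0, 10, (64,), device=device)  # fake class labels

for step in range(10):
    optimizer.zero_grad()
    # Run the forward pass in half precision where it is safe to do so,
    # which cuts memory use and speeds up each training step on a GPU.
    with autocast(enabled=(device == "cuda")):
        loss = loss_fn(model(images), labels)
    scaler.scale(loss).backward()  # scale the loss to avoid underflow
    scaler.step(optimizer)
    scaler.update()
```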
The Pentagon is also offering an olive branch to its Silicon Valley critics. While unveiling the JAIC, Mr. McCord said its focus would include “ethics, humanitarian considerations, and both short-term and long-term A.I. safety.”
It was an important step toward reaching détente with A.I. researchers, said Sophie-Charlotte Fischer, a researcher at the Center for Security Studies at ETH Zurich in Switzerland who specializes in the relationship between the tech industry and government. “There needs to be a clear understanding of what it means to develop and deploy these A.I. technologies,” she said.
Will it be enough? Skeptics want to see the details. “So far, the plans remain very abstract,” Ms. Fischer said. “What kind of systems do they want to allow? Do they want to attach weapons systems to A.I.?”
Robert Work, the former deputy secretary of defense who founded Project Maven, worries that the Google protest has skewed the perception of the project, which does not yet involve lethal weapons, and stunted public discussion of how military technology should evolve.
“We need to have an open debate about A.I. and its consequences and hear arguments from all sides,” he said in a recent interview.