BY JACK CORRIGAN
The defense research agency seeks artificial intelligence tools capable of human-like communication and logical reasoning that far surpass today’s tech.
As federal agencies ramp up efforts to advance artificial intelligence under the White House’s national AI strategy, the Pentagon’s research shop is already working to push the tech to new limits.
Last year, the Defense Advanced Research Projects Agency kicked off the AI Next campaign, a $2 billion effort to build artificial intelligence tools capable of human-like communication and logical reasoning that far surpass the abilities of today’s most advanced tech. Included in the agency’s portfolio are efforts to automate the scientific process, create computers with common sense, study the tech implications of insect brains and link military systems to the human body.
Through the AI Exploration program, the agency is also supplying rapid bursts of funding for a myriad of high-risk, high-reward efforts to develop new AI applications.
Defense One’s sister publication Nextgov sat down with Valerie Browning, director of DARPA’s Defense Sciences Office, to discuss the government’s AI research efforts, the shortcomings of today’s tech and the growing tension between the tech industry and the Pentagon.
This conversation has been edited for length and clarity.
Nextgov: So what’s the ultimate goal of AI Next?
Browning: The grand vision for the AI Next campaign is to take machines and move them from being tools, perhaps very valuable tools, to being trusted, collaborative partners. There’s a certain amount of competency and world knowledge that we expect a trusted partner to possess. There’s a certain ability to recognize new situations, behave appropriately in new situations, [and] recognize when maybe you don’t have enough experience or training to actually function in a predictable or appropriate way in new situations. Those are the big-picture sorts of things that we’re really after. Machine learning-enabled AI does certain tasks quite well (image classification, voice recognition, natural language processing, statistical pattern recognition), but we also know AI can fail quite spectacularly in unexpected ways. We can’t always accurately predict how these systems are going to fail.
Nextgov: What are the biggest gaps between the AI today and the AI that DARPA’s trying to build?
Browning: The fact that AI can fail in ways that humans wouldn’t. In image classification, a machine will see a picture of a panda and recognize it as a panda, but you just make a few minor changes to pixels, changes the human eye wouldn’t even notice, and it’s classified as a gibbon or something. We need to be able to build AI systems that have that sort of common sense wired in. We need AI systems that do have some ability for introspection, so when given a task they could communicate to their partner ‘based on my training and my experience, you should have confidence in me that I could do this’ or ‘I’ve not encountered this situation before and I can’t … perform in the way you’d like me to in this situation.’ How can we train better and faster without the laborious handwork of having to label really large datasets? Wouldn’t it be nice if we didn’t have to come up with the training data of the universe to feed into AI systems?
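The panda-to-gibbon failure Browning describes is the canonical adversarial example from the machine learning literature, produced with a technique known as the fast gradient sign method. As a rough illustration only (this is not DARPA’s code; the model, epsilon value and PyTorch usage here are illustrative assumptions), the attack fits in a few lines:

    # Illustrative sketch of the fast gradient sign method (FGSM),
    # the kind of pixel-level perturbation Browning describes.
    # Assumes PyTorch and torchvision are installed; the model and
    # epsilon are arbitrary choices for demonstration.
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.eval()

    def fgsm_attack(image, true_label, epsilon=0.007):
        # `image` is a preprocessed 1x3xHxW float tensor;
        # `true_label` is a shape-(1,) long tensor.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        # Nudge every pixel a tiny step in whichever direction
        # increases the classifier's loss; to a human, the result
        # is indistinguishable from the original picture.
        return (image + epsilon * image.grad.sign()).detach()

In the published demonstration this example comes from, a perturbation of roughly that size was enough to turn a confidently classified panda into a confidently classified gibbon while leaving the image visually unchanged, which is exactly the brittleness Browning says common-sense AI would need to overcome.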
Nextgov: Looking at the AI Exploration program, what are the benefits to doing that kind of quick, short-term funding?
Browning: We have limited resources, and sometimes we find an area that we think may be ripe for investment, but there are some key questions that need to be answered. We really don’t want to scale up a very large [program], but we want to get some answers pretty quickly. The very act of trying to bring in a new performer through the sort of conventional acquisition cycle can be very long and tedious. [For this program], the time from posting a topic announcement to actually getting people doing work is 90 days or less, and that’s fairly unprecedented in government contracting. AI Exploration allows us to go after some of the more high-risk, uncertain spaces quickly to find out whether they’re on the critical path toward reaching our ultimate vision.
Nextgov: Are there any paths right now that look more promising than others?
Browning: The physics of AI, which was one of the first ones to get started. I think we’ll know soon whether there are some real clear applications. I would say within months, not years.
Nextgov: What are your thoughts on the White House’s National Artificial Intelligence Strategy? One of the big criticisms is that the plan doesn’t provide much specific guidance or funding.
Browning: I think that the right things are being prioritized. Innovation: we have to invest, we have to be a world leader. There are clear challenges in making sure we have the manpower and the human capital, that we’re applying the right STEM approaches, and that we’re protecting that technological edge while not stifling innovation. Those are the things that I think are important, and I saw all of them in there. It does mandate that all the agencies, as they develop their budgets, make this a priority, but I don’t think we know what that price tag is. So any attempt to say that this percentage of your budget or this top line [should go to AI], I don’t think we know that. It’s more being smart about asking the right questions and putting the right resources toward asking those questions.
Nextgov: Do you think the government right now is putting enough resources behind these efforts compared to what global competitors like China are investing?
Browning: That’s a hard comparison to make. Money can be well spent and it can be wasted. From the DARPA perspective, I think the $2 billion commitment over the five years is [appropriate]. The funding level that has been allocated is allowing us to roll out programs at an appropriate rate for the community to respond. DARPA has been very transparent about our goals for the AI Next campaign; I don’t know how what [China’s] doing compares.
Nextgov: One issue that’s come up a lot recently is this perceived tension between the Pentagon and tech companies like Microsoft and Google. What role should the tech industry play in the government’s AI development efforts?
Browning: I think DARPA can take credit for helping foster a very vibrant [contractor] ecosystem. We can respect the larger companies’ positions on whether they do or do not want to work with the [Defense Department]. Their role in all this, we don’t want to understate at all, but … we can proceed with innovating with the community we have engaged. As we’ve rolled out these new programs, there’s been no shortage of good ideas coming in.