9 January 2020

The Department of Defense Posture for Artificial Intelligence

by Danielle C. Tarraf

What is the state of AI relevant to the DoD?

What is the DoD’s current posture in AI?

What internal actions, external engagements, and potential legislative or regulatory actions might enhance the DoD’s posture in AI?

The 2019 National Defense Authorization Act mandated a study on artificial intelligence (AI) topics. In this report, RAND Corporation researchers assess the state of AI relevant to the U.S. Department of Defense (DoD) and address common misconceptions about AI; carry out an independent, introspective assessment of DoD's posture for AI; and offer recommendations for internal actions, external engagements, and potential legislative or regulatory actions to enhance that posture.

Key Findings


The state of AI and its implications for DoD

Many technologies underpin AI. Recent significant technological advances have been primarily in supervised machine learning, specifically deep learning. Success in deep learning is predicated on the availability of large labeled data sets and significant computing power to train the models. The current crop of approaches is fragile and artisanal, and it is optimized for commercial rather than DoD uses.

The current state of AI verification, validation, test, and evaluation (VVT&E) is nowhere close to ensuring the performance and safety of AI applications, particularly where safety-critical systems are concerned. Although this is not a problem unique to DoD, it is one that significantly affects DoD.

It is important for DoD to maintain realistic expectations for both performance and timelines in going from demonstrations of the art of the possible to deployments at scale. As a rule of thumb, investments made today can be expected to yield at-scale deployment in the near term for enterprise AI, in the mid term for most mission-support AI, and in the long term for most operational AI.

DoD's posture in AI is significantly challenged across all dimensions of the authors' assessment

DoD lacks baselines and metrics in conjunction with its AI vision.

The Joint Artificial Intelligence Center (JAIC) lacks the visibility and authorities to carry out its present role. It also lacks a five-year strategic road map and a precise objective that would allow it to formulate one.

The lack of longer-term budget commitments might hinder the JAIC's success.

The service AI annexes lack baselines and metrics. When mandates and authorities of the service AI organizations exist, they appear to be limited.

Communication channels between the builders and users of AI within DoD are sparse.

Innovation within DoD might be hampered by current practices and processes or their implementation. Current practices and processes also inhibit the ability to bring in external innovation.

Data are not collected and stored at every opportunity, and access to existing data is limited; the data that exist are not always understandable or traceable. Lack of interoperability in systems across DoD also creates challenges. There is ambiguity in data ownership where external vendors are involved.

DoD lacks clear mechanisms for defining and tracking AI talent, and it struggles to grow and cultivate AI talent.

Recommendations

DoD should adapt AI governance structures that align authorities and resources with its mission of scaling AI.

The JAIC and each centralized service AI organization should develop a five-year strategic road map to scale AI.

The JAIC, working with DoD partners, should carry out annual or biannual portfolio reviews of DoD-wide investments in AI.

The JAIC should organize a technical workshop showcasing AI programs DoD-wide.

DoD should advance the science and practice of VVT&E of AI systems, working in close partnership with industry and academia.

All funded AI efforts should include a budget for AI VVT&E, including any critically needed testing infrastructure.

All agencies within DoD should create or strengthen mechanisms for connecting AI researchers, technology developers, and operators.

DoD should recognize data as critical resources, continue instituting practices for their collection and curation, and increase sharing while resolving issues of protecting data after sharing and during analysis and use.

The chief data officer should make a selection of DoD data sets available to the AI community to spur innovation and enhance external engagement with DoD.

DoD should embrace permeability, and an appropriate level of openness, as a means of enhancing DoD's access to AI talent.
