Morgan Dwyer
The Defense Innovation Board (DIB) recently advised the Department of Defense (DOD) to adopt ethics principles for artificial intelligence (AI): that AI should be responsible, equitable, traceable, reliable, and governable. These principles aim to keep humans in the loop during AI development and operations (responsible); avoid unintended bias (equitable); maintain sufficient understanding of AI capabilities (traceable); ensure safety, security, and robustness (reliable); and avoid unintended harm or disruption (governable). Overall, these principles are sound. But as with all principles, implementation will be a challenge. This is especially true today because, if adopted, the DIB’s proposed principles will be implemented during a tumultuous time for defense technology.
Presumably, the DIB’s principles will require meticulous development and careful oversight. In recent years, though, DOD’s standard technological processes and oversight mechanisms have been reimagined. For example, to prioritize innovation and the speed with which DOD fields new capabilities, Congress restructured the department’s primary technology oversight office and delegated most acquisition decisions to the military services. Congress also created new acquisition pathways that enable rapid prototyping and fielding by forgoing traditional oversight processes.
The DIB itself also heralded many software-specific changes through its Software Acquisition and Practices (SWAP) Study. The SWAP Study, which preceded the DIB’s focus on AI, encouraged DOD to—among other things—adopt speed as a metric to be maximized for software development. But on AI software programs, there may be an inherent tension between the DIB’s proposed principles and speed. As DOD develops AI-enabled software, it will need to work through potential trade-offs and articulate a more detailed strategy for navigating the department’s objectives.
In particular, the SWAP Study suggests replacing traditional software development processes that separate development from operations with DevOps, which blends the two. It also recommends adopting agile management philosophies that forgo strict requirements in favor of lists of desired features. Further, it espouses the benefits of sharing development and testing infrastructure, granting authority-to-operate (ATO) reciprocity, and employing automated testing. Finally, the SWAP Study argues that, by changing how it implements software development and prioritizing speed, DOD will improve software security, since it will be able to find and fix vulnerabilities sooner. But how will speed interact with the DIB’s proposed AI principles?
Grappling with that question is where the DIB, DOD, and the broader defense community should focus their attention next. For example, should the principles be implemented as strict requirements or—per agile philosophy—as more flexible features? How should DOD ensure traceability while simultaneously sharing software infrastructure and ATOs? Furthermore, how can DOD enable traceability without encumbering its agile software programs with unnecessary documentation? With respect to responsibility, how much and what type of oversight should be used to ensure that AI software is safe, secure, and robust? How much of that oversight process should be delegated to the lowest levels of an organization or automated to enable speed? And more fundamentally, when and how should the DIB’s principles be incorporated into the DevOps cycle?
The defense community is right to want responsible, equitable, traceable, reliable, and governable AI software that is also developed and fielded quickly. But the above questions don’t have easy answers because—as with all systems—the challenge will be implementing all objectives at the same time. Systems engineers typically manage multiple objectives by making trade-offs that prioritize some objectives at the expense of others. The next step for the defense community, therefore, is to understand what these trade-offs look like for AI software, under what circumstances DOD is willing to make trades, and who in DOD’s oversight hierarchy is empowered to adjudicate trade-off decisions. To do this, DOD should leverage ongoing and planned AI projects to address the questions outlined above.
Working in collaboration with DOD, the broader research community should identify and address methodological shortcomings that unnecessarily force DOD to make trade-offs. Requirements definition, as well as testing, verification, and validation, currently require some level of certainty and predictability. As the DIB highlights, DOD needs to adapt current acquisition and testing processes for AI. It remains an open question, however, how the systems engineering methods that underlie these processes should evolve to address AI’s inherent uncertainty. Therefore, in addition to furthering the science of AI, researchers should tackle the common implementation challenges that will impede DOD’s ability to optimally operationalize and field AI-enabled systems.
Although future implementation challenges may be significant, the DIB has taken the right first step by proposing objectives for DOD. The next step—developing and implementing AI software that achieves all objectives—is a challenge that systems engineers have faced for decades. Going forward, the defense community must undertake the challenging work of understanding potential trade-offs, identifying strategies to balance competing objectives, and developing new methodologies that enable future AI software to optimally satisfy as many objectives as possible.