Margarita Konaev, Tina Huang, Husanjot Chahal
As the U.S. military integrates artificial intelligence into its systems and missions, there are outstanding questions about the role of trust in human-machine teams. This report examines the drivers and effects of such trust, assesses the risks from too much or too little trust in intelligent technologies, reviews efforts to build trustworthy AI systems, and offers future directions for research on trust relevant to the U.S. military.
The Department of Defense wants to harness AI-enabled tools and systems to support and protect U.S. servicemembers, defend U.S. allies, and improve the affordability, effectiveness, and speed of U.S. military operations.1 Ultimately, all AI systems that are being developed to complement and augment human intelligence and capabilities will have an element of human-AI interaction.2 The U.S. military’s vision for human-machine teaming, however, entails using intelligent machines not only as tools that facilitate human action but as trusted partners to human operators.
By pairing humans with machines, the U.S. military aims to both mitigate the risks from unchecked machine autonomy and capitalize on inherent human strengths such as contextualized judgment and creative problem solving.3 There are, however, open questions about human trust and intelligent technologies in high-risk settings: What drives trust in human-machine teams? What are the risks from breakdowns in trust between humans and machines, or, alternatively, from uncritical and excessive trust? And how should AI systems be designed to ensure that humans can rely on them, especially in safety-critical situations?
This issue brief summarizes different perspectives on the role of trust in human-machine teams, analyzes efforts and challenges to building trustworthy AI systems, and assesses trends and gaps in relevant U.S. military research. Trust is a complex and multi-dimensional concept, but in essence, it refers to the human’s confidence in the reliability of the system’s conclusions and its ability to accomplish defined tasks and goals. Research on trust in technology cuts across many fields and academic disciplines. But for the defense research community, understanding the nature and effects of trust in human-machine teams is necessary for ensuring that the autonomous and AI-enabled systems the U.S. military develops are used in a safe, secure, effective, and ethical way.
While the outstanding questions regarding trust apply to a broad set of AI technologies, we pay particularly close attention to machine learning systems, which are capable not only of detecting patterns but also of learning and making predictions from data without being explicitly programmed to do so.4 Over the past two decades, advances in ML have vastly expanded the realm of what is possible in human-machine teaming. But the increasing complexity and unique vulnerabilities of ML systems, as well as their ability to learn and adapt to changing environments, also raise new concerns about ensuring appropriate trust in human-machine teams.
With that in mind, our key takeaways are:
Human trust in technology is an attitude shaped by a confluence of rational and emotional factors, demographic attributes and personality traits, past experiences, and the situation at hand. Different organizational, political, and social systems and cultures also impact how people interact with technology, including their trust and reliance on intelligent systems.
That said, trust is a complex, multi-dimensional concept that can be abstract, subjective, and difficult to measure.
Much of the research on human-machine trust examines human interactions with automated systems or more traditional expert systems; there is notably less work on trust in autonomous systems and/or AI.
Defense research has focused less on studying trust in human-machine teams directly and more on technological solutions that “build trust into the system” by enhancing system functions and features like transparency, explainability, auditability, reliability, robustness, and responsiveness.
Such technological advances are necessary, but not sufficient, for the development and proper calibration of trust in human-machine teams.
Systems engineering solutions should be complemented by research on human attitudes toward technology, accounting for the differences in people’s perceptions and experiences, as well as the dynamic and changing environments where human-machine teams may be employed.
To advance the U.S. military vision of using intelligent machines as trusted partners to human operators, future research directions should continue and expand on:
Research and experimentation under operational conditions,
Collaborative research with allied countries,
Research on trust and various aspects of transparency,
Research on the intersection of explainability and reliability,
Research on trust and cognitive workloads,
Research on trust and uncertainty, and
Research on trust, reliability, and robustness.
Human-machine teaming is, most basically, a relationship. And as with any other relationship, understanding human-machine teaming requires us to pay attention to three sets of factors—those focused on the human, the machine, and the interactions—all of which are inherently intertwined, affecting each other and shaping trust. For the defense research community, insights from research on human attitudes toward technology and the interactions and interdependencies between humans and technology can strengthen and refine systems engineering approaches to building trustworthy AI systems. Ultimately, human-machine teaming is key to realizing the full promise of AI for strengthening U.S. military capabilities and furthering America’s strategic objectives. But the key to effective human-machine teaming is a comprehensive and holistic understanding of trust.