Eric Velte and Aaron Dant
The use of artificial intelligence in combat poses a thorny ethical dilemma for Pentagon leaders. The conventional wisdom is that they must choose between two equally bad alternatives: enforce full human supervision of AI systems at the cost of speed and accuracy, or allow AI to operate with no supervision at all.
In the first option, our military builds and deploys “human-in-the-loop” AI systems. These systems adhere to ethical standards and the laws of war but are limited by the abilities of the humans who supervise them. It is widely believed that such systems are doomed to be slower than the unsupervised, “unethical” systems used by our adversaries. Those unethical autonomous systems appear to boast a competitive edge that, left unchallenged, could erode Western strategic advantage.
The second option is to sacrifice human oversight entirely for machine speed, which risks unethical and undesirable AI behavior on the battlefield.
Neither of these options is sufficient; we need to embrace a new approach. Much as cybersecurity gave rise to the cyber warrior, AI requires a new role: that of the “AI operator.”
With this approach, the objective is to establish a synergistic relationship between military personnel and AI without compromising the ethical principles that underpin our national identity.
We need to strike a balance between maintaining the human oversight that informs our ethical framework and adopting the agility and response time of automated systems. To achieve this, we must foster a richer level of human interaction with AI models than a simple stop/go switch, as sketched below. We can navigate this complex duality by embedding the innate human advantages of diversity, contextualization, and social interaction into the governance and behavior of intelligent combat systems.
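To make “richer than stop/go” concrete, here is a minimal sketch of graduated operator directives. Every name, threshold, and rule in it is a hypothetical illustration, not any fielded system or doctrine:

```python
# Hypothetical sketch: operator interaction richer than a stop/go switch.
# The operator policy can demand a rationale, tighten constraints, or hand
# control back to a human rather than merely halting the system.
from dataclasses import dataclass
from enum import Enum, auto

class Directive(Enum):
    PROCEED = auto()              # stop/go is still available...
    HALT = auto()
    TIGHTEN_CONSTRAINTS = auto()  # ...but so are graduated controls
    REQUEST_RATIONALE = auto()
    DEFER_TO_HUMAN = auto()

@dataclass
class EngagementContext:
    confidence: float         # the model's self-reported confidence
    civilians_possible: bool  # contextual flag from sensors or intel

def operator_policy(ctx: EngagementContext) -> Directive:
    """Toy policy: graduated responses instead of a binary kill switch."""
    if ctx.civilians_possible:
        return Directive.DEFER_TO_HUMAN
    if ctx.confidence < 0.6:
        return Directive.REQUEST_RATIONALE
    if ctx.confidence < 0.8:
        return Directive.TIGHTEN_CONSTRAINTS
    return Directive.PROCEED

print(operator_policy(EngagementContext(confidence=0.55, civilians_possible=False)))
# -> Directive.REQUEST_RATIONALE
```

The point is not the specific rules but the interface: an operator who can interrogate and constrain a system mid-mission retains far more meaningful oversight than one limited to an on/off switch.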
What we can learn from ancient war elephants
Remarkably, a historical precedent parallels the challenge we face in integrating AI and human decision-making. For thousands of years, “war elephants” were used in combat and logistics across Asia, North Africa, and Europe. These highly intelligent creatures required specialized training and a dedicated operator, or “mahout,” to ensure they remained under control in battle.
War elephants and their mahouts provide a potent example of a complementary relationship. Much as we now seek to direct the speed and accuracy of AI on the battlefield, humans were once tasked with harnessing the power and prowess of war elephants, guiding their actions and minimizing the risk of unpredictable behavior.
Taking inspiration from that historical relationship, we can develop a similarly balanced partnership between military personnel and AI. By enabling AI to complement, rather than replace, human input, we can preserve the ethical considerations central to our national values while still benefiting from the technological advancements that autonomous systems offer.
Operators as masters of AI
The introduction and integration of AI on the battlefield presents a unique challenge, as many military personnel do not possess intimate knowledge of the development process behind AI models. These systems are often correct, and as a result, users tend to rely too heavily on their capabilities, oblivious to errors when they occur. This phenomenon is known as the “automation conundrum”: the better a system is, the more likely the user is to trust it when it is wrong, even obviously so.
To bridge the gap between military users and the AIs upon which they depend, there needs to be a modern mahout, or AI operator. This specialized new role would emulate the mahouts who raised war elephants: overseeing their training, nurturing, and eventual deployment on the battlefield. By fostering an intimate bond with these intelligent creatures, mahouts gained invaluable insight into the behavior and limitations of their elephants, leveraging this knowledge to ensure tactical success and long-term cooperation.
AI operators would take on the responsibilities of mahouts for AI systems, guiding their development, training, and testing to optimize combat advantages while upholding the highest ethical standards. With a deep understanding of the AI for which they are responsible, these operators would serve as liaisons between advanced technology and the warfighters who depend on it.
Diverse trainers, models can overcome risk of system bias
Just as war elephants and humans possess their own strengths, weaknesses, biases, and specialized abilities, so do AI models. Yet, due to the cost of building and training AI models from scratch, the national security community has often opted to tweak and customize existing “foundation” models to accommodate new use cases. While this approach may seem logical on the surface, it amplifies risk: the exploitable data, gaps, and biases of a shared foundation model propagate into every system built on top of it.
A better approach is to diversify: have different teams create AI models, each utilizing unique data sets and diverse training environments. Such a shift would not only distribute the risk of ethical gaps associated with any individual model but also give AI operators a broader array of options, tailored to meet changing mission needs. By adopting this more nuanced approach, AI operators can ensure AI’s ethical and strategic application in warfare, ultimately strengthening national security and reducing risk. The sketch below illustrates one way such diversity could pay off operationally.
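As an illustration only (the models, labels, and threshold are hypothetical stand-ins, not any real program), a few lines of Python show how an operator’s tooling might poll independently trained models and escalate to a human when they disagree, rather than trusting a single shared model:

```python
# Hypothetical sketch: poll independently trained models and escalate
# disagreement to the human operator instead of trusting any single model.
from collections import Counter

def operator_review(models, observation, agreement_threshold=0.75):
    """Return (label, note); label is None when the vote is too split."""
    votes = [model(observation) for model in models]
    label, count = Counter(votes).most_common(1)[0]
    agreement = count / len(votes)
    if agreement < agreement_threshold:
        return None, f"escalate to operator: votes split {dict(Counter(votes))}"
    return label, f"consensus at {agreement:.0%}"

# Toy predictors standing in for models built by different teams on
# different data sets; real systems would wrap distinct architectures.
models = [
    lambda obs: "vehicle" if obs["ir"] > 0.6 else "clutter",
    lambda obs: "vehicle" if obs["radar"] > 0.5 else "clutter",
    lambda obs: "vehicle" if obs["ir"] + obs["radar"] > 1.0 else "clutter",
]

print(operator_review(models, {"ir": 0.7, "radar": 0.4}))
# -> (None, "escalate to operator: votes split {'vehicle': 2, 'clutter': 1}")
```

A single shared model would have returned one confident answer here; diverse models surface the ambiguity and hand the judgment call back to the human.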
Mahouts did not train their war elephants with the intention of sending these magnificent creatures into battle alone. Rather, they cultivated a deep symbiotic relationship, combining the strengths of human and animal through cooperation to achieve better outcomes than either could alone. Today’s AI operators can learn from this historical precedent, striving to create a similar partnership between humans and AI in the context of modern warfare.
By nurturing the synergy between human operators and AI systems, we can transform our commitment to ethical values from a perceived limitation into a strategic advantage. This approach embraces the fundamental unpredictability and confusion of the battlefield by leveraging the combined strength of human judgment and AI capabilities. Furthermore, the potential for this collaborative method extends beyond the battlefield, hinting at additional applications where ethical considerations and adaptability are essential.
Eric Velte is Chief Technology Officer, ASRC Federal, the government services subsidiary of Arctic Slope Regional Corp., and Aaron Dant is Chief Data Scientist, ASRC Federal Mission Solutions.