For almost 20 years, mission command has been a key component of command and control (C2) in the U.S. Army. However, with advances in artificial intelligence and the resulting use of autonomous and semiautonomous weapon systems in warfare, it is necessary to examine the extent to which these machines can operate within this construct.
Mission command, properly understood, empowers subordinate decisionmaking and decentralized execution appropriate to any given situation. It is meant solely for human-to-human C2. Like war itself, it is an inherently “human endeavor . . . not a mechanical process that can be precisely controlled by machines [or] calculations.” Systems that rely on machine algorithms for their decisionmaking are at direct variance with the emotive and moral dimensions of human cognition. Humans experience love, fear, camaraderie and hate—machines do not. Nor do they understand honor, integrity or self-sacrifice. Faced with this conflict, how can the deployment of machines work in concert with the Army’s C2?