Michael Cody
Military doctrine now seeks to employ artificial intelligence as part of its operational arsenal, a shift framed largely as a question of speed, efficiency, and competitive necessity. The framing is familiar and, in some ways, understandable. Large institutions tend to adopt new technologies first as tools and only later as systems that require governance of their own. The problem is not that the Department of Defense is experimenting with generative models, but that this experimentation is already being normalized as routine infrastructure rather than treated as a fragile, high-risk intervention. Defense reporting indicates that the Department is already tracking more than one million unique users on its enterprise generative AI platform within months of launch, a scale that would have been unthinkable for experimental systems only a few years ago. That gap between the pace of adoption and the maturity of governance matters, because data control, operational language, and internal reasoning patterns are not abstract assets. They are the connective tissue of military power, and once they are externalized through probabilistic systems, they cannot simply be pulled back inside by policy assurances or procedural checklists.
It is important to be clear about what this concern is not. It is not an argument about model autonomy, emergent behavior, or speculative future intelligence. The dominant risk does not arise from what large language models might decide to do on their own; it arises from how humans use them once they become convenient. The Department of Defense has decades of experience managing classified information, enforcing compartmentalization, and responding to breaches, and there is no reason to believe that this institutional knowledge has vanished.