Bob Ashley
The U.S. Defense Department is starting to get its reps in with AI.
In November last year, Deputy Secretary of Defense Kathleen Hicks released the department's AI Adoption Strategy. Eight months later, as part of its modernization efforts, the Air Force launched NIPRGPT, "an experimental bridge to leverage GenAI on the Non-classified Internet Protocol Router Network." Currently, the Army's Vantage program is "joining and enriching millions of data points" with AI/ML to "accelerate decisions on everything from personnel readiness to financial return on investment."
Thus far, the military’s embrace of this formidable new technology has been, for all its complexity and challenges, both measured and maturing. Safety has been a top focus, as Deputy Secretary Hicks underscored when releasing the strategy: “Safety is critical because unsafe systems are ineffective systems.”
The full promise of AI to empower organizations with greater efficiency, effectiveness, and understanding, and to enable faster decisions relative to our adversaries, will touch every process, from back-office functions to warfighting across all domains. We won't get it right at first. This will be an iterative process from which we'll have to learn as we go. So, in the spirit of relentless improvement, what are some of the foundational questions we should be asking about the application of AI/ML to warfighting?