By SYDNEY J. FREEDBERG JR.

One week after Dunford’s visit, the Army War College convened two dozen officers and civilian experts to take on that challenge: How do you demonstrate the potential value of a military AI before you actually build it? (The conference was scheduled long before Dunford’s visit, but his words were very much on participants’ minds). The immediate objective: come up with ways to mimic the effects of an AI so the school’s in-house game designers could turn it into either a computer simulation or a table-top exercise within 10 months — without new money. The hope is that the 2020 game, in turn, will intrigue Army leadership enough that they’ll support a larger, longer-term AI effort.
This squid’s thought process is less alien to you than an artificial intelligence would be.

The most famous case is the Naval War College in the 1920s and 1930s, which ran more than 100 games exploring a possible war with Japan, often with officers moving miniature ships around a tiled floor used as a giant gameboard. Admiral Nimitz famously said that these games explored so many different technologies and strategies that “nothing that happened during the [actual] war was a surprise… except the kamikaze tactics.”
But wargaming Artificial Intelligence is a much harder problem. If you’re trying to figure out how a new weapon might be used, the way the Naval War College did with the aircraft carrier in the 1930s, you can add new pieces to your board and new options to your rules: How does the game change if, for example, you let some of your ships launch aircraft x miles to bomb targets with y percent chance of destroying them?
But AI is not a physical weapon. AI is a machine that thinks. It’s not too hard to change your wargame’s rules to simulate planes that can fly farther, ships that sail faster, tanks that are tougher to kill or satellites that transmit messages faster. But how do you change your game to simulate one side getting smarter?
Harder still: How do human game designers and human players simulate the military decision-making of a non-human artificial intelligence, when AI’s crucial advantage is that it can think of strategies no human ever could?
Reenacting the Battle of Jutland on the famed tiled floor of the Naval War College
Game Changers
The Army War College doesn’t have the time or money to build a superhumanly intelligent AI to play its wargames. Making such a mega-brain is probably years away for anyone. Even high-priority Pentagon AI programs are focusing, for now, on improving technical functions like maintenance, logistics, cybersecurity, electronic warfare, and missile defense — not building robot strategists.
You can’t eliminate this problem, but there are two ways to reduce it. One is to simulate how future super-smart AIs will handle the staggering complexities of the real world by testing how today’s relatively limited AIs handle the limited complexities of a simplified model. The other is to give human players some kind of advantage that helps them out-think their opposition, simulating how warfare changes when one side consistently thinks faster and better.
It’s remarkable how many ways the Army War College conference came up with to simulate AI without using any new technology at all. (I participated as an invited expert — with the Army covering my expenses — so some of these ideas are partially mine).
US and allied officers play a tabletop wargame at the Command & General Staff College, Fort Leavenworth, KS

What if both sides aren’t seeing the same game board? That’s a common feature both of consumer strategy games, in which each player only sees the parts of the map around their units, and of formal War College exercises, in which each side is in a different room and gets all its information from a neutral umpire. This allows you to simulate AI by giving one side more or better information.
For example, one high-priority application for AI is rapidly collecting, analyzing, and disseminating vast amounts of data — say, video from surveillance drones — that would take human intelligence analysts much longer to plow through. You could simulate that capability by giving one side information faster than the other. Perhaps the umpire tells the humans playing the “AI-enabled” side where the pieces are on the board right now, but it only tells the non-AI team where the pieces were last turn, or two turns ago.
You can use a similar kind of lag to simulate how AI might automate a lot of routine staff work. It’s an arduous process to turn a commander’s scheme of maneuver into detailed timetables of what unit takes what route when, what air and artillery support they’ll have on call, when they’ll meet up with supply convoys carrying ammunition, food, or fuel, et cetera ad nauseam. It’s also the kind of work that humans’ hunter-gatherer brains aren’t evolved to do, but which computers are great at. So maybe the side simulating the AI-enabled headquarters can move its pieces on the board right now, but when the non-AI side issues orders, it takes a turn or two before its pieces actually carry them out.
Or you could combine both approaches. The side simulating AI gets to see the board and move its pieces then and there, but the non-AI side only knows where its pieces were last turn and can only issue orders for what they’ll do next turn.
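The two mechanics above, a lagged view of the board and a delay before orders take effect, are simple enough to sketch in code. The following is a minimal illustration, not anything the War College actually built; the class and method names (`Side`, `observe`, `due_orders`) are hypothetical:

```python
from collections import deque

class Side:
    """One team in the exercise. The AI-enabled side gets lag 0;
    the conventional side gets one or more turns of delay."""

    def __init__(self, info_lag, order_lag):
        self.info_lag = info_lag    # turns of delay before seeing the board
        self.order_lag = order_lag  # turns before issued orders execute
        self._snapshots = deque()   # recent board states shown to this side
        self._pending = deque()     # orders still in transit

    def observe(self, board_state):
        """Record the true state; return the (possibly stale) view
        this side is allowed to see. The lag builds up after the
        first few turns, once enough snapshots have accumulated."""
        self._snapshots.append(board_state)
        if len(self._snapshots) > self.info_lag + 1:
            self._snapshots.popleft()
        return self._snapshots[0]   # oldest retained snapshot = lagged view

    def issue(self, order):
        """Queue an order; it takes effect after order_lag turns."""
        self._pending.append((self.order_lag, order))

    def due_orders(self):
        """Advance one turn; return the orders that execute now."""
        ready = [o for lag, o in self._pending if lag == 0]
        self._pending = deque(
            (lag - 1, o) for lag, o in self._pending if lag > 0
        )
        return ready
```

With `Side(info_lag=0, order_lag=0)` for the AI-enabled team and `Side(info_lag=1, order_lag=1)` for its opponent, the umpire feeds both sides the same true board each turn, but one side sees it a turn late and moves a turn late.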
The Army also envisions AI being able to transmit intelligence and orders rapidly over secure wide-area networks, letting dispersed command posts work closely together despite the distance. You could simulate this by putting all the players on the AI-enabled team in the same room, while splitting the other team up. The division commander and his staff are in one room, his brigade commanders are each in their own rooms, and any communication has to go through the umpire. Now combine this with lag. All the AI players can communicate freely face-to-face, but if the non-AI commander tells his brigades to do something, they won’t actually receive that order until the next turn.
Lockheed Martin’s third Multi-Domain Command & Control (MDC2) wargame

Once you put your wargame on a computer, you can also start replacing human players with real AI. Sure, it won’t be as sophisticated as the future AI strategist you’re emulating, but the simulated world it’s playing in isn’t as complicated as the real world, either. And there are a lot of AIs available already you can repurpose to play your particular game. “There’s no need to reinvent game-solving AIs,” said one participant. “That’s already been done — and productized.”
Those AIs are also getting better all the time. If you write your software right — specifically, the Application Programming Interface (API) — then you can plug in new and smarter AIs as they become available. Even in the long run, when you finally develop an AI strategist that could plan an actual battle, you’ll really want to test it in simulation before you stake human lives on its performance.
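The point about the API is that the game engine should depend only on a stable interface, so a smarter engine can be dropped in later without rewriting the game. A minimal sketch of that idea, with hypothetical names (`Player`, `choose_move`, `RandomBaseline`) standing in for whatever a real project would define:

```python
from abc import ABC, abstractmethod
import random

class Player(ABC):
    """The only contract the wargame knows about. A human UI, a
    repurposed off-the-shelf game AI, or a future strategist all
    plug in behind this same interface."""

    @abstractmethod
    def choose_move(self, visible_state, legal_moves):
        """Pick one of the legal moves given this side's view."""

class RandomBaseline(Player):
    """Weakest possible stand-in engine; swap in something
    stronger later without touching the game loop."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def choose_move(self, visible_state, legal_moves):
        return self.rng.choice(legal_moves)

def run_turn(visible_state, player, legal_moves):
    """The game loop talks only to the Player interface, so
    upgrading the AI never requires changing the simulation."""
    move = player.choose_move(visible_state, legal_moves)
    if move not in legal_moves:
        raise ValueError("engine returned an illegal move")
    return move
```

Testing a real AI strategist would then mean instantiating it behind the same `Player` interface and replaying the same scenarios the baseline engines played.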