BY AARON BOYD

Work made a hard distinction between general AI and more narrow forms—what he preferred to call machine intelligence and algorithmic warfare. “Improved autonomy is going to be the natural result of machine intelligence. We always use artificial intelligence, but I think machine intelligence is more accurate,” he said. “It’s the programmed ability to process information and, this is key, to make decisions as well as or better than human beings, and faster.”
He offered the example of a smart missile able to assess a situation and choose the best target based on preset parameters. In the example, the missile spots an enemy tank formation, determines which is the command tank, and picks the most lethal form of attack to incapacitate the enemy.
“That is a good thing. It’s a totally autonomous weapon and it will do what it’s asked to do,” Work said. “It won’t say, ‘Hey, I woke up this morning and I decided I want to shoot down an airplane.’ A general AI system sets its own goals and can change them. No one in the Department of Defense is saying we ought to go toward those type of weapons.”
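To make the contrast concrete, decision-making from preset parameters might look something like the following rule-based sketch. This is a hypothetical illustration only, not the logic of any actual weapon system; every name, field, and weight here is invented.

```python
# Hypothetical illustration: rule-based target selection over preset parameters.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Contact:
    contact_id: str
    is_command_vehicle: bool   # e.g., inferred from signature classification
    threat_score: float        # 0.0 to 1.0, from onboard sensing

# Preset parameters: fixed by human designers before the mission.
COMMAND_VEHICLE_WEIGHT = 2.0
MIN_THREAT_TO_ENGAGE = 0.5

def select_target(contacts: list[Contact]) -> Optional[Contact]:
    """Rank contacts by fixed, human-chosen criteria."""
    eligible = [c for c in contacts if c.threat_score >= MIN_THREAT_TO_ENGAGE]
    if not eligible:
        return None  # nothing meets the preset criteria; take no action
    return max(
        eligible,
        key=lambda c: c.threat_score
        + (COMMAND_VEHICLE_WEIGHT if c.is_command_vehicle else 0.0),
    )
```

The point of the contrast Work draws is that every criterion above is fixed in advance: such a system can decline to act, but it cannot decide to pursue a different objective, which is what he means by distinguishing machine intelligence from general AI.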
Work referenced the department’s Project Maven, a high-profile Pentagon program that made headlines when Google employees petitioned their company to withdraw from the project for ethical reasons. The former deputy secretary noted that the program’s actual name inside the Pentagon is the Algorithmic Warfare Cross-Functional Team, and that its focus is on merging human and machine intelligence in a way that improves human decision-making.
“Human-machine collaboration is using machines to make better human decisions,” he said. “The human is always in front in terms of DOD thinking.”