Michael Hirsh
A little over a year ago, during a war game conducted to test artificial intelligence against human reasoning in an imagined conflict between the United States and China, a funny thing happened. The team guided by AI, powered by OpenAI's GPT-4, proved to be more prudent, even wise, in its advice about how to handle the crisis than the human team did.
“It identified responses the humans didn’t see, and it didn’t go crazy,” Jamil Jaffer, director of the project at George Mason University’s National Security Institute, told me. Or, as Jaffer’s report concluded: “Humans consistently sought to raise the stakes and signal a willingness to confront China directly while AI played defensively and sought to limit the scope [and] nature of potential confrontation.”