PATRICK TUCKER
The prevailing "bigger-is-better" approach to artificial intelligence—ingest more training data, produce larger models, build bigger data centers—might be undermining the kind of research and development the U.S. military actually needs now and in the future.
That’s the argument in "Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI," a new paper that scrutinizes common assumptions driving AI research. Its authors demonstrate that the performance of larger models doesn’t necessarily justify the vastly increased resources needed to build and power them. They also argue that concentrating AI efforts in a relative handful of big tech companies adds geopolitical risks.
Broadly speaking, the Defense Department is pursuing AI along two tracks: large models that require enormous computational resources, and smaller, on-platform AI that can function disconnected from the internet. In some ways, the study validates the second approach. But, the authors note, future research in “small AI” could be limited due to the growing influence of large AI providers.