William Hannas, Huey-Meei Chang, Maximilian Riesenhuber, and Daniel Chou
Introduction: Generative AI and General AI
Achieving general artificial intelligence, or GAI—defined as AI that replicates or exceeds most human cognitive skills across a wide range of tasks, such as image/video understanding, continual learning, planning, reasoning, skill transfer, and creativity1—is a key strategic goal of intense research efforts in both China and the United States.2 There is vigorous debate in the international scientific community over which path will lead to GAI most quickly and which paths may prove to be false starts. In the United States, large language models (LLMs) have dominated the discussion, yet questions remain about their ability to achieve GAI. Because choosing the wrong path could place the United States at a strategic disadvantage, it is all the more urgent to examine alternative approaches that other countries may be pursuing.
In the United States, many experts believe the transformative step to GAI will occur with the rollout of new versions of LLMs such as OpenAI’s o1, Google’s Gemini, Anthropic’s Claude, and Meta’s Llama.3 Others, pointing to persistent problems such as LLM hallucinations, argue that no amount of compute, feedback, or multimodal data will allow LLMs to achieve GAI.4 Still other AI scientists see roles for LLMs in GAI platforms, but not as the only, or even the main, component.