Huey-Meei Chang
In late 2020, China established the Beijing Institute for General Artificial Intelligence, a state-backed institution dedicated to building software that emulates or surpasses human cognition in many or all of its aspects. Open source materials now available provide insight into BIGAI’s goals, scope, organization, methodology, and staffing. The project formalizes a trend evident in Chinese AI development toward broadly capable (general) AI.
Executive Summary
China’s Beijing Institute for General Artificial Intelligence (BIGAI), established with state backing in 2020, aims openly at artificial general intelligence (AGI) and is assembling the talent and organizational means to pursue it.
The project’s core is an elite team of Chinese- and U.S.-educated scientists managed by former University of California, Los Angeles (UCLA) researcher Zhu Songchun, whose work in precursor disciplines, professional network, and openness to methodological alternatives lend credibility to the project.
The present study, an introduction to BIGAI’s goals, staffing, and research, situates this AGI project in the context of China’s overall work toward advanced artificial intelligence.

By choice, BIGAI is not pursuing the massive natural language models championed by OpenAI, Google, and other U.S. and British companies. It focuses instead on developing AGI through an alternative “small data, big task” approach grounded in research on brain cognition and neuroscience.

The organization has recruited some 30 top scientists educated at leading U.S. and UK research universities, several of whom were trained under U.S. government programs.

Zhu has described his AGI project to high-level state bodies as being on a par with China’s storied “two bombs, one satellite” programs in terms of its importance.
Given the strategic impact of a successful Chinese AGI program, this study encourages U.S. and allied government policymakers to pay closer attention to China’s AI development, through open sources especially, as a foundation for practical engagement.