3 September 2024

China’s Views on AI Safety Are Changing—Quickly

Matt Sheehan

Over the past two years, China’s artificial intelligence (AI) ecosystem has undergone a significant shift in how it views and discusses AI safety. For many years, some of the leading AI scientists in Western countries have been warning that future AI systems could become powerful enough to pose catastrophic risks to humanity. Concern over these risks—often grouped under the umbrella term “AI safety”—has sparked new fields of technical research and led to the creation of governmental AI safety institutes in the United States, the United Kingdom, and elsewhere. But for most of the past five years, it was unclear whether these concerns about extreme risks were shared by Chinese scientists or policymakers.

Today, there is mounting evidence that China does indeed share these concerns. A growing number of research papers, public statements, and government documents suggest that China is treating AI safety as an increasingly urgent concern, one worthy of significant technical investment and potential regulatory interventions. Momentum around AI safety first began to build within China’s elite technical community, and it now appears to be gaining some traction in the country’s top policy circles. In a potentially significant move, the Chinese Communist Party (CCP) released a major policy document in July 2024 that included a call to create “oversight systems to ensure the safety of artificial intelligence.”
