Micah Musser
During a hearing of the U.S. Senate Committee on Armed Services last month, Senator Mike Rounds mentioned that many prominent AI experts had just called for a six-month pause on “giant” AI experiments, largely in reaction to the announcement of GPT-4 (the current basis for ChatGPT). But Rounds had drawn a different conclusion.
“A greater risk is taking a pause while our near-peer competitors leap ahead of us in this field,” he said. “AI will be the determining factor in all future great power competition and I don’t believe that now is the time for the United States to take a break in developing our AI capabilities.”
Rounds isn’t the only person who has reached this conclusion. Ever since the meteoric rise of the new chatbot, this “AI race” frame has become increasingly common. And almost universally, China is seen as the United States’ lead competitor in the “race.”
But this narrative is wrong. It’s wrong not simply because China has little hope of leapfrogging the United States in the field of generative AI (though that’s true), but more importantly because China isn’t particularly interested in leapfrogging the United States to begin with.
Let’s start with the immediate reaction to ChatGPT. It is true that a number of Chinese companies rushed to deploy similar products – though their actual performance has been disappointing, and their use cases sharply restricted. But at the same time, the Chinese government rapidly issued warnings about excessive hype around the technology and initiated new regulations that make it far more legally fraught to deploy similar AI systems.
Even before ChatGPT was announced, the Biden administration was making moves that could constrain China’s ability to create similar models by restricting the export of high-end computing hardware to China. According to outside experts, part of the rationale for this policy was likely that cutting-edge AI methods – in particular, the field of language modeling, which includes models like ChatGPT – are heavily dependent on advanced computing hardware.
But China’s response to these controls has also been muted, which would seem to belie assumptions that it cares much about leading the pack in language modeling. In December, China floated plans for a major subsidy package to bolster its native semiconductor industry, only to back away a month later. In March, the government appeared to settle on a solution that would offer additional subsidies to a few companies, without pouring more money overall into the sector.
China’s output in language modeling has actually been half-hearted for some time now. The announcement of ChatGPT’s predecessor, GPT-3, sparked a worldwide flurry of activity in language modeling, including in China, where new announcements were often breathlessly covered in U.S. media. (Many of these 2021-era models still lack any validation and have almost certainly been seriously overhyped.) But a new, exhaustive compilation of China’s published language models shows that Chinese activity largely died down in 2022, even as it continued to accelerate in the United States.
Taken together, this evidence suggests that China doesn’t view large language models as the transformative technology of the century. The “AI race” frame, despite being ubiquitous, overlooks three major reasons why Chinese leadership is unlikely to view advances in language modeling with the same level of concern as U.S. policymakers.
First, although China has repeatedly emphasized its view that AI is a strategic technology, it has specialized in different subfields than the United States. Relative to U.S. researchers, China has focused much more heavily on applications of AI, subfields like computer vision, and AI approaches other than machine learning. In February, the CEO of Huawei expressly stated that the company would focus its AI efforts on industrial applications – not chatbots. The United States, by contrast, holds a larger relative advantage in natural language processing, which can prime U.S. analysts to view breakthroughs in language modeling as inherently more significant.
Second, language models have a tendency to make up facts. In the United States, this is treated as a kink in a new technology, one to be ironed out over time. But in China, sensitivities run higher regarding the unpredictable and politically fraught comments that language models might make, which has already provoked regulations and arrests. Even if language models are a strategically valuable technology, Chinese leadership will keep them at arm’s length so long as they threaten social stability.
Finally, China and the United States have spent the last half-century on very different economic trajectories. For decades, the percentage of U.S. GDP created by professional and business services has grown, while manufacturing has fallen. In that same time frame, China’s manufacturing overtook that of the United States, and its manufacturing sector still makes up more than twice the share of GDP that the United States’ does. For an economy reliant on professional services, where ChatGPT’s automation potential is highest, the technology could enable major productivity growth. But to a country that centers its economic strategy on manufacturing, ChatGPT may not look nearly as impressive.
These nuances matter, because assuming the existence of a race over language models can be destabilizing. Just as a “race to market” can cause companies to shirk important ethics and safety issues, the race to beat China can cause U.S. leadership to passively accept the rapid deployment of poorly understood – and potentially harmful – technologies. And, because language modeling is more dependent on advanced computing than other AI subfields, fixating on ChatGPT can cause policymakers to overestimate the importance of hardware-focused policies like last fall’s export controls.
In the worst case, leaning too heavily on this approach could undermine strategic partnerships and the domestic semiconductor industry, without undercutting China’s ability to innovate in less computationally intensive subfields. To avoid these outcomes, U.S. leaders who are excited (or fearful) about ChatGPT’s potential need to avoid projecting those emotions onto their Chinese counterparts.
And stop calling it a race.