3 April 2024

China Walks Perilous AI Tightrope

Emmie Hine

China has some of the world’s most advanced and toughest artificial intelligence regulations. It also accounts for 40% of global AI patent filings.

The apparent contradiction is influential, challenging the adage that “regulation stifles innovation.” European policymakers argue the opposite: that strong rules can spark innovation. US policymakers, while skeptical, fear China’s AI advances and are looking to rules to promote AI with “democratic values.”

Everyone should take a closer look. China’s supposed success in balancing innovation and regulation is shaky. Regulations are reactive. Enforcement is lax. AI companies are allowed to innovate but must be cautious. If they — or their products — challenge the Communist Party’s control, the government will step in.

The Party’s goal is to preserve its two “miracles” of economic development and social stability. Social stability requires information control, so China imposes strict Internet restrictions and extensive censorship. AI supports economic development and can facilitate social control through, for example, urban facial recognition systems. But as a dual-use, destabilizing technology, it also threatens social stability, especially information control. A dangerous tightrope walk is required to maintain both “miracles.”

China’s AI regulations are often characterized as proactive and stringent. Since 2021, Beijing has issued rules and draft measures on recommendation algorithms, synthetic content, generative AI, and facial recognition. It has so far avoided enacting a broad, horizontal regulation such as Europe’s AI Act, though it is considering such a move.

Yet Chinese regulations are more reactive than proactive. Consider the groundbreaking rules on recommendation algorithms. They were triggered by public outrage following an exposé on the plight of food delivery workers, whose algorithmically set delivery targets pressured them into risky traffic decisions. When ByteDance’s CEO maintained that its Toutiao news app was a neutral content provider with no responsibility to promote certain “values,” Beijing balked. It issued blunt disciplinary measures, “reined in algorithms used for information dissemination,” and required online platforms to promote “mainstream values.”

China’s deepfake regulation similarly responded to, not prevented, danger. In 2019, an AI face-swapped video went viral. Within two months, the government amended the civil code to protect image rights and committees began work on a “deep synthesis” regulation. ChatGPT’s launch delivered another jolt. Regulators, alarmed by the potentially destabilizing effects of generated text, soon released draft measures on generative AI. In response, Apple removed dozens of unlicensed generative AI apps from its Chinese app store.

It is often assumed that Chinese AI regulations are harshly enforced. In fact, enforcement is spotty. The licensing requirement for generative AI services has been unofficially eased, with providers now seemingly only required to file security assessments. This looks like a concession to the desire to spur AI innovation.

But the tacit understanding is that extensive Chinese AI regulation provides a “hammer” to nail down future threats to stability. When the Party launched its 2020-2021 “tech crackdown,” it wielded a previously unenforced 2008 antitrust law. China has assembled a large toolkit of AI measures both to promote innovation and to make clear who’s in control. The “tech crackdown” was not an anomaly; it exemplifies the Party’s regulatory style.

At the moment, China seems to be succeeding in its AI tightrope walk. Chinese large language models perform comparably to models from OpenAI and Anthropic while toeing the party line. If development later becomes disruptive, the Communist Party has an ever-growing stack of laws to bring AI companies back in line.

Where does this leave the EU? While regulation does not necessarily stifle innovation, it does not automatically create it, either. The EU needs to do more to support entrepreneurs, especially the SMEs and start-ups that cannot hire hundreds of staff to focus on compliance. But unlike Beijing, Brussels will need to enforce its forthcoming AI Act evenly and cannot fall back on extrajudicial actions.

Success will instead require a European-style tightrope walk. European regulators must provide proactive compliance support, facilitate start-up access to computational resources, and quickly release promised codes of conduct, all while ensuring that AI innovation supports the continent’s economy and its values.
