1 July 2024

4 Ways China Gets Around US AI Chip Restrictions

Che-Jen Wang

The recently concluded Computex 2024 in Taipei gathered the world’s most renowned computer manufacturers and featured an unprecedented number of chipmaker CEOs as keynote speakers. The themes of the exhibition were artificial intelligence (AI), green energy sustainability, and innovation, with particular emphasis on the arrival of the 3 nm process era in AI. The 3 nm GPU products introduced in the keynotes included Nvidia’s Rubin platform, Intel’s Lunar Lake, AMD’s MI350 series, and even ARM’s v9.2 architecture built on 3 nm.

In the AI field, the difference in computing power between 7 nm chips and those built on more advanced nodes lies largely in transistor count. Comparing Nvidia’s 7 nm A100 with the company’s 4 nm B200, the transistor count rises from 54.2 billion to 208 billion, nearly quadrupling. In half-precision floating point (FP16) computation, the B200 delivers 2,250 TFLOPS versus 312 TFLOPS for the A100, a more than seven-fold increase. Once the performance of peripheral components and the surrounding ecosystem is taken into account, the effective computing power of a 3 nm chipset exceeds even these multipliers.
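As a quick back-of-the-envelope check of those multipliers, the following minimal Python sketch recomputes the generational gains from the figures quoted above (the spec values are those cited in this article, not independently verified):

    # Rough check of the A100-to-B200 multipliers cited in the text.
    a100 = {"transistors_bn": 54.2, "fp16_tflops": 312}   # 7 nm
    b200 = {"transistors_bn": 208, "fp16_tflops": 2250}   # 4 nm

    transistor_gain = b200["transistors_bn"] / a100["transistors_bn"]
    fp16_gain = b200["fp16_tflops"] / a100["fp16_tflops"]

    print(f"Transistor count: {transistor_gain:.1f}x")  # ~3.8x, "nearly quadrupling"
    print(f"FP16 throughput:  {fp16_gain:.1f}x")        # ~7.2x, "more than seven-fold"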

The goals of the Biden administration’s technology policy – described as a “small yard, high fence” approach – are to impede, cripple, and delay China’s development of precisely this kind of advanced chip technology. By doing so, Washington seeks to halt China’s progress in AI and high-performance computing (HPC) capabilities and thereby buy time for the U.S. and its allies to expand their lead in cutting-edge technology. But so far, the measures have seen only limited success.
