29 October 2024

The Science of AI Is Too Important to Be Left to the Scientists

Hadrien Pouget

California Gov. Gavin Newsom’s recent decision to veto SB-1047, a state bill that would have set a new global bar for regulating artificial intelligence risks, was closely watched by policymakers and companies around the world. The veto itself is a notable setback for the “AI safety” movement, but perhaps even more telling was Newsom’s explanation. He chided the bill as “not informed by an empirical trajectory analysis of AI systems and capabilities.” The words “empirical,” “evidence,” “science,” and “fact” appeared eight times in Newsom’s brief letter.

The lack of scientific consensus on AI’s risks and benefits has become a major stumbling block for regulation—not just at the state and national level, but internationally as well. Just as AI experts are at times vehemently divided on which risks most deserve attention, world leaders are struggling to find common ground. Washington and London are bracing for AI-powered biological, cyber, and information threats to emerge within the next few years. Yet their counterparts in Paris and Beijing are less convinced of the risks. If there is any hope of bridging these perspectives to achieve robust international coordination, we will need a credible and globally legitimate scientific assessment of AI and its impacts.
