11 February 2025

Will the Paris artificial intelligence summit set a unified approach to AI governance—or just be another conference?

Mia Hoffmann, Mina Narayanan, Owen J. Daniels

Early next week, Paris will host the French-organized Artificial Intelligence Action Summit, yet another global convening focused on harnessing the power of AI for a beneficial future. One of the summit's key themes is devising structures to employ AI for good, with the primary aim being "to clarify and design a shared and effective governance framework with all relevant actors."

While the summit's intent is admirable, this goal has been attempted numerous times with limited success, given the challenges of getting nations with different priorities on the same AI page. It also ties into a broader concern for artificial intelligence in 2025, namely, how (and even whether) governments and the companies creating AI will approach developing and controlling powerful new AI systems in a responsible way. In the past few weeks alone, China's DeepSeek R1, a model approaching OpenAI's o1 performance at a reportedly much lower cost, hit the market; President Trump announced the OpenAI-SoftBank-Oracle Stargate Initiative, a $500 billion plan to build data center and computing infrastructure in the United States; and his administration quickly rescinded the Biden administration's executive order focused on AI safety and testing standards.

New models are arriving on the scene, and massive business interests hope to drive AI advancement forward, full steam ahead. Safety has largely been given lip service, if even that.
