7 July 2024

Advancing AI safety requires international collaboration. Here’s what should happen next.

Courtney Lang

Artificial Intelligence (AI) is advancing rapidly. So, too, is international collaboration to ensure that those advances are made safely and responsibly. In May, ten countries and the European Union (EU) met in South Korea and signed the “Seoul Statement of Intent toward International Cooperation on AI Safety Science,” which establishes an international network of AI safety institutes. This agreement builds on measures that several of its signatories have taken on AI safety since the Bletchley Park summit in November 2023. Since then, for example, the United Kingdom, the United States, Japan, and Singapore have established AI safety institutes, and the EU has set up an AI Office with a unit dedicated to safety.

The Statement of Intent also builds on existing bilateral agreements. At the EU-US Trade and Technology Council meeting held in April 2024, the EU and the United States announced that the EU AI Office and the US AI Safety Institute would work together to develop the tools needed to evaluate AI models. Additionally, ahead of the Seoul Summit, the US AI Safety Institute signed a memorandum of understanding with the United Kingdom’s AI Safety Institute, likewise aimed at building out a shared approach to AI safety, with an emphasis on developing testing and evaluation metrics.

The Statement of Intent signed at the Seoul Summit represents an important step forward in the AI safety conversation. It demonstrates growing international interest in, and commitment to, advancing the science needed to promote AI safety. For the agreement to be implemented successfully, however, the signatory countries will need to prioritize the most pressing scientific needs, deepen their engagement with international standards-setting bodies, and collaborate with stakeholders across the AI ecosystem.
