Karson Elmgren
Given the transnational risks posed by AI, the safety of AI systems, wherever they are developed and deployed, is of concern to the United States. Since China develops and deploys some of the world's most advanced AI systems, engagement with this U.S. competitor is especially important.
The U.S. AI Safety Institute (AISI)—a new government body dedicated to promoting the science and technology of AI safety—is pursuing a strategy that includes the creation of a global network of similar institutions to ensure AI safety best practices are “globally adopted to the greatest extent possible.”
Just as the United States cooperated with the Soviet Union during the Cold War on permissive action links (PALs), a technology for ensuring control over nuclear weapons, it may again wish to make its competitors safer in order to assure its own safety. The PALs case also shows how a track record of engagement between subject matter experts can be critical to enabling later cooperation. However, as with PALs, care must be taken to ensure that in helping make Chinese AI safer, the United States does not also help advance Chinese AI capabilities. For this reason, the safer bet may be to avoid cooperation on technical matters and focus instead on topics such as risk management protocols or incident reporting.