13 September 2024

AI and the A-bomb: What the analogy captures and misses

Kevin Klyman, Raphael Piliero

When OpenAI released ChatGPT in the fall of 2022, generative AI went global, gaining one million users within days and 100 million within months. As the world began to grapple with AI’s significance, policymakers asked: Would artificial intelligence change the world or destroy it? Would AI democratize access to information, or would it be used to rapidly spread disinformation? In military hands, could it spawn “killer robots” that make wars easier to wage?

Technologists and bureaucrats scrambled to find ways to understand and forecast generative AI’s impact. What other revolutionary technological achievement combined the hope of human advancement with the lingering danger of massive societal destruction? The obvious analogue was nuclear weapons. Within months, some of the leading scientists in machine learning signed a letter declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Elon Musk went further, stating that AI poses a “significantly higher risk” than nuclear weapons. United Nations Secretary-General António Guterres proposed creating an equivalent of the International Atomic Energy Agency to promote the safe use of AI, while OpenAI CEO Sam Altman suggested a Nuclear Regulatory Commission for AI, akin to the agency that regulates the operation of nuclear power plants in the United States.
