Huw Roberts, Emmie Hine, Mariarosaria Taddeo, Luciano Floridi
Late 2022 and early 2023 saw the commercialization of powerful new artificial intelligence (AI) technologies such as OpenAI's ChatGPT. These systems have numerous benefits, including improving business efficiency and enhancing consumer experiences, but they also pose significant risks. They threaten national security by democratizing capabilities that could be used by malicious actors; facilitate unequal economic outcomes by concentrating market power in the hands of a few companies and countries while displacing jobs in others; and produce societally undesirable conditions through extractive data practices, the reinforcement of biased narratives, and environmentally harmful compute requirements.
These risks transcend national borders and have reinvigorated calls for stronger global AI governance, understood here as the process through which diverse interests that transcend borders are accommodated, without a single sovereign authority, so that cooperative action may be taken in maximizing the benefits and mitigating the risks of AI.2 The United Nations Secretary-General António Guterres, British Prime Minister Rishi Sunak and OpenAI CEO Sam Altman have all argued for the creation of a new international AI body modelled on existing institutions like the Intergovernmental Panel on Climate Change (IPCC) and the International Atomic Energy Agency (IAEA). A new-found emphasis on global AI governance is promising, but this type of ambitious governance proposal is generally misaligned with current geopolitical and institutional realities, raising questions over desirability and feasibility.