Samantha Lai, Ben Nimmo, Derek Ruths, and Alicia Wanless
Introduction
In 2024’s so-called year of elections, fears abounded over how generative artificial intelligence (GenAI) would impact voting around the world.1 However, as with other game-changing technologies throughout history, the sociopolitical risks of GenAI extend far beyond direct threats to democracy. As GenAI is leveraged to power “intelligent” products, made available for public use, adopted into routine business and personal activities, and used to refactor whole government and industry workflows, these disruptions create major openings for negative consequences as well as positive ones.
These consequences will be hard to identify for two reasons. First, GenAI is being integrated into already complex processes. When the outputs of such processes change, it can be hard to trace changes back to their root causes. Second, most processes—whether in industry, government, or our personal lives—are not sufficiently well understood to allow detection of changes, especially those that are just emerging.
Informed policy that leads to beneficial change is extremely challenging to develop without the ability to measure the material impacts of GenAI on governance, social services, criminal activity, health services, and myriad other aspects of social, political, and personal life. Measurement is necessary to identify which negative consequences warrant prioritization and to determine whether claimed threats are over-hyped or under-recognized. Without it, we may fail to target policies toward the issues that need the most attention. Worse, we risk making changes that yield worse outcomes than the status quo.