Kevin Frazier
There’s a tussle over the future of AI regulation.
One camp insists that “x-risk,” or existential risk, warrants the preponderance of regulatory focus. Another camp demands that privacy be the primary concern. A third cohort wants climate impacts to rise to the top of the agenda.
With U.S. politicians and agency officials unwilling to take a side, the National Institute of Standards and Technology (NIST) recently issued a "profile" on the risks generated by the research, development, deployment, and use of generative artificial intelligence (AI). Rather than concentrate on a small set of risks, NIST seemingly appeased each of the warring camps.
The NIST profile covered 12 risks, from chemical and biological threats to data privacy and harmful bias. Shockingly absent from the profile: "j-risk," or job risk.
J-risk is not merely a future concern. Americans previously employed in meaningful work have already been displaced by AI. Few signs suggest this trend will abate; most evidence suggests it will accelerate.
AI will replace American workers; what's less certain is when, how, and to what extent. Policymakers can avert j-risk's worst-case trajectories only by developing robust and novel social security programs aimed at displaced workers.
J-risks have received insufficient attention in AI policy debates. Labor markets will continue to experience unexpected and significant disturbances as AI advances. Rather than place excess hope in optimistic economic forecasts coming true or assume a reactive regulatory posture, lawmakers should pursue anticipatory governance strategies. Two courses of action can further this approach: first, gathering more information on AI's effects on labor; and second, creating more responsive economic security programs. These efforts would not only reduce the uncertainty surrounding j-risks but also stem the resulting long-term harms.