Oona Lagercrantz
How do you reconcile 1,000 stakeholder views on a fast-moving technology, predicted to define the 21st century, in just two weeks? Europe’s AI Office, empowered to enforce the AI Act – the world’s first comprehensive law governing artificial intelligence systems – is struggling to come up with answers.
As deadlines loom, the new legislation – which aims to set a global standard for trustworthy AI – is generating major conflicts and complaints. At stake is the legitimacy of the AI Act and the EU’s aspiration to be the “global leader in safe AI.”
According to the AI Act, providers of general-purpose AI models, such as OpenAI’s GPT-4, must implement a range of risk-mitigation measures and ensure transparency and the use of high-quality data sets. The AI Office is drafting a Code of Practice (“the Code”) that outlines practical guidelines for compliance. Since obligations for general-purpose AI providers come into force in August 2025, a finalized Code of Practice is due in April. It is an all-too-short timeline, stakeholders say.
The AI Office is consulting approximately 1,000 stakeholders to write the Code, including businesses, national authorities, academic researchers, and civil society. It published a first draft in mid-November 2024, giving stakeholders a mere 10 days to provide feedback. Hundreds of written responses poured in. A second draft – acknowledging the “short timeframe” – was presented on December 19, forcing stakeholders to send feedback over the holiday period.