21 July 2023

A quick guide to the most important AI law you’ve never heard of

Melissa Heikkilä

It’s a Wild West out there for artificial intelligence. AI applications are increasingly used to make important decisions about humans’ lives with little to no oversight or accountability. This can have devastating consequences: wrongful arrests, incorrect grades for students, and even financial ruin. Women, marginalized groups, and people of color often bear the brunt of AI’s propensity for error and overreach.

The European Union thinks it has a solution: the mother of all AI laws, called the AI Act. It is the first law that aims to curb these harms by regulating the whole sector. If the EU succeeds, it could set a new global standard for AI oversight.

But the world of EU legislation can be complicated and opaque. Here’s a quick guide to everything you need to know about the EU’s AI Act, which is currently being amended by members of the European Parliament and EU countries.
What’s the big deal?

The AI Act is hugely ambitious. It would require extra checks for “high risk” uses of AI that have the most potential to harm people. This could include systems used for grading exams, recruiting employees, or helping judges make decisions about law and justice. The first draft of the bill also includes bans on uses of AI deemed “unacceptable,” such as scoring people on the basis of their perceived trustworthiness.

The bill would also restrict law enforcement agencies’ use of facial recognition in public places. There is a loud group of power players, including members of the European Parliament and countries such as Germany, that want a full ban or moratorium on its use in public by both law enforcement and private companies, arguing that the technology enables mass surveillance.

If the EU manages to pull this off, it would be one of the strongest curbs yet on the technology. Some US states and cities, such as San Francisco and Virginia, have introduced restrictions on facial recognition, but the EU’s ban would apply to 27 countries and a population of over 447 million people.

How will it affect citizens?

In theory, it should protect humans from the worst side effects of AI by ensuring that applications face at least some level of scrutiny and accountability.

People can trust that they will be protected from the most harmful forms of AI, says Brando Benifei, an Italian member of the European Parliament, who is a key member of the team amending the bill.

The bill requires people to be notified when they encounter deepfakes, biometric recognition systems, or AI applications that claim to be able to read their emotions. Lawmakers are also debating whether the law should set up a mechanism for people to complain and seek redress when they have been harmed by an AI system.

The European Parliament, one of the EU institutions working on amending the bill, is also pushing for a ban on predictive policing systems. Such systems use AI to analyze large data sets, with the aim of preemptively deploying police to crime-prone areas or trying to predict a person’s potential criminality. These systems are highly controversial, and critics say they are often racist and lack transparency.

What about outside the EU?

The GDPR, the EU’s data protection regulation, is the bloc’s most famous tech export, and it has been copied everywhere from California to India.

The approach to AI the EU has taken, which targets the riskiest AI, is one that most developed countries agree on. If Europeans can create a coherent way to regulate the technology, it could work as a template for other countries hoping to do so too.

“US companies, in their compliance with the EU AI Act, will also end up raising their standards for American consumers with regard to transparency and accountability,” says Marc Rotenberg, who heads the Center for AI and Digital Policy, a nonprofit that tracks AI policy.

The bill is also being watched closely by the Biden administration. The US is home to some of the world’s biggest AI labs, such as those at Google AI, Meta, and OpenAI, and leads multiple different global rankings in AI research, so the White House wants to know how any regulation might apply to these companies. For now, influential US government figures such as National Security Advisor Jake Sullivan, Secretary of Commerce Gina Raimondo, and Lynne Parker, who is leading the White House’s AI effort, have welcomed Europe’s effort to regulate AI.

“This is a sharp contrast to how the US viewed the development of GDPR, which at the time people in the US said would end the internet, eclipse the sun, and end life on the planet as we know it,” says Rotenberg.

Despite some inevitable caution, the US has good reasons to welcome the legislation. It’s extremely anxious about China’s growing influence in tech. For America, the official stance is that retaining Western dominance of tech is a matter of whether “democratic values” prevail. It wants to keep the EU, a “like-minded ally,” close.
What are the biggest challenges?

Some of the bill’s requirements are technically impossible to comply with at present. The first draft of the bill requires that data sets be free of errors and that humans be able to “fully understand” how AI systems work. The data sets that are used to train AI systems are vast, and having a human check that they are completely error free would require thousands of hours of work, if verifying such a thing were even possible. And today’s neural networks are so complex even their creators don’t fully understand how they arrive at their conclusions.

Tech companies are also deeply uncomfortable about requirements to give external auditors or regulators access to their source code and algorithms in order to enforce the law.

“The current drafting is creating a lot of discomfort because people feel that they actually can’t comply with the regulations as currently drafted,” says Miriam Vogel, who is the president and CEO of EqualAI, a nonprofit working on reducing unconscious bias in AI systems. She also chairs the newly founded National AI Advisory Committee, which advises the White House on AI policy.

There’s also a giant fight brewing over whether the AI Act should ban the use of facial recognition outright. It’s contentious because EU countries hate it when Brussels tries to dictate how they should handle matters of national security or law enforcement. Several countries, such as France, want to make exceptions for using facial recognition to protect national security. In contrast, the new government of Germany, another big European country and an influential voice in EU decision making, has said it supports a full ban on the use of facial recognition in public places.

Another big fight will be over what kinds of AI get classified as “high risk.” The AI Act has a list that ranges from lie detection tests to systems used to allocate welfare payments. There are two opposing political camps—one fearing that the vast scope of the regulation will slow down innovation, and the other arguing that the bill as written will not do enough to protect people from serious harm.
Won’t this stifle innovation?

A common criticism from Silicon Valley lobbyists is that the regulation will create extra red tape for AI companies. The EU counters that the AI Act will only apply to the riskiest set of AI uses, which the European Commission, the EU’s executive arm, estimates would cover just 5 to 15% of all AI applications.

Tech companies “should be reassured that we want to give them a stable, clear, legally sound set of rules so that they can develop most of AI with very limited regulation,” says Benifei.

Organizations that don’t comply face fines of up to €30 million ($31 million) or, for companies, up to 6% of total worldwide annual revenue. And experience shows that Europe is not afraid to dish out fines to tech companies. Amazon was fined €746 million ($775 million) in 2021 for breaching the GDPR, and Google was fined €4.3 billion ($4.5 billion) in 2018 for breaching the bloc’s antitrust laws.
When will it come into effect?

It will be at least another year before a final text is set in stone, and a couple more years before businesses will have to comply. There is a chance that hammering out the details of such a comprehensive bill with so many contentious elements could drag on for much longer. The GDPR took more than four years to negotiate, and it was six years before it entered into force. In the world of EU lawmaking, anything is possible.
