By Sam duPont
As protests grow, the police leverage their technological advantage. Widespread surveillance cameras capture the crowds, and facial recognition software picks out individuals, cross-referencing databases to generate dossiers on protestors. Demonstrators counter by cutting down lamp posts that might conceal cameras. Across the country, 3,000 miles away, biometric systems monitor everyday people as they go about their lives. These tools enable police to control a population and detain suspects before they even commit a crime.
We are, of course, in China, where law enforcement authorities in Hong Kong have deployed high-tech surveillance systems to quash protests, and the Chinese government uses facial recognition and other technologies in Xinjiang to control and repress the Uighur minority population. The United States is not China when it comes to the surveillance of its people using biometric technologies like facial recognition—but that is not the result of any law or policy on the books today.
At demonstrations across the United States to protest the police killing of George Floyd, American law enforcement authorities have made liberal use of the tools at their disposal and taken full advantage of the regulatory lacuna surrounding high-tech surveillance. And while police use of facial recognition to monitor recent protests has rung alarm bells for many, this practice is neither new nor rare. Even as facial recognition software has improved dramatically in recent years, legal controls on its use have utterly failed to keep up.
How is facial recognition being deployed by law enforcement, and why should we worry?
Reporting from BuzzFeed revealed that the Minneapolis Police Department is one among hundreds of law enforcement agencies—along with thousands of private entities—using the services of Clearview AI. Clearview built its facial recognition software by scraping more than 3 billion images from Facebook and other websites, and it can compare a face captured on a security camera against that database to reveal possible matches. This plug-and-play facial identification system raises obvious privacy concerns: a right “to be let alone” ought to include a right not to have our faceprints scraped from the Internet without our consent so that we can be identified as we move through the world.
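To make the mechanics concrete, here is a minimal sketch of the one-to-many matching step at the heart of systems like Clearview's, as they are generally understood to work. Everything in it is illustrative: a real system uses a neural network trained to map face images to vectors, while the `embed_face` function below is a stand-in that returns deterministic random vectors.

```python
# Minimal sketch of one-to-many face identification. The embedding model
# is a stand-in (deterministic random vectors); a real system would use a
# neural network trained to map face images to vectors.
import numpy as np

EMBED_DIM = 512  # a common embedding size for face recognition models

def embed_face(image_id: str) -> np.ndarray:
    """Stand-in for a model that maps a face image to a unit vector."""
    seed = abs(hash(image_id)) % (2**32)
    v = np.random.default_rng(seed).normal(size=EMBED_DIM)
    return v / np.linalg.norm(v)

# The "scraped" gallery: one reference embedding per known identity.
gallery_names = ["alice", "bob", "carol"]
gallery = np.stack([embed_face(name) for name in gallery_names])

def identify(probe: np.ndarray, threshold: float = 0.35) -> list[tuple[str, float]]:
    """Return gallery identities whose similarity to the probe clears a threshold."""
    scores = gallery @ probe  # cosine similarity, since all vectors are unit-length
    return sorted(
        [(gallery_names[i], float(s)) for i, s in enumerate(scores) if s >= threshold],
        key=lambda pair: pair[1],
        reverse=True,
    )

# A frame from a security camera, run through the same embedding model.
print(identify(embed_face("alice")))  # alice matches herself with similarity ~1.0
```

The key point is architectural: once faces are reduced to vectors, identifying a stranger is just a nearest-neighbor search over however many faces have been scraped, which is why the size of the database matters as much as the algorithm.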
A growing chorus of advocates has raised concerns about abuse of facial recognition, and corporations have slowly started to take notice. Last week, IBM announced that it would no longer offer facial recognition software, while Amazon and Microsoft both suspended the supply of facial recognition services to law enforcement authorities. All three companies cited ethical concerns around the use of this technology and argued for government action to regulate its use. These moves are positive, but we cannot rely on corporate forbearance to prevent these tools from being abused. These companies backed away from facial recognition only under the glare of recent protests, and other companies, including Clearview, have lately doubled down on supplying services to their law enforcement clients.
Before Amazon reversed course, the company’s Rekognition technology allowed law enforcement authorities to conduct real-time facial identification on video feeds from cameras that might be distributed throughout a city. And other surveillance tools remain widespread. Ring Inc., a division of Amazon that markets home security systems such as video doorbells, continues to expand its partnerships with law enforcement. Via its Neighbors app, Ring provides police the ability to request video recordings directly from Ring customers. While Amazon has not yet integrated facial recognition tools into Ring systems, the company has not ruled it out.
Even without facial recognition built into products like Ring, law enforcement authorities have the tools at their disposal to conduct facial recognition analysis on any footage they receive. Federal agencies such as the FBI and Immigration and Customs Enforcement not only can leverage federal photo databases—such as passport photos, visa application photos and federal mug shots—but also can search state-owned databases of driver’s license photos. This practice raises complex privacy questions: Should an application for a passport or a driver’s license constitute consent to have one’s face scanned to allow law enforcement to conduct facial identification? Under what circumstances?
In addition to privacy challenges, facial recognition technologies risk entrenching and exacerbating racial and gender biases. Research by Joy Buolamwini and Timnit Gebru in 2018 analyzed commercially available facial analysis systems and found that they misclassified dark-skinned women at far higher rates than other groups. National Institute of Standards and Technology research arrived at similar findings. Follow-up research by Buolamwini and Deborah Raji in 2019 found that while commercial facial recognition technology had improved across the board, worrying racial and gender disparities remained.
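The shape of that kind of audit is simple to sketch. The snippet below disaggregates a hypothetical system's error counts by demographic group; the numbers are invented for illustration (the real figures are in the studies cited above), but they show how a reassuring aggregate accuracy can conceal a large disparity.

```python
# Toy disaggregated audit in the spirit of Buolamwini and Gebru's
# "Gender Shades" methodology. All counts are invented for illustration;
# see the cited studies for actual measurements.
from collections import namedtuple

GroupResult = namedtuple("GroupResult", "group total errors")

results = [
    GroupResult("lighter-skinned men", 1000, 10),
    GroupResult("lighter-skinned women", 1000, 70),
    GroupResult("darker-skinned men", 1000, 120),
    GroupResult("darker-skinned women", 1000, 340),
]

for r in results:
    print(f"{r.group:22s} error rate: {r.errors / r.total:6.1%}")

# A single aggregate figure hides the disparity entirely:
overall = sum(r.errors for r in results) / sum(r.total for r in results)
print(f"{'overall':22s} error rate: {overall:6.1%}")
```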
In the high-stakes world of law enforcement, biased technology will lead to biased—and potentially deadly—outcomes. But built-in biases remain concerning even when the technology is deployed in more benign contexts. Take the example of expedited airport screening: if the Transportation Security Administration were to deploy a face scanner that performed worse on darker skin, lighter-skinned travelers would likely enjoy faster screening while darker-skinned travelers faced longer waits as human officials reviewed the automated decisions.
Even with flawless technology, careless deployment of facial recognition could lead to unfair and discriminatory outcomes if systems fail to account for systemic bias. Given the over-policing of black and brown communities, existing facial databases commonly used by police—such as mugshot photos—likely include a disproportionate number of nonwhite individuals. Even if such tools can accurately identify a face captured on a security camera, we should not be comfortable with policing systems that are more likely to identify black suspects than white suspects.
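A toy calculation makes the point, under assumed numbers: suppose a perfectly accurate matcher, a public split evenly between two groups, and a mugshot database that, because of over-policing, enrolls 40 percent of one group but only 10 percent of the other. The coverage figures below are invented purely to illustrate the mechanism.

```python
# Toy model: a matcher with zero technical error still identifies members
# of an over-represented group far more often, purely because more of that
# group is enrolled in the database. All figures are invented.
population_share = {"group_a": 0.5, "group_b": 0.5}  # shares of the public
db_coverage = {"group_a": 0.10, "group_b": 0.40}     # share of each group in the mugshot database

captured_faces = 10_000  # faces caught on camera, drawn evenly from the public
for group, share in population_share.items():
    captured = captured_faces * share
    identified = captured * db_coverage[group]  # perfect matcher: a hit iff the person is enrolled
    print(f"{group}: {identified:.0f} of {captured:.0f} captured faces identified")
```

Under these assumed numbers, members of the over-represented group are identified four times as often despite a flawless algorithm, which is exactly the concern raised above.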
More broadly, the combination of rapidly advancing facial recognition software and increasingly ubiquitous high-definition cameras in public places puts powerful tools for mass surveillance within arm's reach of law enforcement. As we have seen in China, these tools can be used to carry out ongoing, real-time tracking of anyone—or of entire populations—at all times. Such mass surveillance would undermine core democratic tenets: diminishing our anonymity in public, chilling freedom of speech and heightening the risks of protesting publicly. Left unchecked, these capabilities amount to a powerful, turnkey mechanism for authoritarian governance. At present, we have nothing but norms of governance, corporate practice and the goodwill of law enforcement to prevent these tools from being abused in this way.
How is facial recognition technology regulated in the U.S.?
There is no federal law governing the use of facial recognition technology. Sectoral U.S. privacy rules (such as HIPAA in the health care sector or Gramm-Leach-Bliley in the financial services sector) may constrain certain applications of facial recognition in certain scenarios but likely have no bearing on law enforcement use of the technology. At the state and local levels, a smattering of measures protect a small fraction of Americans from certain abuses of facial recognition. Eight cities in California and Massachusetts have banned government use of facial recognition altogether, while Portland, Oregon, is considering going further by banning both public- and private-sector use of the technology. Three states have banned the deployment of facial recognition in police body cameras (even as at least one company is marketing its police body cameras on their ability to conduct live facial recognition).
The most comprehensive law disciplining governmental use of facial recognition technology was passed in Washington state earlier this year. While criticized for not going far enough and for catering to the interests of local champion Microsoft, the law puts some meaningful restrictions in place. It requires any government agency seeking to use facial recognition to announce its intent to do so, and to publish an “accountability report” detailing how the technology would work, how it would be used and how the agency would prevent potential harms. While such transparency measures may seem weak, they help pry open the black box in which most police departments currently deploy this technology.
Washington state’s law requires human review of any decision made using facial recognition technology that would have legal effects or “similarly significant effects” on an individual. This could include decisions that affect an individual’s housing, education, employment or civil rights. The law also requires that agencies test the technology in “operational conditions” and provide an application programming interface (API) to facilitate third-party testing of the technology. Perhaps most importantly, the law disciplines the use of facial recognition technology for real-time surveillance. Authorities may conduct such live scanning only after obtaining a warrant, when trying to locate a missing person, or under “exigent circumstances,” a well-established concept under Washington state law. In addition, authorities cannot use facial recognition technology to record any exercise of First Amendment rights—such as at a protest or during religious observance.
While no other states have laws covering government use of facial recognition technology, several states have laws that discipline private use of such technology. Illinois boasts the strongest such law—the Biometric Information Privacy Act (BIPA). Passed in 2008, BIPA covers a range of biometric identifiers, including facial scans, iris scans, fingerprints, and voiceprints, and requires that private entities obtain informed consent before collecting biometric data and that they limit the sharing and use of such data. Texas and Washington state have similar laws, while the California Consumer Privacy Act (CCPA) treats biometric data as personal data and provides rights to citizens against certain private uses of that data. BIPA and CCPA also provide individuals the ability to seek damages for violations of the law. BIPA enabled a $550 million class-action settlement with Facebook earlier this year, arising from the company’s use of facial recognition in its photo-tagging software. Last month, the American Civil Liberties Union initiated a case under BIPA against Clearview AI for deploying its facial recognition technology on the faceprints of Illinoisans.
What is Europe’s approach to facial recognition technology?
As in some U.S. states, European privacy laws prohibit certain commercial uses of facial recognition technology. The General Data Protection Regulation (GDPR) categorizes biometric data as sensitive personal data and restricts how private actors may collect, share and use that data. This makes it likely illegal for a company such as Ring to install facial recognition technology on smart doorbells—if such devices captured the images of passers-by, it would constitute collection of their biometric data without their consent. This week, EU officials speculated that Clearview AI’s business model is likely also illegal for the same reason, although Swedish law enforcement authorities have reportedly used the software.
The European Union has separate laws governing personal data collection and processing by EU institutions, on the one hand, and the law enforcement authorities of EU member states, on the other. These rules do not prevent authorities from collecting and processing biometric data or operating facial recognition systems. Indeed, throughout Europe, law enforcement authorities have been experimenting with real-time surveillance using facial recognition. In Wales last year (pre-Brexit), a divisional court ruled that the Cardiff police's use of facial recognition conformed with national legislation, including data protection law and the United Kingdom's Human Rights Act. In London, the Metropolitan Police have recently enabled facial recognition on parts of the city's widespread closed-circuit TV network.
European law does, however, provide for a meaningful level of transparency around law enforcement's use of facial recognition. The case in Cardiff brought to light a remarkable level of detail about police use of the technology. Elsewhere in Europe, facial recognition has been tested as a tool for mass surveillance, but in Hamburg, Berlin, and Nice, police ceased such testing after determining that the German and French legal systems did not provide a legal basis for use of the technology.
Late last year, a leaked draft of a European Commission white paper revealed that EU authorities were considering a five-year ban on the deployment of facial recognition technology. However, the final version of the white paper, released in February, omitted that restriction. It instead included only a brief discussion of the risks surrounding biometric technologies and committed to “launch a broad European debate on the specific circumstances, if any, which might justify such use, and on common safeguards.”
What are the policy options, short of an all-out ban?
Europe’s retreat from a ban on facial recognition, and the difficulty of addressing these issues in the United States, arise from the same simple fact: Facial recognition technology can be a powerful force multiplier for law enforcement and can be applied to achieve laudable goals, such as finding missing or trafficked persons or apprehending violent criminals. But without checks on its use, this technology runs the risk of undermining core democratic values and diminishing human freedom.
Congress has begun to pay attention to these technologies, with hearings and bills addressing specific issues related to facial recognition introduced by members on both sides of the aisle. The broad police reform package passed by the House last week includes provisions that would limit the use of facial recognition in police body cameras. But more comprehensive legislation is necessary. The state-level and European measures described above can be a useful reference point for legislators and policymakers, as can the good work done on this topic by researchers and advocates such as the team at Georgetown Law’s Center on Privacy & Technology.
Here are some good starting points:
Transparency requirements can provide a first line of legal defense. Having insight into how and where facial recognition technology is deployed—particularly by law enforcement—would provide citizens and advocates a much-needed window into the use and abuse of the technology. The requirements included in Washington state’s law are a good start.
In the United States, a national privacy law could provide important limits on the abuse of facial recognition, especially by private actors. Such a law could limit the collection, sharing, and retention of biometric data and provide criminal penalties for its misuse.
Improvements in the technology may diminish some concerns about bias ingrained in facial recognition systems, but legal safeguards are necessary to eliminate those risks. Requiring, as Washington state does, that facial recognition tools provide public APIs to enable testing can help identify biases where they exist.
The creation of standards and certification mechanisms could help authorities identify effective and unbiased services.
As in Washington state’s law, where automated facial identification could carry consequences for the lives of affected individuals, human review is warranted even for the most accurate technology.
Law enforcement authorities should be constrained in their ability to conduct facial recognition analysis. Photo databases should exclude mugshots of people not found guilty of a crime and should not include driver’s license photos without, at a minimum, the awareness of licensees that their photos may be used for this purpose.
Certain applications of facial recognition should be banned altogether, such as use of the technology to surveil people based on racial profiles or the use of facial recognition in police body cameras.
Preventing the abuse of facial recognition technologies in ways that could create mass surveillance requires measures that limit the ongoing, real-time use of such tools. Such measures, some of which were included in Washington state’s facial recognition law, should prohibit ongoing surveillance of any individual without a warrant.
Where law enforcement uses real-time facial recognition on a temporary basis to provide security around sensitive events, citizens should know when and where such surveillance is in effect. Such monitoring should be prohibited in First Amendment scenarios such as protests, religious observances and other protected gatherings.
A new report from the Algorithmic Justice League recommends the creation of a new federal agency, akin to the Food and Drug Administration, to take responsibility for evaluating the potential harms of various applications of facial recognition technology and applying the appropriate safeguards.
These are just some ideas for how to prevent the worst potential outcomes from the increasingly widespread use of facial recognition technology. As debate continues in the United States about how to regulate facial recognition, and as the EU carries forward its dialogue, there is opportunity for a conversation among countries that share democratic values about how to approach the many policy challenges associated with this technology. With scant rules in force, and the technology becoming more powerful every day, it is past time for policymakers to act.