More Cities Are Banning Facial Recognition Technology

Facial recognition technology has advanced rapidly in the past decade, but so has the backlash against it. A growing coalition of cities, states, and countries is restricting or outright banning the use of facial recognition by law enforcement and government agencies, arguing that the technology poses unacceptable risks to privacy, civil liberties, and racial equity.
The Ban Movement Expands
San Francisco became the first major U.S. city to ban government use of facial recognition in 2019. Since then, more than 20 American cities have enacted similar restrictions, including Boston, Minneapolis, Portland, and New Orleans. In 2025, the movement gained significant momentum when both Chicago and Philadelphia passed comprehensive bans on real-time facial recognition surveillance.
The European Union's AI Act, which took partial effect in 2025, includes a broad prohibition on real-time biometric identification in public spaces, with narrow exceptions for law enforcement investigating serious crimes. Several EU member states have gone further. Belgium and Luxembourg have banned all government use of facial recognition, and Germany is considering similar legislation.
Outside the West, the picture is more complex. Countries like China, Russia, and the United Arab Emirates continue to expand facial recognition surveillance. India, despite concerns from civil liberties groups, has deployed the technology widely in its railways and airports.
Why Cities Are Saying No
The arguments against facial recognition center on three main concerns. The first is accuracy, or rather the lack of it. Multiple studies have shown that facial recognition algorithms perform significantly worse on people with darker skin tones, women, and older adults. A landmark 2019 study by the National Institute of Standards and Technology found that many commercial algorithms produced false matches 10 to 100 times more often for Black and Asian faces than for white faces.
These error rates have real consequences. Robert Williams, a Black man in Detroit, was wrongfully arrested in 2020 after a facial recognition system matched his driver's license photo to surveillance footage of a shoplifter. Similar cases have been documented in New Jersey, Louisiana, and Georgia. In each case, the person wrongly identified was Black.
The second concern is privacy. Facial recognition enables mass surveillance at a scale that was previously impossible. A camera equipped with facial recognition can scan thousands of faces per minute, identifying individuals without their knowledge or consent. Civil liberties organizations argue that this capability fundamentally alters the relationship between citizens and the state.
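The scale problem described above has a statistical dimension worth spelling out: when a system screens enormous numbers of faces, even a tiny error rate produces a steady stream of false matches, and most flags end up pointing at innocent people. The following sketch illustrates this base-rate effect with invented numbers; the scan volume, error rate, and suspect count are assumptions for illustration, not measurements of any real system.

```python
# Hypothetical illustration of the base-rate problem in one-to-many
# face searches. All numbers are assumed for the sketch, not drawn
# from any real deployment.

faces_scanned_per_day = 100_000      # a busy camera network (assumed)
false_positive_rate = 0.001          # a "99.9% accurate" system (assumed)
actual_suspects_present = 10         # true matches in the crowd (assumed)

# Expected number of innocent people flagged each day.
false_alarms = faces_scanned_per_day * false_positive_rate
print(f"False alarms per day: {false_alarms:.0f}")  # → 100

# Even if every real suspect is caught, most flags are still wrong.
true_hits = actual_suspects_present
precision = true_hits / (true_hits + false_alarms)
print(f"Share of flags that are correct: {precision:.1%}")  # → 9.1%
```

Under these assumptions, fewer than one in ten alerts would point at an actual suspect, which is one reason critics argue that mandatory human review of matches is a floor, not a fix.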
The third concern involves the chilling effect on free expression and assembly. When people know their faces are being scanned at protests, public gatherings, or places of worship, they may choose not to attend. Research from Georgetown University's Center on Privacy and Technology found that facial recognition deployment near protest sites measurably reduced attendance at public demonstrations.
The Law Enforcement Perspective
Police departments and security agencies push back against bans, arguing that facial recognition is a valuable tool for solving crimes, finding missing persons, and preventing terrorist attacks. The technology has been used to identify suspects in cases ranging from child exploitation to mass shootings.
Some law enforcement leaders advocate for regulation rather than prohibition. They propose requirements such as minimum accuracy thresholds, mandatory human review before any action is taken based on a facial recognition match, and prohibitions on use in certain contexts like political protests.
The International Association of Chiefs of Police has called for a national framework that allows regulated use of the technology while addressing bias and accountability concerns. Without such a framework, they argue, the patchwork of local bans creates confusion and hampers investigations that cross jurisdictional boundaries.
The Private Sector Dimension
Government use is only part of the story. Private companies deploy facial recognition in retail stores, apartment buildings, concert venues, and workplaces. The legal landscape for private-sector use is even more fragmented than for government use.
Illinois's Biometric Information Privacy Act, passed in 2008, remains the strongest state law governing private facial recognition use. It requires informed consent before collecting biometric data and provides a private right of action, meaning individuals can sue for violations. The law has generated billions of dollars in settlements, including a $650 million settlement with Facebook in 2021.
Several states have introduced similar legislation, but the tech industry has lobbied aggressively against biometric privacy laws, arguing that they stifle innovation and create legal uncertainty.
The Path Forward
The debate over facial recognition is ultimately a debate about what kind of surveillance a democratic society is willing to accept. Proponents see a technology that can make communities safer. Opponents see a tool of mass surveillance that is disproportionately deployed against marginalized communities.
As the technology continues to improve and grow cheaper, the pressure to deploy it will only increase. Whether the ban movement can sustain its momentum, or whether regulated use becomes the norm, will depend on how effectively advocates make the case that some technologies are too dangerous to deploy without strict limits.


