
Seventeen civil rights groups last week launched a public advocacy campaign to ban the use of facial recognition technology in New York City residential buildings and public venues, including stores and stadiums.

The groups are urging city council members to pass resolutions that would prohibit “any place or provider of public accommodation” and any landlord from using biometric recognition technology.

In their campaign letter, the groups said:

“Biometric recognition technology, including facial recognition technology (FRT), is biased, error-prone, and harmful to marginalized communities. It has no place in businesses and residences in New York City.”

The “Ban the Scan” initiative is led by the Surveillance Technology Oversight Project, Amnesty International, New York Civil Liberties Union (NYCLU), The Legal Aid Society, Surveillance Resistance Lab, Fight for the Future and the Center on Race, Inequality, and the Law at NYU School of Law.

As of today, 15 additional groups have joined as campaign signatories.

The campaign comes on the heels of New York City Mayor Eric Adams’ Oct. 16 announcement of a citywide plan to embrace “responsible” use of artificial intelligence (AI) tools.

The groups said the banning of biometric recognition — including facial recognition — is “an essential first step toward any ‘responsible’ AI plan in New York City.”

According to privacy experts, including Greg Glaser, a lawyer litigating cases for Children’s Health Defense’s (CHD) Electromagnetic Radiation & Wireless program, facial recognition technology carries at least four risks.

Glaser told The Defender that many people have a “natural aversion” to having their faces systematically scanned by AI software that analyzes the data in sometimes “undisclosed ways.”

Glaser and other experts offered these four warnings:

1. Facial recognition is ‘racially biased’ and ‘outright dangerous’

“At least in the early stages of this AI facial recognition technology, the risk of misidentification will be significant, especially because of the vast quantity of data that can now be analyzed,” Glaser said. “That is why privacy organizations are especially concerned with whether facial scans misidentify people, including based on factors such as race.”

Daniel Schwarz, NYCLU’s senior privacy and technology strategist, said, “Facial recognition tech is risky, unreliable, and racially biased — and in the hands of law enforcement, it’s outright dangerous.”

“We’ve seen the harms of facial recognition technology amplifying racist policing and leading to wrongful arrests,” Schwarz said.

Kashmir Hill, a privacy and technology reporter for The New York Times who lives in New York, recently told NPR that “of the handful of people we know to have been wrongfully arrested for the crime of looking like someone else, in every case, the person has been Black.”

Schwarz added:

“Whether in their homes, patronizing local businesses, or accessing health care, New Yorkers don’t want to live in a world where biometric surveillance constantly monitors their every move — we’re proud to fight alongside our coalition partners so New York City bans the scan once and for all.”

2. Facial recognition allows companies to turn people away based on their place of employment 

In a May 10 article in the Times, Hill recounted her experience of taking Tia Garcia, a personal injury lawyer, to Madison Square Garden to see if Garcia “could get into the building.”

Madison Square Garden bills itself as “the world’s most famous arena.” Its owner, James Dolan — who also owns Radio City Music Hall and the Beacon Theatre — decided he “wanted to use the technology to keep out his enemies, namely lawyers who worked for firms that had sued him,” Hill explained.

Madison Square Garden scraped the lawyers’ photos and created a face ban “so that when they tried to go to a Knicks game or Rangers game or a Mariah Carey concert, they get turned away at the door. They’re told, ‘Sorry, you’re not welcome here until you drop your suit against us.’”

Garcia is “one of thousands of lawyers” on the ban list, Hill said. “While we were in line, facial recognition technology identified her.”

Garcia was denied access by management and forced to leave — even though she was not directly involved in her firm’s lawsuit against the arena.

“It’s a really incredible deployment of this technology and shows how chilling the uses could be,” Hill said, “that you might be, you know, turned away from a company because of where you work.”

Hill added, “I could imagine a future in which a company turns you away because you wrote a bad Yelp review or they don’t like your political leanings.”

3. Your biometric data may be sold or hacked

Hill’s new book, “Your Face Belongs to Us: A Secretive Startup’s Quest to End Privacy as We Know It,” is about Clearview AI, a startup that sells facial recognition software and access to its extensive biometric database to thousands of U.S. police departments.

Clearview AI also holds contracts with the U.S. Department of Homeland Security (DHS) and the FBI and has received funding from both the U.S. Army and Air Force, Hill said.

“Clearview AI has agreed not to sell its database to companies and to only sell it to police agencies. But there are other facial recognition technologies out there,” Hill said, referring to how Madison Square Garden was able to work with another facial recognition tech provider to implement its face ban.

Glaser said many concerned civil rights groups want to know if “outside audits will be conducted in New York to ensure personally identifiable information is destroyed rather than retained indefinitely.”

“This is because,” he said, “once the data is captured in an undisclosed database, it can be utilized for other government and commercial purposes, and it can be hacked.”

Glaser pointed out that a DHS database containing travelers’ biometric data was hacked. “There is a black market for profile information, and those markets will grow as AI grows because the number of potential illicit uses expands.”

Hill said that in Europe, citizens can request to have their biometric information (i.e., photos) removed from databases like the one used by Clearview AI, but most U.S. states do not have such digital privacy laws.

Only four states — California, Colorado, Virginia and Connecticut — provide such digital privacy protections, Hill said. Illinois also has a separate law that allows citizens to have their facial information removed from digital registries.

4. Facial recognition tech is a step toward a policing ‘dystopia’

According to the description of Hill’s new book, facial recognition technology has been “quietly growing more powerful for decades. … Unregulated, it could expand the reach of policing, as it has in China and Russia, to a terrifying, dystopian level.”

Glaser agreed.

He called the Transportation Security Administration’s plan to pilot facial recognition technology in 25 airports across the U.S. “dystopian.”

Albert Fox Cahn, founder of the Surveillance Technology Oversight Project, told The Washington Post something similar in December 2022:

“What we often see with these biometric programs is they are only optional in the introductory phases — and over time we see them becoming standardized and nationalized and eventually compulsory …

“There is no place more coercive to ask people for their consent than an airport.”

Glaser pointed out that adopting such technologies puts communities such as New York, and governments, on a “slippery slope” toward embracing a transhumanist philosophy that values a person’s digital ID as much as, if not more than, their physical presence.

In such a world, he said:

“Eventually your biometric ID becomes so advanced and integrated into your accounts that society will begin to recognize your biometric ID as the superior you, meaning that you are less real than the computer version of you.”