Facial recognition technology riddled with racial bias; cities are fighting back
A mobile police facial recognition facility outside a shopping centre in London, Feb. 11, 2020. | Kelvin Chan / AP

In the wake of the George Floyd protests, many cities and technology conglomerates are barring police agencies from using facial recognition technology, citing racial bias as a central concern. While many believe these preventative measures need to go a step further, corporations such as IBM, Microsoft, and Amazon have restricted, or at least halted, the distribution of these problematic technologies to police departments as of the second week of June.

Under that pressure, an Amazon representative said the company advocated that governments “put in place stronger regulations to govern the ethical use of facial recognition technology.” The tech giant said it hoped a one-year moratorium on the technology’s use by police “might give Congress enough time to implement appropriate rules.”

Before the COVID-19 pandemic and the international protests against Floyd’s killing, though, facial recognition technology was already in hot water over concerns related to race. When Apple released its FaceID software embedded in the iPhone X in 2017, the British tabloid the Mirror reported that the algorithm could not distinguish between the faces of some Chinese users.

Last year, 18-year-old Ousmane Bah was wrongly arrested for a string of Apple store robberies in Massachusetts, New York, New Jersey, and Delaware after the company’s facial recognition technology mistakenly identified him as the perpetrator. The Washington Post reported that an NYPD detective eventually determined the real culprit bore no resemblance to Bah whatsoever.

Now, as the world masks up to reduce the transmission of the coronavirus, facial recognition algorithms developed prior to the pandemic are running into further accuracy problems. The National Institute of Standards and Technology reported that even top-tier algorithms produced error rates between 5% and 50% when attempting to match masked faces. NIST quoted computer scientist Mei Ngan on her plan for dealing with these shortcomings:

“With the arrival of the pandemic, we need to understand how face recognition technology deals with masked faces. We have begun by focusing on how an algorithm developed before the pandemic might be affected by subjects wearing face masks. Later this summer, we plan to test the accuracy of algorithms that were intentionally developed with masked faces in mind.”

Citing the racial bias documented by NIST, cities around the United States have begun outlawing the use of facial recognition technology, or FRT, by their government agencies. In May 2019, San Francisco became the first, and at the time the largest, American city to enact such a ban, in an 8-to-1 vote aimed at combatting police bias and widespread government surveillance. In the following months, Oakland city councillors unanimously voted to prohibit their police force from using the software.

On the East Coast, five cities in Massachusetts followed California’s example: Boston became the fifth city in the state—after Somerville, Brookline, Northampton, and Cambridge—and the second largest U.S. city to prohibit FRT, citing the federal NIST study documenting the algorithms’ higher rates of misidentifying people of color. City councillors also banned outsourcing facial recognition work to third-party corporations.

After repeated attempts to stall the proposed ordinance, city councillors in Portland, Maine, voted in yet another unanimous decision to ban the use of FRT. The proposal, formulated by Black Lives Matter activists, was commended by the American Civil Liberties Union in a tweet on Aug. 3, which said, “Our movement to stop this invasive technology is only growing.”

Internationally, the Court of Appeal in the United Kingdom ruled on Aug. 11 that FRT possesses “fundamental deficiencies” in the way it has been deployed—primarily by South Wales Police, the force that has spearheaded FRT testing in the U.K.

However, South Wales Police deputy chief constable Jeremy Vaughn believes this will not be the death of facial recognition tech in the U.K.: “There is nothing in the Court of Appeal judgment that fundamentally undermines the use of facial recognition to protect the public.”

Londoners fear the widespread implementation of the software across the city’s network of over 627,000 CCTV cameras: a rate of 67.47 cameras per 1,000 citizens. This abundance of surveillance makes London the most surveilled city in the world outside of China.

Apart from the racial discrimination and accuracy flaws that the technology exhibits, the extensive databases that house user data are vulnerable to breaches. In February, controversial artificial intelligence company Clearview AI announced that an outside hacker “gained unauthorized access” to its database, comprising three billion photographs scraped from various social media platforms. Clearview’s attorney, Tor Ekeland, gave a statement to The Daily Beast regarding the extent of the data breach that basically told people to get used to it: “Unfortunately, data breaches are part of life in the 21st century. Our servers were never accessed. We patched the flaw and continue to work to strengthen our security.”

Clearview AI has received a laundry list of cease-and-desist letters from tech companies such as Google, Facebook, and YouTube, the latter of which publicly reprimanded its data gathering tactics in a statement from spokesman Alex Joseph. “YouTube’s Terms of Service explicitly forbid collecting data that can be used to identify a person,” Joseph said. “Clearview has publicly admitted to doing exactly that, and in response we sent them a cease and desist letter.”

While the fight against this invasion of privacy rages on, the FBI continues to populate its database with drivers’ license photos from DMV offices in Texas, Florida, and Illinois. Maryland is leading the country with the most intrusive facial recognition system: Undocumented immigrants there are allowed to obtain driver’s licenses, but their license photos can then be handed over directly to the FBI or ICE for deportation proceedings in a classic bait-and-switch. In 2016, activists at the Freddie Gray protests were identified via FRT and arrested on unrelated charges—a chilling sign of the technology being turned against those who challenge the political status quo.

This isn’t a story that’s going to disappear anytime soon.


Cameron Hughey

Cameron Hughey is a writer and college student from the southern New Hampshire area. He is a member of the Boston chapter of Democratic Socialists of America and writes on a wide array of topics, including technology, social reform, healthcare, global warming, and music.