A friend recently tagged me in a Facebook post with a comment that I looked like someone in the photo. That someone was Ira Glass, host of ‘This American Life.’ I’m a big fan of Ira, but never thought we looked alike, at least not until seeing this photo. There was indeed a resemblance and it made me smile.
Some find it flattering when someone says they look like a famous individual. But what if that someone isn’t a person? What if it’s an artificial intelligence (AI) algorithm that doesn’t just think you resemble someone, but that you ARE someone else? This is exactly what happened in early 2020 to our very own Fred Cook, Director of the USC Center for Public Relations.
A few weeks after the COVID-19 tidal wave washed over the United States, Facebook suggested Fred tag himself in a photo, only the photo was not of him but of Andrew Cuomo, the Governor of New York. Fred and Governor Cuomo do share similar facial features, but Facebook’s algorithm believed there was a close enough match to suggest Governor Cuomo was Fred. This error was relatively benign, and no doubt made for some interesting conversations, but it also highlighted a flaw in one of the fastest-growing applications of AI: facial recognition.
AI-powered facial recognition has exploded over the past few years across myriad industries, government institutions and applications, an explosion that has been met with excitement, fear and outright rejection.
Proponents of facial recognition laud the technology for enabling convenience and security. Google and Facebook use facial recognition to help us easily tag friends and family in photos so we can organize our memories. Companies like Apple and Google offer facial recognition as the primary way for customers to unlock their mobile phones.
Facial recognition promises to make our streets safer and to make travel a more seamless and secure experience. Airlines like JetBlue employ it at check-in – no physical ID, no problem. The U.S. Department of Homeland Security predicts facial recognition will be ubiquitous at airport security checkpoints with 97% of travelers being screened this way by 2023. As a means of contactless verification, facial recognition will replace fingerprint recognition and other forms of physical ID in our post-pandemic, socially distanced future.
The lion’s share of facial recognition coverage in the media this past year was critical of the technology. Threats to privacy and human rights, fears of unlawful police and government surveillance and inherent racial bias topped the list of concerns.
January 2020 marked the first reported wrongful arrest stemming from facial recognition. The Detroit Police Department wrongfully arrested a Black man named Robert Williams based solely on a false positive, with no human oversight. The New York Police Department (NYPD) recently defended its use of the technology in identifying a Black Lives Matter activist accused of assaulting an officer. At a time when law enforcement agencies were being accused of racial profiling and using excessive force against Black people, criticism was harsh. In that case, NYPD investigators mitigated the risk of wrongful arrest by applying their own judgment rather than relying on the technology alone.
Facial recognition has been accused of racial bias by both the American Civil Liberties Union and the National Institute of Standards and Technology (NIST), citing high error rates when identifying Black faces. A NIST study from December 2019 found facial recognition systems to be 10 to 100 times more likely to generate false positives for Asian and Black faces than for white faces.
It’s unclear how many law enforcement agencies use facial recognition, but one company, Clearview AI, claims to have contracts with 600 U.S. law enforcement agencies. In the wake of mass protests resulting from the killing of George Floyd, claims of racism in law enforcement, coupled with irresponsible use of facial recognition, have spurred calls for police reform and laws governing the technology. Amazon and Microsoft have already responded by publicly announcing they will stop supplying facial recognition technologies to law enforcement and IBM is getting out of the business altogether.
Despite the downsides, facial recognition is the future. Like other game-changing technologies that came before it, it’s here to stay. It holds much promise, but a few things need to happen for it to become commonplace and accepted.
- Lawmakers must answer calls for national laws governing all aspects of facial recognition development and usage to ensure racial and other biases are reduced, accuracy is increased and privacy is preserved, all with complete transparency.
- Facial recognition must be treated as a tool, not a complete solution. Collaboration between human decision-makers and the AI technology will be the key to safe and effective use.
- Even with stricter regulations and transparency, there must be trust in the institutions that deploy the technology.
Facial recognition is a powerful tool with the potential to produce great benefits and equally dangerous outcomes. Future acceptance will be a direct reflection of our trust in the government institutions and companies that put it to use. In what is arguably one of the most politically and emotionally charged periods of our lifetime, the negatives are in the spotlight, and an absence of trust has created a void for fear and skepticism to fill.
The tide will eventually turn.