Should Facial Recognition Be Used in Policing?

In January 2020, The New York Times broke the story of Clearview AI, a little-known facial recognition startup that sells its controversial technology to more than 600 law enforcement agencies and police units around the nation. CEO Hoan Ton-That says the technology's benefits include allowing police to identify criminals quickly and efficiently, often from a single photo uploaded to the service.

Facial recognition technology is already used in varying capacities throughout our nation. With an estimated 18,000 U.S. departments currently using the tool, many officers say it has helped them break open cold cases, ranging from child sexual abuse to assault, and stop criminals from leaving the country illegally. The technology has countless beneficial uses, but also many unknowns and serious concerns over issues like privacy and bias. Used with wisdom and abundant transparency, it might indeed revolutionize safety in our communities. But adopted too quickly, or without adequate understanding of its capabilities and limits, it might also lead to dangerous or even deadly injustices.

Growing Reservations

When you hear “facial recognition technology,” perhaps you think of Face ID on the iPhone or even the Clear kiosks used in major airports. But many people fear facial recognition technology because of the potential for privacy invasion and a growing Big Brother surveillance state: Who is watching us, what do they know, and is my data safe? Another major ethical issue is the potential for racial bias in the systems themselves, which can compound racial profiling and harm our brothers and sisters of color.

Facial recognition systems are built by feeding large sets of image data into an artificial intelligence (AI) system that identifies potential leads and photo matches with varying levels of accuracy. But problems with the quality of the data, the number of inputs, or how the system is used can yield false positives, and those could have devastating consequences in high-stakes situations.
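To make the false-positive concern concrete, here is a minimal, hypothetical sketch of how such a system compares faces. Real systems use neural networks to convert each face photo into a high-dimensional embedding vector; the toy vectors, names, and threshold below are invented for illustration. The key point is the threshold: loosen it and the system returns more candidate matches, including wrong ones.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def find_matches(probe, gallery, threshold=0.9):
    """Return names of gallery entries whose similarity to the probe photo
    meets the threshold. Lowering the threshold surfaces more leads but
    raises the risk of false positives."""
    return [name for name, emb in gallery.items()
            if cosine_similarity(probe, emb) >= threshold]

# Toy three-dimensional embeddings; production systems use hundreds of dimensions.
gallery = {
    "person_a": [0.9, 0.1, 0.4],
    "person_b": [0.2, 0.8, 0.5],
}
probe = [0.85, 0.2, 0.45]  # embedding of the uploaded photo

print(find_matches(probe, gallery, threshold=0.95))
```

Nothing in this sketch tells the system, or the officer reading its output, whether a returned name is actually the right person; that judgment still rests with the humans using the tool.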