If you are reading this, even if you live outside of Lee’s Summit, the Lee’s Summit Police Department has likely had access to your facial data, and with the click of a button could exploit that information, including every picture of you that has ever been posted on the internet. Even if a picture was deleted long ago, the LSPD still had access to it.
If that sounds like a nightmarish, dystopian, totalitarian-like “Big Brother” scenario, that’s because it is.
The technology in question is an infamous facial recognition tool developed by a company called Clearview AI.
Several countries, including Canada, have banned the technology; Canadian authorities ordered the company to immediately cease collecting images of Canadians and to delete all previously collected images and biometric data.
While the LSPD is one of the few local departments that has admitted to utilizing this technology at the organizational level, other departments like KCPD have admitted to individual officers having used the technology.
Here’s what a few AI experts have had to say about Clearview and similar AI technologies:
“One false match can lead to missed flights, lengthy interrogations, watch list placements, tense police encounters, false arrests or worse,” Jay Stanley, a policy analyst at the American Civil Liberties Union, said in a statement. “Government agencies including the F.B.I., Customs and Border Protection and local law enforcement must immediately halt the deployment of this dystopian technology.”
Joy Buolamwini, a renowned Black woman AI researcher at the M.I.T. Media Lab who has led a number of groundbreaking facial recognition studies, declared: “We must safeguard the public interest and halt the proliferation of face surveillance.”
“It’s not difficult to imagine terrifying uses of this, and we already have a hint, given how associates of Clearview AI have already made use of the tool.” – Rebecca Heilweil (Recode)
“The new federal study found that the kind of facial matching algorithms used in law enforcement had the highest error rates for African-American females.” – Natasha Singer & Cade Metz (The New York Times)
By “highest error rates,” they are likely referring to this study, which found that such facial recognition systems can misidentify Black faces 10 to 100 times more frequently than white faces.
So what was the Lee’s Summit Police Department doing using the technology in the first place, and are they still using it?
Our latest knowledge of the issue comes from a statement the department made to KCTV, in which it admitted to having used the software for at least three months. It has made no public statement indicating that it has stopped using the technology.
The fact that this software is being used in departments and agencies across the country is a clear sign that policing, as an institution and social function, is far more interested in spying and surveillance than in community and safety.
It is almost laughable that a bunch of old white men sat in a room and legitimately determined that spying on literally every single person in the city is a better way to achieve community safety than simply investing in education, mental health and other services that actually reduce “crime” to begin with.
This piece is Part 1 in our series: The Future of Racism
We are a startup Black-owned news organization dedicated to holding the powerful accountable through unapologetic truth-telling. We offer sharp, in-depth analysis on race, policing, housing, the environment, technology, local media propaganda and more. We are currently volunteer-based and depend on reader support to operate.
Support The Kansas City Defender’s independent media here, or Cashapp us at $KCDefender.
Enjoy our content? Make sure to subscribe to our email list!