What If You Were Arrested By Mistake? The Perils of Facial Recognition
Facial recognition technology is probably the most powerful surveillance tool ever invented. It is already used by law enforcement, border police, airports, and smartphone users.
It is a type of biometric technology that works in three steps. A face is detected in an image, converted into a numeric vector, and stored as digital data - all with the help of trained algorithms that automate the process. That vector can then be matched against vectorized images from other databases to, for example, identify or authenticate a person.
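The three steps above can be sketched in a toy program. Everything here is invented for illustration - real systems use trained neural networks for each stage, not the stub functions below:

```python
# Hypothetical sketch of the detect -> vectorize -> match pipeline.
# Function names, data, and the tolerance value are made up.

def detect_face(image):
    # Step 1: locate the face region in the image (stubbed here)
    return image["face_region"]

def vectorize(face_region):
    # Step 2: turn the cropped face into a fixed-length numeric vector
    return [round(pixel / 255, 2) for pixel in face_region]

def match(vector, database, tolerance=0.05):
    # Step 3: compare against stored vectors to identify the person
    for name, stored in database.items():
        if all(abs(a - b) <= tolerance for a, b in zip(vector, stored)):
            return name
    return None  # no match found

image = {"face_region": [30, 96, 224]}
database = {"alice": [0.12, 0.38, 0.88]}
print(match(vectorize(detect_face(image)), database))  # prints "alice"
```

The point of the sketch is the shape of the pipeline, not the details: once a face has been reduced to a vector, identification becomes a database lookup.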
Most tech giants - Google, Apple, Facebook, Amazon, and Microsoft among them - are currently researching and developing facial recognition technologies, as the market is projected to reach $16.74 billion by 2030.
One of the first companies to advance facial recognition technology was Facebook, in 2014, with DeepFace. Their method could correctly determine whether two pictures belonged to the same person 97.35% of the time - only 0.15% worse than a human performing the same task. You can read Facebook's research paper on the topic.
In 2015, researchers from Google published the paper "FaceNet: A Unified Embedding for Face Recognition and Clustering." FaceNet generates high-quality face mapping from the images using deep learning architectures. It achieved a new record accuracy of 99.63%.
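FaceNet's key idea is that faces are mapped to vectors ("embeddings") whose distance reflects identity: two photos of the same person land close together, photos of different people land far apart, and a distance threshold decides the match. The vectors and threshold below are made up for illustration; real systems learn embeddings with deep networks and tune the threshold on validation data:

```python
import math

# Toy illustration of embedding-based matching. The embeddings below
# are fabricated 3-dimensional vectors; FaceNet's are 128-dimensional.

def l2_distance(a, b):
    # Euclidean distance between two embeddings
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(emb_a, emb_b, threshold=1.1):
    # Hypothetical threshold; real systems calibrate it empirically
    return l2_distance(emb_a, emb_b) < threshold

alice_photo_1 = [0.12, -0.40, 0.88]
alice_photo_2 = [0.10, -0.38, 0.90]
bob_photo = [-0.75, 0.52, -0.31]

print(same_person(alice_photo_1, alice_photo_2))  # True
print(same_person(alice_photo_1, bob_photo))      # False
```

Note that the threshold is where accuracy trade-offs live: set it too loose and strangers get matched to each other; set it too tight and the same person across two photos gets rejected.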
From 2015 until today, facial recognition has been growing steadily, with every large tech company presenting a new product now and then. However, such a powerful and invasive surveillance tool does not come without flaws:
The New York Times reported that a New Jersey man, accused of shoplifting and trying to hit an officer with a car, was the third known Black man to be wrongfully arrested based on a facial recognition match.
Various studies, such as "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification," have evaluated facial analysis software and found much higher rates of misclassification when the algorithms were applied to darker-skinned women.
An independent assessment by the US National Institute of Standards and Technology (NIST) has confirmed these studies: across the 189 algorithms it evaluated, facial recognition was least accurate on women of color.
The American Civil Liberties Union (ACLU) recently conducted a test of Rekognition, the facial recognition tool developed by Amazon. The software incorrectly matched 28 members of Congress, identifying them as other people who had been arrested for a crime.
As a technology, facial recognition can have multiple uses, and some of them are definitely life-saving, such as finding missing or disoriented people, or identifying exploited children. However, these are not necessarily the most profitable uses, and technology implementation and spread will certainly follow the money.
Are there transparency standards to ensure accountability and oversight over these tools? Are people aware of where, when, and how these surveillance systems are being implemented? Or are we just welcoming the Orwellian State with open arms?
I love technology and innovation - it shows the power of human creativity, it is limitless, and it can save lives and improve society.
My point with this article is that facial recognition operates on a slippery slope. It is invasive, as it normalizes constant identification and authentication by governments and third parties. It can easily be used in totalitarian ways to control people's whereabouts, preferences, and thoughts. It can easily be incorporated into tools that infringe privacy and fundamental rights.
When talking about facial recognition, we need constant oversight and risk mitigation measures to guarantee that people are being respected, heard, and allowed to intervene. Data protection rights, such as those expressed in the GDPR, are more important than ever: the rights of access, rectification, erasure, restriction of processing, and data portability, as well as the right to object and not to be subject to a decision based solely on automated processing, including profiling.
Europe is already taking a stricter stance on facial recognition technologies. The United Kingdom's Information Commissioner's Office (ICO) has fined Clearview AI Inc £7.55M for using images of people in the UK, and elsewhere, that were collected from the web and social media to create a global online database that could be used for facial recognition. In the United States, many communities have already banned the use of facial recognition. Microsoft is also limiting access to its facial recognition tools, due to ethical concerns.
These are all positive developments, which aim at leaving the hype aside and focusing on more constitutional aspects, such as the technology's impact on people's privacy, freedom, and dignity.
Imagine if a facial recognition system wrongfully identified you as the suspect of a crime, and the criminal justice system of your country took that data as the truth and arrested you. "No questions asked, as algorithms do not lie." This would be the perfect nightmarish marriage between Orwell and Kafka, which none of us wants to see happening.
The only antidote to that is the notion that the protection of fundamental rights is non-negotiable.
*
See you next week. All the best, Luiza Jarovsky