- Facial recognition technology has been on the rise due to security and marketing demands.
- The technology stores your “face print” and you have no control over that.
- Is a bit more convenience worth undermining our intimate information?
First things first: No, I’m not typing this on a burner laptop inside an underground dungeon while wearing a tinfoil hat. But hear me out. It could be wise not to bombard our social media accounts with pictures of our daily activities, especially ones with our faces in them, because guess what: the government and the big corporations are keeping an eye on us…
In the past couple of weeks, you may have come across the latest tech controversy to disrupt Silicon Valley and Washington, D.C. alike. I’m talking about the facial recognition technology startup Clearview AI and the uncovering of its enormous database, which contains more than three billion pictures scraped from all over the internet, according to its CEO, Hoan Ton-That.
Here’s the gist of the controversy: Clearview AI claims its purpose is to help law enforcement identify unknown faces by matching them against personal images collected online, using facial recognition software that draws on the company’s absurdly huge database. Major red flag alert! The startup claims that approximately 1,000 agencies across the United States and Canada buy or request the scraped information Clearview AI collects, sometimes without any official approval from their respective cities or states, and that there is practically nothing to be done about its practices because no one can control what it does with the collected data.
Another major red flag alert!
This controversy is undoubtedly disturbing, but Clearview AI isn’t the only platform utilizing facial recognition technology for ominous purposes. So, in order to better understand the situation, we must first go through a facial recognition technology 101 crash course.
Essentially, a facial recognition system is a technology capable of identifying a person from a digital source, whether images or videos. The technology detects and analyzes an individual’s facial features (e.g. the distance between the eyes, the width of the nose) to create a “face print” that can then be matched against images and videos online to identify that exact individual. Primitive facial recognition technologies create face prints through 2D camera scans, while more advanced ones, like Apple’s Face ID, project tens of thousands of infrared dots, a form of 3D scanning with invisible light, to create more accurate face prints.
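The matching step described above can be sketched as a nearest-neighbor search over stored feature vectors. To be clear, everything in this toy example is illustrative: real systems use learned embeddings with hundreds of dimensions, not three hand-picked measurements, and the names, numbers, and threshold here are invented for the sake of the sketch.

```python
import math

# Hypothetical "face prints": tiny vectors of normalized facial
# measurements (eye distance, nose width, etc.). Purely illustrative.
known_faces = {
    "alice": [0.42, 0.31, 0.77],
    "bob":   [0.58, 0.45, 0.62],
}

def distance(a, b):
    """Euclidean distance between two face prints."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, database, threshold=0.1):
    """Return the closest enrolled identity, or None if no stored
    print falls within the match threshold."""
    best_name, best_dist = None, float("inf")
    for name, stored in database.items():
        d = distance(probe, stored)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# A fresh scan close to Alice's stored print matches her;
# an unfamiliar print matches no one.
print(identify([0.41, 0.32, 0.76], known_faces))  # → alice
print(identify([0.10, 0.90, 0.20], known_faces))  # → None
```

The privacy problem follows directly from the design: once a print is enrolled in the database, every future scan anywhere can be compared against it.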
This revolutionary technology is considered the most natural of all biometric measurements, compared with fingerprints and iris recognition. For that reason, it is gradually being integrated into airports for boarding passengers and into phones for security and encryption purposes, and in China, authorities are determined to make everything, from boarding an airplane to ordering a KFC bucket, run on facial recognition.
And that’s another major red flag!
“It’s an issue of how much privacy are we going to sacrifice in the name of a bit more convenience,” said Dr. Anto Mohsin, assistant professor in science and technology studies at Northwestern University in Qatar. “I don’t mind spending one more second typing my password on my phone if it means my face isn’t stored in a database somewhere.”
The technology sounds very convenient, but also clearly invasive. Imagine that, at the peak of facial recognition’s use, your face is constantly being scanned wherever you go: the bank, the mall, the grocery store. And this isn’t like having your face on CCTV, because CCTV cameras don’t create and store personal face prints, records that arguably violate a very personal detail about you: your very own face.
Take it one step further and imagine that you are participating in a protest or a public demonstration, and your face is being detected and stored by parties you are unaware of. Such an eerie thought is enough to deter you from practicing your right to express yourself, and that could ultimately undermine the essence of any democratic society.
“I don’t want the government to know, or have it stored in a database, that I went at this time to this place,” said Anthony Wallace, network engineer and computing security analyst at Northwestern University in Qatar. “I don’t have anything to hide, but that’s the argument you hear when it comes to a multitude of security related technologies.”
Here is the thing: deploying such penetrative surveillance technologies at that micro level, specifically facial recognition, obstructs our ability to stay anonymous. You practically can’t do anything without someone, somewhere, harvesting your personal information. And while that might be less concerning under friendly governments, it becomes alarming under authoritarian ones, according to Wallace.
Take China for example. We talked about how the autocratic republic aims to integrate facial recognition technology into practically everything in the country, but considering its authoritarian reality, disturbing intentions and activities cannot be dismissed. For instance, China is already using the technology to ID and publicly shame jaywalkers by posting their personal information on public screens. And on a more sinister note, the technology is being used to track and control the Uyghurs, a largely Muslim minority in the Xinjiang autonomous region, in the first known example of a government intentionally using A.I. for racial profiling.
But governments can at least be held responsible for their actions, to some extent. Private companies, on the other hand, are practically free as a bird. Google and Facebook are two of the world’s leading investors in facial recognition technology, but Facebook, for instance, is one of its most recognized abusers. Less than a month ago, Facebook agreed to a $550 million settlement in a class action suit claiming that its photo-tagging feature failed to comply with Illinois’ 2008 Biometric Information Privacy Act, which states that permission is required before collecting biometric data.
However, as Hoan Ton-That said, once a company collects the data, anything that follows becomes trivial. Paying a fine, or discontinuing a piece of software, does not ensure that the already collected data will be deleted and not used for other purposes. Then again, how can we blame these companies for abusing our personal information when we actively feed it to them, knowing they make astronomical profits selling it to advertisers and other third parties? Not only that, but let me ask you a pretty embarrassing question: have you ever read the fine print of these companies’ terms and conditions before cluelessly agreeing to them?
Here is a final word on just how creepy, and possibly sinister, the uses of facial recognition A.I. could become:
“They say that the road to hell is paved with good intentions,” said Wallace, “and we can end up going down these bad paths with really great intentions if we don’t understand the implications that we have for the future.”