When you think about it, facial recognition is a deeply “human” action. It’s the most common way for people to recognize one another, it’s one of the earliest stages of our developmental attachment to our parents, and it is, by far, the easiest way to evaluate someone’s credibility, intentions, and personality. It explains why we want face-to-face meetings for our big decisions (has anyone ever proposed via text?), and why the Constitution sometimes requires that we see one another in court. We even show our faces to one another when there’s no need: your social media profile almost certainly has a picture of your face somewhere, largely because it’s a way for people to know your account is really “you,” and not an evil impostor.
It’s that very closeness to human nature that makes automated facial recognition both readily understood and deeply alienating. The fact is that, unlike many other complicated data processing or machine learning actions, recognizing a human face is something that we all basically understand and that resonates with us in a way that other forms of recognition do not. Think of it this way: fingerprints are more reliable than facial recognition (just ask Chinese authorities), but people are more comfortable with fingerprint ID than with facial ID, and by a wide margin. Why? Plenty of reasons, but one is certainly the feeling that our faces are somehow more intrinsically “us,” and so deserving of more protection (regardless of whether that makes sense from a privacy perspective). Facial recognition technology (FRT) implicates our very notion of ourselves, and when and how it is acceptable to be seen.
At this point, you’re likely thinking to yourself, “Hmm… a discussion of digital identity, risks of a data processing and identification tool, dangerous implications for personal autonomy – I wonder if there’s a specific company he’s going to discuss…”
Your Face (Book)
You may have heard that Facebook uses facial recognition technology for all kinds of purposes, including identifying individuals in photos who haven’t been tagged or who have removed their tag. You know, because nothing says “please secretly figure out who I am” like untagging yourself from a photo. Nevertheless, the scope of the technology has always been shrouded in a degree of mystery — when and how does Facebook deploy facial recognition, and why? As it happens, we still don’t have a clear understanding of the extent to which Facebook scans and processes faces (which is a huge problem under GDPR), but they did provide an outline for how users can deactivate FRT, which is a good thing.
Except, when Consumer Reports went through 31 Facebook profiles to change the settings, the ability to deactivate FRT was missing for 8 of them. Let’s be clear here: it wasn’t that CR said the deactivation toggle didn’t work when switched to “off,” or that it wasn’t as comprehensive as they would have liked. It was that, for more than a quarter of the accounts they examined, the advertised ability to turn off facial recognition simply didn’t exist. That is much, much more problematic than diminished functionality or unclear settings, because it means that the actual code underlying those accounts materially differed from that of all other accounts.
Facebook’s situation can be resolved, and presumably will be, with some relatively quick fixes, although the regulatory fallout from this investigation remains to be seen (especially if some of the accounts that lacked the ability to turn FRT off belonged to users in the EU). But Facebook’s broader problem here is that it has deployed a tool that fits into a very narrow band: it is technologically advanced and powerful, yet understandable (at a basic level) to the average person, who tends not to like it very much. In other words, FRT is a technology just waiting for a PR disaster to incite a backlash, and Facebook may have just stumbled into one.
On the Street Where You Live
Of course, Facebook isn’t the only one facing a backlash about FRT. As facial scanning devices become more commonplace throughout the United States and Europe, advocacy groups and privacy-minded citizens are voicing grave concerns. One reason for their concern is that the proliferation of FRT throughout cities will lead to an even more pervasive, consistent surveillance state. As many cities in the US (like Chicago or Detroit) move towards ubiquitous, real-time monitoring, these concerns are real.
These concerns prompted the City Council of San Francisco to take the extraordinary step of banning the use of facial recognition technology by the police and city agencies. This decision, prompted in part by reports of the dangers of facial scanning deployed on a mass scale, as in China, leaves San Francisco in a unique position: it now can and must demonstrate whether the absence of FRT as a tool for law enforcement and city officials results in a meaningful decline in their ability to conduct their affairs. Those who want to restore FRT as a tool will, no doubt, lobby hard for the idea that any increase in crime or decrease in efficiency ties directly back to the ban, but the quality of those arguments remains to be seen.
The concerns go beyond San Francisco. Congress has convened a number of hearings on FRT and its appropriate uses, particularly as the relationship between monitoring and Fourth Amendment protections comes under more scrutiny. The central question is not whether legislators will ban FRT (they won’t) or whether police and other agencies will stop wanting to use it (they won’t), but whether privacy advocates and government can agree on an appropriate balance between the two. That isn’t an easy balance to strike: consider London, which is both subject to the GDPR and the second-most surveilled city on earth (after Beijing). The right outcome is finding workable criteria for both the responsible use of, and reasonable limitations on, FRT. If that’s not the goal, we’ll find ourselves facing an increasingly dehumanized approach to an intrinsically human act.