Lincoln Center for Applied Ethics directors weigh in on Facebook’s move to halt facial recognition
Earlier this month, Facebook announced that it will soon shut down facial recognition software on its platforms — effectively removing the facial recognition templates of more than a billion Facebook users. The social media service has used the software for over a decade to automatically identify people in photos and videos.
The move to shutter facial recognition on Facebook comes at a time when use of the technology has become exceedingly controversial. Experts have cited privacy concerns when it comes to increased surveillance, underscoring the need for robust laws and regulation. The American Civil Liberties Union (ACLU) has called facial recognition “an unprecedented threat to our privacy and civil liberties.”
The Lincoln Center for Applied Ethics at Arizona State University critically examines issues of ethical innovation like these, focusing on humane technology and our relationship to the built environment. Center Director Elizabeth Langland and Associate Director Gaymon Bennett gave insight on the ethicality of facial recognition technology and what this news means for the future of power and privacy on social media.
Editor’s note: Some answers have been edited for length and/or clarity.
Question: What ethical implications can facial recognition have?
Elizabeth Langland: Facial recognition isn’t always accurate. We have to underline that it’s very good with white men, very poor on Black women and not so great even on white women. There are stories about this going wrong: a man arrested in front of his children because of a mistake in facial recognition. The police trusted the facial recognition technology more than the individual, and he was held for several hours until they realized the error. When it goes wrong, it really can go wrong and have serious consequences for people’s lives.
Gaymon Bennett: We often talk about things like facial recognition software and other kinds of data aggregation as a privacy problem, and no doubt, in a certain sense, it is, but really what’s at stake, as far as I’m concerned, is the ability to exercise power. This is about powerful actors being able to manage the behavior of people they are in a position to govern. As a culture, we trust technology. We think technology is neutral. But this “neutral” technology is amplifying real-world stuff, and the real-world stuff is racist and sexist. So now this hyper-powerful machinery is chewing on that.
Q: What impact, if any, do you think this move from Facebook will have on the use of facial recognition by other tech giants?
Langland: I actually think this was an easy thing for Facebook to do. It doesn’t affect their financial model in any way, and it will have no impact on the use of facial recognition by other tech giants.
Bennett: I’m happy that we now have a real-world, high-profile example where we can point to something and say, “See, sometimes it’s just worth shutting down.” However, in the press release Facebook circulated, they imply that tech is kind of a neutral tool and it can be good sometimes, it can be bad sometimes. I don’t buy this. Facebook has 3 billion users, so everything they do has this unbelievable effect on the world.
Q: Do you feel this is a meaningful step in the humane technology movement?
Langland: There’s kind of a moratorium right now on facial recognition technology, and Facebook is clearly participating in that. But there are no laws. That’s one of the things that concerns the ACLU and other organizations that are focused on human rights. Congress has taken no action. All of these things are so new, so I think people don’t want to step wrong, but meanwhile, there’s this delay in the system that leaves people very open to these technologies and to abuse. One of the things about Facebook is that they’re getting a whole circle of friends and acquaintances, or even just contacts. Google has data on you, but they don’t know about your circle. They can’t then start affecting the other people you’re in contact with. Facebook is able to gather things like that together.
Bennett: Anytime Facebook deletes data, it’s important. A data-centered economy is an economy that’s based on the ability of people to spy on you so that they can manipulate your behavior. These are major social systems that have now become cultural systems. We have a whole set of shared habits about how we live together in an age of digitally mediated information and handheld devices, down to the physiology of the device itself: you’re sitting in a restaurant, somebody else’s phone buzzes, and you glance down at your own. We have this very intimate relationship to the whole networks of data that wrap themselves around us.
Facebook has a pretty unusual position within this economy. Facebook can tie data to real people. This raises lots of red flags. Is this regulated? No. How should it be regulated? Who knows? But Facebook was always kind of a kingmaker in this. Google can tie it back to patterns, whereas Facebook was tying it to a person. The idea that they can then manipulate a whole chain of relationships was always more significant.
Q: How could facial recognition be used in a more ethical way?
Langland: In Scotland, schools used it during COVID in lunch lines, so that kids could go through and be charged for their lunch based on their face without handing over cards. I could also see it being used at an airport to get you onto a plane without having to handle documents. But of course, once they have it, it’s subject to abuse. You can’t get away from it once they have all these faces identified. How do you limit that? Can you really legislate limits on its use?
Bennett: I think there are uses that lead to efficiencies, and I think there are uses that lead to entertainment. I think that any powerful technology whose rationale is efficiency or entertainment will always lead to trouble. There are real questions around facial recognition software and the blind. I can’t speak to whether that should be counted as a good thing, because I haven’t put this question to people I know with disabilities or within disability studies, but it would be interesting to know whether they think facial recognition is a good idea. For the foreseeable future, who knows whether this technology could ever be used for good? However, I don’t think we should take the position that we shouldn’t innovate around potentially powerful technologies simply because we don’t know whether or not they can lead to harm.
Top image by John Stobbe/The College of Liberal Arts and Sciences