Philosophers Lead Academics’ Effort To Restrict Facial Recognition Technology


If you’re like most people, you probably haven’t been thinking much about facial recognition technology. Philosopher Evan Selinger (Rochester Institute of Technology) has, and he thinks we all should be, too, for it poses a serious threat to human welfare. Now he, Peter Asaro (a philosopher at The New School), and others have written an open letter to Amazon CEO Jeff Bezos and the Amazon executives in charge of artificial intelligence, objecting to the firm’s provision of its facial recognition technology to the government.

Hardly a Luddite or a proponent of the (in my view hopeless) precautionary principle, Selinger knows that some dangerous technologies are sufficiently beneficial, and so “are worth preserving with rules that mitigate harm but accept reasonable levels of risk.” However, he and Woodrow Hartzog (Northeastern) argue:

Facial recognition systems are not among these technologies. They can’t exist with benefits flowing and harms adequately curbed. That’s because the most-touted benefits of facial recognition would require implementing oppressive, ubiquitous surveillance systems and the kind of loose and dangerous data practices that civil rights and privacy rules aim to prevent. Consent rules, procedural requirements, and boilerplate contracts are no match for that kind of formidable infrastructure and irresistible incentives for exploitation.

The technology is becoming more widespread; one of its most prominent examples is Amazon’s “Rekognition.” Amazon has been providing government agencies with Rekognition, prompting objections from the American Civil Liberties Union (ACLU):

Rekognition is a powerful surveillance system readily available to violate rights and target communities of color. 

Amazon states that Rekognition can identify people in real-time by instantaneously searching databases containing tens of millions of faces. Amazon offers a “person tracking” feature that it says “makes investigation and monitoring of individuals easy and accurate” for “surveillance applications.” Amazon says Rekognition can be used to identify “all faces in group photos, crowded events, and public places such as airports”—at a time when Americans are joining public protests at unprecedented levels.

Amazon also encourages the use of Rekognition to monitor “people of interest,” raising the possibility that those labeled suspicious by governments—such as undocumented immigrants or Black activists—will be targeted for Rekognition surveillance. Amazon has even advertised Rekognition for use with officer body cameras, which would fully transform those devices into mobile surveillance cameras aimed at the public.

People should be free to walk down the street without being watched by the government. Facial recognition in American communities threatens this freedom. In overpoliced communities of color, it could effectively eliminate it. The federal government could use this facial recognition technology to continuously track immigrants as they embark on new lives. Local police could use it to identify political protesters captured by officer body cameras. With Rekognition, Amazon delivers these dangerous surveillance powers directly to the government.
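
To make the capabilities the ACLU describes concrete, here is a minimal sketch of a face-search request against Rekognition using Amazon’s boto3 SDK for Python. The search_faces_by_image call is part of Rekognition’s actual API; the collection name, image file, and match threshold below are illustrative assumptions, not details drawn from the letter or the ACLU’s analysis.

```python
# Minimal sketch: searching a face collection for matches to a face
# in a submitted photo. Identifiers and thresholds are hypothetical.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("street_photo.jpg", "rb") as f:  # hypothetical input image
    image_bytes = f.read()

# Search a previously built face collection (populated via index_faces)
# for faces similar to the largest face detected in the photo.
response = client.search_faces_by_image(
    CollectionId="example-face-collection",  # hypothetical collection
    Image={"Bytes": image_bytes},
    MaxFaces=5,                   # report at most five matches
    FaceMatchThreshold=80.0,      # minimum similarity score to return
)

for match in response["FaceMatches"]:
    face = match["Face"]
    print(f"FaceId {face['FaceId']}: {match['Similarity']:.1f}% similarity")
```

A few lines like these are essentially the whole interface. The concern the critics raise is less about the code than about the database of indexed faces behind it, which is what makes real-time identification at the scale the ACLU describes possible.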

As Selinger and Hartzog note, surveillance via facial recognition technology is itself oppressive, but it is also a tool the government is likely to use (or allow to be used) to harm people and violate rights through, for example, “rampant, nontransparent, targeted drone strikes; overreaching social credit systems that exercise power through blacklisting; and relentless enforcement of even the most trivial of laws, like jaywalking and failing to properly sort your garbage cans.”

They also think that “technology creep,” an idea developed by Selinger and Brett Frischmann (Villanova) in Re-Engineering Humanity, is a reasonable worry to have regarding this technology:

once the infrastructure for facial recognition technology grows to a certain point, with advances in machine learning and A.I. leading the way, its use will become so normalized that a new common sense will be formed. People will expect facial recognition technology to be the go-to tool for solving more and more problems, and they’ll have a hard time seeing alternatives as anything but old-fashioned and outdated. This is how “techno-social engineering creep” works.

In an open letter to Amazon executives, Asaro, Selinger, Hartzog, and others call for the firm to:

  • Stop supplying government and law enforcement agencies with facial recognition technology.
  • Cancel its cloud services contracts with Palantir and other intermediary companies that provide data analytics and AI services to police or militaries.
  • Establish ethical guidelines that prohibit the weaponization and militarization of this and other technologies that threaten privacy, civil and human rights, and establish transparency and accountability mechanisms to ensure those guidelines are implemented.

You can read the rest of the letter and add your name to the list of signatories here.

[image: Stefaan De Croock, “Elsewhere”]


1 Comment
Jack
5 years ago

Perhaps philosophers could lead efforts to examine the “more is better” relationship with knowledge, which is the source of the ever-accelerating emergence of ever greater powers.

Such a broader focus would be more rational, efficient, and philosophical than trying to deal with each problematic technology on a case-by-case basis. If the knowledge explosion continues to accelerate, which is inevitable if we don’t examine it more closely, it’s not even going to be possible to approach problematic cases one at a time, as there will be too many of them to keep track of.