The ethics of face recognition

We need AI researchers who are actively trying to defeat AI systems and exposing their inadequacies.

By Mike Loukides
December 13, 2016
Man in a Café, by Juan Gris, 1912. (source: Wikimedia Commons)

A few weeks ago, I wrote a post on the ethics of artificial intelligence. Since then, we’ve been presented with an excellent example to reflect on: the use of face recognition to identify people likely to commit crimes. (There have been a number of articles about this research; I’ll only link to this one.)

In my post, I said that we need to discuss what kind of society we want to build. I’m reasonably confident that, even under the worst societal conditions, we don’t want a society where you can be imprisoned because your eyes are set too closely together. The article in New Scientist shows that researchers are making the right objections: the training data for criminals and non-criminals was taken from two different sources; ethnicity issues may be at play; and we’re in danger of making AI into “21st century phrenology,” or “mathwashing.”


I also said that an AI developer can choose which projects to work on, but that it’s important that research not go behind closed doors, becoming opaque to the public and leaving everyone outside those doors vulnerable to whatever happens inside. That leads me to suggest going a few steps further. While researchers and developers can certainly choose not to participate in projects they object to, there are useful ways to go beyond non-involvement:

  • Some researchers have worked on ways to use hair style, coloring, and other cosmetics to defeat face recognition. That’s certainly a constantly escalating battle: what works now probably won’t work a year from now. More important, it requires understanding what face recognition does and how it works, and making that knowledge public.
  • Abe Gong’s work on COMPAS and Cathy O’Neil’s work on data-driven teacher evaluation expose the machinery by which math-driven bias works. Gong’s distinction between the statistical and human definitions of “bias” is particularly important: it’s easy to be statistically unbiased while humanly unfair (the sketch after this list makes that distinction concrete). O’Neil points out that it’s easy to create systems in which you can only win by gaming the system, and that people who try to play fair inevitably lose. We need many more researchers doing work like this: we need to understand how machine learning and AI are used, what the consequences are, and make that public knowledge.
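
To see how a model can be statistically unbiased yet humanly unfair, here’s a minimal simulation. It’s a sketch with made-up numbers, not Gong’s actual analysis: the toy score is calibrated by construction (among people scored s, a fraction s actually have the outcome), yet thresholding it produces very different false positive rates for two hypothetical groups with different base rates.

```python
import random

random.seed(0)

def simulate(base_rate, n=100_000):
    """Return (label, score) pairs for a toy, calibrated risk score.

    Scores are drawn from a Beta distribution whose mean is base_rate,
    and labels are drawn so that P(label | score=s) = s exactly: the
    score is calibrated within the group by construction.
    """
    people = []
    for _ in range(n):
        score = random.betavariate(5 * base_rate, 5 * (1 - base_rate))
        label = random.random() < score
        people.append((label, score))
    return people

def false_positive_rate(people, threshold=0.5):
    """Fraction of true negatives that the threshold wrongly flags."""
    negatives = [score for label, score in people if not label]
    return sum(s >= threshold for s in negatives) / len(negatives)

# Two hypothetical groups with different base rates of the outcome.
group_a = simulate(base_rate=0.3)
group_b = simulate(base_rate=0.5)

# Both scores are calibrated ("statistically unbiased"), but the group
# with the higher base rate bears far more false accusations.
print("False positive rate, group A:", round(false_positive_rate(group_a), 3))
print("False positive rate, group B:", round(false_positive_rate(group_b), 3))
```

This is the tension at the heart of the COMPAS debate: when base rates differ between groups, a score that is calibrated for both groups cannot, in general, also give both groups equal false positive rates.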

So, researchers who opt out can also choose to actively subvert the system, or work to expose its built-in flaws. Both functions are necessary.

As New Scientist points out, “the majority of U.S. police departments using face recognition do little to ensure that the software is accurate.” Police departments have neither the expertise nor the inclination to critically evaluate software that claims to make their jobs easier. “This is magic that will make your job easier” is a tempting sales pitch for people who are already doing a hard job. It’s way too easy for an uninformed official to fantasize about AI systems that will detect terrorists. It takes someone who isn’t ignorant about AI to point out the problems with such a proposal, not the least of which is that the number of terrorists is so small that it would be impossible to build a good training data set. And even with good training data, it’s very hard to imagine a system with a false positive rate under 5%; that rate means wrongly flagging roughly 16 million Americans, or roughly 370 million people worldwide, and such an error-prone system would be worse than useless.
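The arithmetic is worth doing explicitly. Here is a back-of-the-envelope calculation using the 5% figure and rough 2016 population numbers; the threat count and detection rate are hypothetical assumptions, chosen generously in the system’s favor.

```python
# Rough 2016 population figures, plus the 5% false positive rate from
# the text. The threat count and detection rate are hypothetical,
# deliberately generous assumptions for illustration.
us_population = 320_000_000
world_population = 7_400_000_000
false_positive_rate = 0.05
detection_rate = 0.99        # assume a nearly perfect detector
actual_threats = 10_000      # hypothetical; surely a large overestimate

print(f"False alarms, U.S.: {false_positive_rate * us_population:,.0f}")
print(f"False alarms, worldwide: {false_positive_rate * world_population:,.0f}")

# Bayes' rule: of everyone the system flags, how many are real threats?
flagged_real = detection_rate * actual_threats
flagged_innocent = false_positive_rate * (world_population - actual_threats)
precision = flagged_real / (flagged_real + flagged_innocent)
print(f"Chance a flagged person is a real threat: {precision:.4%}")
```

Even granting 10,000 real threats worldwide and a 99% detection rate, more than 99.99% of the people the system flags would be innocent. That’s the base rate problem, and no amount of training data fixes it.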

Staying away from problem topics is never an answer; more than ever, we need AI researchers who are committed to building the future we want, rather than the future we’re likely to get. That includes researchers who are actively trying to defeat AI systems as well as researchers who are exposing their inadequacies. Neither group can work from a position of ignorance. Doing so guarantees that we will be the victims, rather than the beneficiaries, of AI.
