The problem with building a “fair” system

The ability to appeal may be the most important part of a fair system, and it's one that isn't often discussed in data circles.

By Mike Loukides
January 1, 2018

Fairness is a slippery concept. We haven’t yet gotten past the first-grade playground: what’s “fair” is what’s fair to me, not necessarily to everyone else. That’s one reason we need to talk about ethics in the first place: to move away from the playground’s “that’s not fair” (someone has my favorite toy) to a statement about justice.

There have been several important discussions of fairness recently. Cody Marie Wild’s “Fair and Balanced? Thoughts on Bias in Probabilistic Modeling” and Kate Crawford’s NIPS 2017 keynote, “The Trouble with Bias,” do an excellent job of discussing how and why bias keeps reappearing in our data-driven systems. Neither of these pieces pretends to have a final answer to the problem of fairness. Nor do I. But I would like to expose some of the problems and suggest some directions for making progress toward the elusive goal of “fairness.”


The nature of data itself presents a fundamental problem. “Fairness” is aspirational: we want to be fair, we hope to be fair. Fairness has much more to do with breaking away from our past and transcending it than with replicating it. But data is inevitably historical, and it reflects all the prejudices and biases of the past. If our systems are driven by data, can they possibly be “fair”? Or do they just legitimize historical biases under the guise of science and mathematics? Is it possible to make fair systems out of data that reflects historical biases? I’m uncomfortable with the idea that we can tweak the outputs of a data-driven system to compensate for biases; my instincts tell me that approach will lead to pain and regret. Some research suggests that de-biasing the input data may be a better approach, but it’s still early.
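
To make “de-biasing the input data” slightly more concrete, here is a minimal sketch of one preprocessing idea from the fairness literature, often called reweighing: give each training example a weight so that, under the weights, a protected attribute and the outcome label look statistically independent. The column names and the tiny dataset are hypothetical, and this is an illustration of one published approach, not an endorsement of it as a fix.

```python
import pandas as pd

# Hypothetical training data: "group" is a protected attribute,
# "label" is the historical outcome we want to predict.
df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b", "b", "b"],
    "label": [1, 0, 0, 1, 1, 1, 0, 1],
})

# Reweigh each example so that, under the weights, group and label
# are independent: w(g, y) = P(g) * P(y) / P(g, y).
p_group = df["group"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
p_joint = df.groupby(["group", "label"]).size() / len(df)

def weight(row):
    g, y = row["group"], row["label"]
    return (p_group[g] * p_label[y]) / p_joint[(g, y)]

df["weight"] = df.apply(weight, axis=1)
print(df)
# These weights would then be passed to any learner that accepts
# per-example weights (e.g., sample_weight in scikit-learn).
```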

It is easier to think about fairness when there’s only one dimension. Does using a system like COMPAS lead to harsher punishments for blacks than non-blacks? Was Amazon’s same-day delivery service initially offered only in predominantly white neighborhoods? (Amazon has since addressed this problem.) Those questions are relatively easy to evaluate. But in reality, these problems have many dimensions. A machine learning system that is unfair to people of color might also be unfair to the elderly or the young; it might be unfair to people without college degrees, women, and the handicapped. We actually don’t know; for the most part, we haven’t asked those questions. We do know that AI is good at finding groups with similar characteristics (such as “doesn’t have a college degree”), even when that characteristic isn’t explicitly in the data. I doubt that Amazon’s same-day delivery service intentionally excluded black neighborhoods; what would “intention” even mean here? Software can’t have intentions. Developers and managers can, but their intention was certainly to maximize sales while minimizing costs, not to build an unfair system. Still, if you build a system that is optimized for high-value Amazon customers, that system will probably discriminate against low-income neighborhoods, and many of those low-income neighborhoods will “just happen” to be black neighborhoods. Building an unfair system isn’t the intent, but it is a consequence of machine learning’s innate ability to form classes.
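
Here is a small synthetic sketch of that class-forming behavior: the model below never sees the protected attribute, only a correlated “neighborhood” feature, yet its decisions still split sharply along the protected attribute. The data, feature names, and correlations are all made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (never given to the model).
protected = rng.integers(0, 2, size=n)

# A proxy feature that correlates with the protected attribute,
# e.g., a neighborhood indicator derived from street address.
neighborhood = np.where(rng.random(n) < 0.8, protected, 1 - protected)

# Historical "high-value customer" label; in this synthetic world
# it depends on the neighborhood (income, past spending, etc.).
label = (rng.random(n) < np.where(neighborhood == 1, 0.7, 0.3)).astype(int)

# Train only on the proxy -- the protected attribute is excluded.
X = neighborhood.reshape(-1, 1)
model = LogisticRegression().fit(X, label)
decisions = model.predict(X)

# The decisions still split sharply along the protected attribute.
for g in (0, 1):
    print(f"group {g}: positive rate = {decisions[protected == g].mean():.2f}")
```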

Going beyond the issue of forming (and affecting) groups in unintended ways, we have to ask ourselves what a fair solution would even mean. We can test many dimensions for fairness: race, age, gender, disability, nationality, religion, wealth, education, and many more. Wild makes the important point that, as questions about disparate impact cross different groups, we subdivide our training data into smaller and smaller classes, and that a proliferation of classes, with less data in each, is itself a recipe for poor performance. But there’s an even more fundamental problem. Is it possible for a single solution to be fair to all groups? We might be able to design a solution that’s fair to two or three groups, but as the number of groups explodes, I doubt it. Do we care about some kinds of discrimination more than others? Perhaps we do; but that’s a discussion that is bound to be uncomfortable.
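
Wild’s subdivision problem is easy to see with a back-of-the-envelope count. The attributes and level counts below are invented, but even this modest list produces thousands of intersectional groups, each with only a handful of training examples.

```python
from math import prod

# Hypothetical dimensions along which we might want to check fairness,
# with a rough number of levels for each.
attributes = {
    "race": 5,
    "age_band": 6,
    "gender": 3,
    "education": 4,
    "disability": 2,
    "income_band": 5,
}

n_groups = prod(attributes.values())
print(f"intersectional groups: {n_groups}")  # 3,600

# Even 100,000 training examples spread perfectly evenly leaves only
# a few dozen per group -- and real data is never spread evenly.
n_examples = 100_000
print(f"average examples per group: {n_examples / n_groups:.1f}")  # ~27.8
```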

We are rightly uncomfortable with building dimensions like race and age into our models. However, there are situations in which we have no choice. We don’t want race to be a factor in real estate or criminal justice decisions, nor do we want our systems finding their own proxies for race, such as street addresses. But what about other kinds of decisions? I’ve recently read several articles about increased mortality in childbirth for black women, the best of which appeared in ProPublica. Mortality for black women is significantly higher than for white women, even when you control for socioeconomic factors: with everything else equal, black women are at much higher risk than white women, and nobody knows why. This means that if you’re designing a medical AI system, you have to take race into account. That’s the only way to ensure that the system has a chance to consider the additional risks black women face, and the only way to ensure that it might be able to determine the factors that actually affect mortality.

Is there a way out of this mess? There are two ways to divide a cake between two children. We can get out micrometers and scales, measure every possible dimension of the cake, and cut the cake so that there are two exactly equal slices. That’s a procedural solution; it describes a process that’s intended to be fair, but the definition of fairness is external to the process. Or we can give the knife to one child, let them make the cut, then let the other choose. This solution builds fairness into the process.

Computational systems (and software developers) are inherently more comfortable with the first solution. We’re very good at doing computation with as many significant digits as you’d like. But cutting the cake ever more precisely isn’t likely to be the answer we want. Slicing more precisely only gives us the appearance of fairness—or, more aptly put, something that we can justify as “fair,” but without putting an end to the argument. (When I was growing up, the argument typically wasn’t about size, but who got more pieces of cherry. Or the icing flower.) Can we do better? Can we come up with solutions that leave people satisfied, regardless of how the cake is cut?

We’re ultimately after justice, not fairness. And by stopping at fairness, we shortchange the people most at risk. If justice is the real issue, what are we missing? In a conversation, Anna Lauren Hoffmann pointed out that often the biggest difference between having privilege and being underprivileged isn’t formal; it’s practical. People can formally have the same rights or opportunities but differ in their practical capacity to seek redress when those rights are violated. Having privilege means having the resources to appeal a wrong when you’re shortchanged by an unfair system: the time or the economic bandwidth to hire a lawyer, spend hours on the phone, contact elected officials, and do whatever it takes. If you are underprivileged, those things can be effectively out of reach. We need to make systems that are more fair (whatever that might mean); but, recognizing that our systems aren’t fair, and can’t be, we also need to provide mechanisms to repair the damage they do. And we need to make sure those mechanisms are easily accessible, regardless of privilege.

The right to appeal builds fairness into the system, rather than treating fairness as an external criterion. It’s similar to letting one child cut the cake and the other choose. The analogy isn’t perfect, because the appeal process can itself be unfair, but it’s a huge step forward. When an appeal is possible, and available to all, you don’t need a perfect algorithm.
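
One concrete way to build the appeal into the system, rather than bolting it on afterward, is to make every automated decision reviewable by default: record what the system decided, what it saw, and which model made the call, and give a human reviewer the power to overturn it. The sketch below is a hypothetical minimum, not a claim about how any particular system works.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Decision:
    """A single automated decision, recorded so that it can be appealed."""
    subject_id: str
    model_version: str
    inputs: dict                  # the features the model actually saw
    outcome: str                  # what the system decided
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    appeal_note: Optional[str] = None
    overturned: bool = False

    def appeal(self, note: str, new_outcome: Optional[str] = None) -> None:
        """Record an appeal; a human reviewer may replace the outcome."""
        self.appeal_note = note
        if new_outcome is not None:
            self.outcome = new_outcome
            self.overturned = True

# Usage: a denied applicant appeals, and a reviewer overturns the decision.
d = Decision("applicant-42", "risk-model-1.3", {"zip": "00000"}, "deny")
d.appeal("Income documentation was missing from the original file.", new_outcome="approve")
print(d.overturned, d.outcome)
```

The important design choice is that the record exists for everyone by default, not only for those who know to ask for it or can afford to pursue it.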

So, can we get to some conclusions? Being fair is hard, algorithmically or otherwise. Even deciding what we mean by “fair” is difficult. Do not take that to mean that we should give up; do take it to mean that we shouldn’t expect easy, simple solutions. We desperately need to have a discussion about “fairness” and what that means. That discussion needs to be broad and inclusive. And we may need to conclude that “fairness” is contextual, and isn’t the same in all situations.

As with everything else, machine learning can help us to be fair. But we’re better off using machine learning to understand what’s unfair about our data, rather than trusting our systems to make data-driven decisions about what “fair” should be. While our systems can be assistants or even collaborators, we do not want to hand off responsibility to them. When we treat machine learning systems as oracles, rather than as assistants, we are headed in the wrong direction. We can’t trick ourselves into thinking that a decision is fair because it is algorithmic. We can’t afford to “mathwash” important decisions.

Finally, however we make decisions, we need to provide appeal mechanisms that are equally available to all—not just to those who can afford a lawyer, or who can spend hours listening to music on hold. The ability to appeal may be the most important part of a fair system, and it’s one that isn’t often discussed in data circles. The ability to appeal means that we don’t have to design systems that get it right all the time—and that’s important because our systems most certainly won’t get it right all the time. Fairness ultimately has less to do with the quality of our decisions than the ability to get a bad decision corrected.
