Automating ethics

Machines will need to make ethical decisions, and we will be responsible for those decisions.

By Mike Loukides
April 22, 2019

We are surrounded by systems that make ethical decisions: systems approving loans, trading stocks, forwarding news articles, recommending jail sentences, and much more. They act for us or against us, but almost always without our consent or even our knowledge. In recent articles, I’ve suggested the ethics of artificial intelligence itself needs to be automated. But my suggestion ignores the reality that ethics has already been automated: merely claiming to make data-based recommendations without taking anything else into account is an ethical stance. We need to do better, and the only way to do better is to build ethics into those systems. This is a problematic and troubling position, but I don’t see any alternative.

The problem with data ethics is scale. Scale brings a fundamental change to ethics, and not one that we’re used to taking into account. That’s important, but it’s not the point I’m making here. The sheer number of decisions that need to be made means that we can’t expect humans to make those decisions. Every time data moves from one site to another, from one context to another, from one intent to another, there is an action that requires some kind of ethical decision.

Gmail’s handling of spam is a good example of a program that makes ethical decisions responsibly. We’re all used to spam blocking, and we don’t object to it, at least partly because email would be unusable without it. And blocking spam requires making ethical decisions automatically: deciding that a message is spam means deciding what other people can and can’t say, and who they can say it to.

There’s a lot we can learn from spam filtering. It only works at scale; Google and other large email providers can do a good job of spam filtering because they see a huge volume of email. (Whether this centralization of email is a good thing is another question.) When their servers see an incoming message that matches certain patterns across their inbound email, that message is marked as spam and sorted into recipients’ spam folders. Spam detection happens in the background; we don’t see it. And the automated decisions aren’t final: you can check the spam folder, retrieve messages that were filed there by mistake, and mark them as “not spam.”
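
To make the scale argument concrete, here is a minimal, hypothetical sketch (nothing like Gmail’s real machinery): a provider that aggregates spam reports across its entire user base can see a pattern that no single inbox ever could.

```python
# A hypothetical sketch of scale-dependent spam filtering -- not Gmail's
# actual algorithm. The provider aggregates one signal (user spam reports)
# across every inbox it hosts; any one user sees far too little email
# to detect the pattern on their own.
from collections import Counter

class ProviderSpamFilter:
    def __init__(self, report_threshold=100):
        self.reports = Counter()                 # spam reports per sender, across all users
        self.report_threshold = report_threshold

    def report_spam(self, sender):
        """A user marks a message from `sender` as spam."""
        self.reports[sender] += 1

    def is_spam(self, sender):
        """Classify a sender using signals aggregated across the whole user base."""
        return self.reports[sender] >= self.report_threshold

spam_filter = ProviderSpamFilter()
for _ in range(150):                             # 150 different users report the same sender
    spam_filter.report_spam("bulk@example.com")
print(spam_filter.is_spam("bulk@example.com"))   # True: the pattern is only visible at scale
print(spam_filter.is_spam("friend@example.com")) # False
```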

Credit card fraud detection is another system that makes ethical decisions for us. Most of us have had a credit card transaction rejected and, upon calling the company, found that the card had been cancelled because of a fraudulent transaction. (In my case, a motel room in Oklahoma.) Unfortunately, fraud detection doesn’t work as well as spam detection; years later, when my credit card was repeatedly rejected at a restaurant that I patronized often, the credit card company proved unable to fix the transactions or prevent future rejections. (Other credit cards worked.) I’m glad I didn’t have to pay for someone else’s stay in Oklahoma, but an implementation of ethical principles that can’t be corrected when it makes mistakes is seriously flawed.

So, machines are already making ethical decisions, and often doing so badly. Spam detection is the exception, not the rule. And those decisions have an increasingly powerful effect on our lives. Machines determine what posts we see on Facebook, what videos are recommended to us on YouTube, what products are recommended on Amazon. Why did Google News suddenly start showing me alt-right articles about a conspiracy to deny Cornell University students’ inalienable right to hamburgers? I think I know; I’m a Cornell alum, and Google News “thought” I’d be interested. But I’m just guessing, and I have precious little control over what Google News decides to show me. Does real news exist if Google or Facebook decides to show me burger conspiracies instead? What does “news” even mean if fake conspiracy theories are on the same footing? Likewise, does a product exist if Amazon doesn’t recommend it? Does a song exist if YouTube doesn’t select it for your playlist?

These data flows go both ways. Machines determine who sees our posts, who receives data about our purchases, who finds out what websites we visit. We’re largely unaware of those decisions, except in the most grotesque sense: we read about (some of) them in the news, but we’re still unaware of how they impact our lives.

Don’t misconstrue this as an argument against the flow of data. Data flows, and data becomes more valuable to all of us as a result of those flows. But as Helen Nissenbaum argues in her book Privacy in Context, those flows result in changes in context, and when data changes context, the issues quickly become troublesome. I am fine with medical imagery being sent to a research study where it can be used to train radiologists and the AI systems that assist them. I’m not OK with those same images going to an insurance consortium, where they can become evidence of a “pre-existing condition,” or to a marketing organization that can send me fake diagnoses. I believe fairly deeply in free speech, so I’m not too troubled by the existence of conspiracy theories about Cornell’s dining service; but let those stay in the context of conspiracy theorists. Don’t waste my time or my attention.

I’m also not suggesting that machines make ethical choices in the way humans do: ultimately, humans bear responsibility for the decisions their machines make. Machines only follow instructions, whether those instructions are concrete rules or the arcane computations of a neural network. Humans can’t absolve themselves of responsibility by saying, “The machine did it.” We are the only ethical actors, even when we put tools in place to scale our abilities.

If we’re going to automate ethical decisions, we need to start from some design principles. Spam detection gives us a surprisingly good start. Gmail’s spam detection assists users. It has been designed to happen in the background and not get in the user’s way. That’s a simple but important statement: ethical decisions need to stay out of the user’s way. It’s easy to think that users should be involved with these decisions, but that defeats the point: there are too many decisions, and giving permission each time an email is filed as spam would be much worse than clicking on a cookie notice for every website you visit. But staying out of the user’s way has to be balanced against human responsibility: ambiguous or unclear situations need to be called to the user’s attention. When Gmail can’t decide whether or not a message is spam, it passes it on to the user, possibly with a warning.
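
As a hedged sketch of that principle (the probabilities and thresholds here are invented, not anything Gmail actually uses), the decision logic might look something like this:

```python
# A hypothetical sketch of "stay out of the user's way, but escalate ambiguity."
# The probabilities and thresholds are invented for illustration only.
def route_message(spam_probability, confident=0.95, ambiguous=0.50):
    """Decide what to do with a message given a model's estimated spam probability."""
    if spam_probability >= confident:
        return "file_as_spam"           # confident: act silently, in the background
    if spam_probability >= ambiguous:
        return "deliver_with_warning"   # uncertain: hand the decision back to the user
    return "deliver"                    # confident it's legitimate: do nothing visible

print(route_message(0.99))   # file_as_spam
print(route_message(0.70))   # deliver_with_warning
print(route_message(0.05))   # deliver
```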

A second principle we can draw from spam filtering is that decisions can’t be irrevocable. Emails tagged as spam aren’t deleted for 30 days; at any time during that period, the user can visit the spam folder and say “that’s not spam.” In a conversation, Anna Lauren Hoffmann said it’s less important to make every decision correctly than to have a means of redress by which bad decisions can be corrected. That means of redress must be accessible to everyone, and it needs to be human, even though we know humans are frequently biased and unfair. It must be possible to override machine-made decisions, and moving a message out of the spam folder does exactly that.

When the model for spam detection is systematically wrong, users can correct it. It’s easy to mark a message as “spam” or “not spam.” This kind of correction might not be appropriate for more complex applications. For example, we wouldn’t want real estate agents “correcting” a model to recommend houses based on race or religion; and we could even discuss whether similar behavior would be appropriate for spam detection. Designing effective means of redress and correction may be difficult, and we’ve only dealt with the simplest cases.
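
One way to picture revocability and redress together is as a data structure. The following is a hypothetical sketch, not any real system: the 30-day window is borrowed from the spam example above, and the names and structure are invented.

```python
# A hypothetical sketch of a revocable machine decision with a record of redress.
# The 30-day retention window comes from the spam example; everything else is invented.
from dataclasses import dataclass
from datetime import datetime, timedelta

RETENTION = timedelta(days=30)

@dataclass
class Decision:
    subject: str            # e.g., a message id
    label: str              # e.g., "spam"
    decided_at: datetime
    overridden: bool = False

    def revocable(self, now):
        return now - self.decided_at < RETENTION

    def override(self, now, corrected_label, audit_log):
        """A human reverses the machine's decision; the reversal is recorded, not silent."""
        if not self.revocable(now):
            raise ValueError("retention window has passed; escalate to human review")
        self.label = corrected_label
        self.overridden = True
        # the audit log doubles as feedback for the model and as a record of redress
        audit_log.append((self.subject, corrected_label, now))

audit_log = []
d = Decision("msg-123", "spam", decided_at=datetime(2019, 4, 1))
d.override(datetime(2019, 4, 10), "not_spam", audit_log)
print(d.label, d.overridden, len(audit_log))   # not_spam True 1
```

In more sensitive applications, entries in that log would themselves need human review before they are allowed to change the model, for exactly the reasons the real estate example suggests.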

Ethical problems arise when a company’s interest in profit comes before the interests of the users. We see this all the time: in recommendations designed to maximize ad revenue via “engagement”; in recommendations that steer customers to Amazon’s own products, rather than other products on its platform. The customer’s interest must always come before the company’s. That applies to recommendations in a news feed or on a shopping site, but also to how the customer’s data is used and where it’s shipped. Facebook believes deeply that “bringing the world closer together” is a social good but, as Mary Gray said on Twitter, when we say that something is a “social good,” we need to ask: “good for whom?” Good for advertisers? Stockholders? Or for the people who are being brought together? The answers aren’t all the same, and depend deeply on who’s connected and how.

Many discussions of ethical problems revolve around privacy. But privacy is only the starting point. Again, Nissenbaum clarifies that the real issue isn’t whether data should be private; it’s what happens when data changes context. Privacy tools alone couldn’t have protected the pregnant Target customer who was outed to her parents. The problem wasn’t with privacy technology, but with the intention: to use purchase data to target advertising circulars. How can we control data flows so those flows benefit, rather than harm, the user? “Datasheets for datasets” is a proposal for a standard way to describe data sets; “model cards” is a similar proposal for describing models. While neither of these is a complete solution, I can imagine a future version of these proposals that standardizes metadata so data routing protocols can determine which flows are appropriate and which aren’t. It’s conceivable that the metadata for data could describe what kinds of uses are allowable (extending the concept of informed consent), and metadata for models could describe how data might be used. That’s work that hasn’t been started, but it’s work that’s needed.
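
To make that idea slightly more concrete, here is a hypothetical sketch of machine-readable use restrictions feeding a routing decision. The field names and format are invented, in the spirit of datasheets and model cards but not their actual schemas.

```python
# A hypothetical sketch of metadata-driven data routing. The schema is invented,
# inspired by datasheets for datasets and model cards but not their real formats.
DATASHEET = {
    "dataset": "chest_xray_images",
    "consent": "informed, revocable",
    "allowed_uses": {"medical_research", "clinical_decision_support"},
    "prohibited_uses": {"insurance_underwriting", "marketing"},
}

def flow_allowed(datasheet, requested_use):
    """Permit a transfer only if the requested use is explicitly allowed and not prohibited."""
    if requested_use in datasheet["prohibited_uses"]:
        return False
    return requested_use in datasheet["allowed_uses"]

print(flow_allowed(DATASHEET, "medical_research"))        # True
print(flow_allowed(DATASHEET, "insurance_underwriting"))  # False
```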

Whatever solutions we end up with, we must not fall in love with the tools. It’s entirely too easy for technologists to build some tools and think they’ve solved a problem, only to realize the tools have created their own problems. Differential privacy can safeguard personal data by adding carefully calibrated random noise so that aggregate statistics stay useful while individual records can’t be singled out, but it can also probably protect criminals by hiding evidence. Homomorphic encryption, which allows systems to do computations on encrypted data without first decrypting it, can probably be used to hide the real significance of computations. Thirty years of experience on the internet has taught us that routing protocols can be abused in many ways; protocols that use metadata to route data safely can no doubt be attacked. It’s possible to abuse or to game any solution. That doesn’t mean we shouldn’t build solutions, but we need to build them knowing they aren’t bulletproof, that they’re subject to attack, and that we are ultimately responsible for their behavior.
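
For readers who haven’t met differential privacy, here is a minimal, textbook-style sketch of the underlying idea (calibrated noise on an aggregate query), just to make the mechanism concrete; it is not a production implementation, and the data is invented.

```python
# A textbook-style sketch of differential privacy: answer a count query with
# Laplace noise calibrated to the query's sensitivity (1 for a count), so the
# presence or absence of any single record is hidden. Not a production mechanism.
import random

def dp_count(records, predicate, epsilon=0.5):
    """Return a noisy count of records matching `predicate`."""
    true_count = sum(1 for r in records if predicate(r))
    # the difference of two exponentials with rate epsilon is Laplace noise with scale 1/epsilon
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

patients = [
    {"age": 34, "condition": "flu"},
    {"age": 61, "condition": "diabetes"},
    {"age": 45, "condition": "flu"},
]
print(dp_count(patients, lambda r: r["condition"] == "flu"))  # roughly 2, plus noise
```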

Our lives are integrated with data in ways our parents could never have predicted. Data transfers have gone way beyond faxing a medical record or two to an insurance company, or authorizing a credit card purchase over an analog phone line. But as Thomas Wolfe wrote, we can’t go home again. There’s no way back to some simpler world where your medical records were stored on paper in your doctor’s office, your purchases were made with cash, and your smartphone didn’t exist. And we wouldn’t want to go back. The benefits of the new data-rich world are immense. Yet, we live in a “data smog” that contains everyone’s purchases, everyone’s medical records, everyone’s location, and even everyone’s heart rate and blood pressure.

It’s time to start building the systems that will truly help us manage our data. These machines will need to make ethical decisions, and we will be responsible for those decisions. We can’t avoid that responsibility; we must take it up, difficult and problematic as it is.
