The Wrong Question

What questions should we be asking about the future of social media? “Free Speech” isn’t it.

By Mike Loukides
February 9, 2021
Broken screen (source: Glavo via Pixabay)

“If they can get you asking the wrong questions, they don’t have to worry about answers.”

Thomas Pynchon, Gravity’s Rainbow

The deplatforming of Donald Trump and his alt-right coterie has led to many discussions of free speech. Some of those discussions make good points; most don’t; but it seems to me that all of them miss the real point. We shouldn’t be discussing “speech” at all; we should be discussing the way social platforms amplify certain kinds of speech.

What is free speech, anyway? In a strictly legal sense, “free speech” is a term that only makes sense in the context of government regulation. The First Amendment to the US Constitution says that the government can’t pass a law restricting your speech. And neither Twitter nor Facebook is the US government, so whatever they do to block content isn’t a “free speech” issue, at least strictly interpreted.

Admittedly, that narrow view leaves out a lot. Both the right and the left can agree that we don’t really want Zuck or @jack determining what kinds of speech are legitimate. And most of us can agree that there’s a time when abstract principles have to give way to concrete realities, such as terrorists storming the US Capitol building. That situation resulted from years of abusive speech that the social platforms had ignored, so that when the platforms finally stepped in, their actions were too little, too late.

But as I said, the focus on “free speech” misframes the issue. The important issue here isn’t speech itself; it’s how and why speech is amplified—an amplification that can be used to drown out or intimidate other voices, or to selectively amplify voices for reasons that may be well-intended, self-interested, or even hostile to the public interest. The discussion we need, the discussion of amplification and its implications, has largely been supplanted by arguments about “free speech.”

The First Amendment also guarantees a “free press.” A free press is important because the press has the power of replication: of taking speech and making it available more broadly. In the 18th, 19th, and 20th centuries, that largely meant newspapers, which could print tens of thousands of copies overnight. But freedom of the press has an important limitation. Anyone can talk, but to have freedom of the press you have to have a press–whether that’s a typewriter and a mimeograph, or all the infrastructure of a publisher like The New York Times, CNN, or Fox News. And being a “press” has its own constraints: an editorial staff, an editorial policy, and so on. Because they’re in the business of replication, it’s probably more correct to think of Twitter and Facebook as exercising “press” functions.

But what is the editorial function for Facebook, Twitter, YouTube, and most other social media platforms? There isn’t an editor who decides whether your writing is insightful. There’s no editorial viewpoint. There’s only the shallowest attempt to verify facts. The editorial function is driven entirely by the desire to increase engagement, and it is exercised algorithmically. And what the algorithms have “learned” perhaps isn’t surprising: showing people content that makes them angry is the best way to keep them coming back for more. The more they come back, the more ads are clicked, and the more income flows in. Over the past few years, that editorial strategy has certainly played into the hands of the alt-right and neo-Nazi groups, who learned quickly how to take advantage of it. Nor have left-leaning polemicists missed the opportunity. The battle of overheated rhetoric has cheapened public discourse and made consensus almost unattainable. Indeed, it has made attention itself unattainable: as Peter Wang has argued, scarcity of attention–particularly the “synchronous attention of a group”–is the biggest problem we face, because it rules out thoughtful consensus.
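
To make that editorial function concrete, here’s a minimal sketch of what engagement-driven ranking looks like. To be clear, this is not any platform’s actual code: the fields, predictors, and weights are all hypothetical, invented for illustration. What matters is what’s absent from the objective: nothing rewards accuracy, civility, or the reader’s well-being.

```python
from dataclasses import dataclass

# Hypothetical sketch of engagement-driven feed ranking. None of these
# fields, weights, or predictors come from a real platform; they only
# illustrate the shape of an objective that maximizes time on site.

@dataclass
class Post:
    text: str
    p_click: float        # predicted probability of a click
    p_comment: float      # predicted probability of a reply; anger scores high
    dwell_seconds: float  # predicted time spent reading

def engagement_score(post: Post) -> float:
    # A weighted sum of predicted engagement signals. Note what is
    # missing: no term for accuracy, civility, or well-being.
    return 2.0 * post.p_click + 3.0 * post.p_comment + 0.01 * post.dwell_seconds

posts = [
    Post("A careful policy analysis", p_click=0.10, p_comment=0.05, dwell_seconds=45.0),
    Post("THEY are coming for YOU!", p_click=0.40, p_comment=0.60, dwell_seconds=20.0),
]

# The feed shows the highest-scoring posts first; the outrage post wins.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(round(engagement_score(post), 2), post.text)
```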

Again, that’s been discussed many times over the past few years, but we seem to have lost that thread. We’ve had reproduction—we’ve had a press—but with the worst possible kind of editorial values. There are plenty of discussions of journalistic values and ethics that might be appropriate; but an editorial policy that has no other value than increasing engagement doesn’t even pass the lowest bar. And that editorial policy has left the user communities of Facebook, Twitter, YouTube, and other media vulnerable to deafening feedback loops.

Social media feedback loops can be manipulated in many ways: by automated systems that reply or “like” certain kinds of content, as well as by individual users who can also reply and “like” by the thousands.  And those loops are aided by the platforms’ recommendation systems: either by recommending specific inflammatory posts, or by recommending that users join specific groups. An internal Facebook report showed that, by their own reckoning, 70% of all “civic” groups on Facebook contained “hate speech, misinformation, violent rhetoric, or other toxic behavior”; and the company has been aware of that since 2016.
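
The loop itself is easy to sketch. In the toy simulation below (entirely hypothetical; real recommendation systems are vastly more complex), a post’s exposure is proportional to its share of likes, exposure generates more organic likes, and a small pool of bots boosts one post every round. A modest, steady nudge compounds into dominance.

```python
# Toy simulation of a social media feedback loop, with invented numbers.
# Exposure is proportional to a post's share of likes; exposure produces
# organic likes; a coordinated pool of bots boosts one post each round.

likes = {"measured take": 100, "inflammatory take": 100}
BOT_LIKES_PER_ROUND = 20
ORGANIC_POOL = 50  # organic likes handed out per round, by exposure

for _ in range(10):
    total = sum(likes.values())
    for post in likes:
        exposure = likes[post] / total               # share of the feed
        likes[post] += int(ORGANIC_POOL * exposure)  # organic likes follow exposure
    likes["inflammatory take"] += BOT_LIKES_PER_ROUND  # the coordinated boost

print(likes)  # the boosted post pulls steadily ahead and stays ahead
```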

So where are we left?  I would rather not have Zuck and @jack determine what kinds of speech are acceptable. That’s not the editorial policy we want.  And we certainly need protections for people saying unpopular things on social media; eliminating those protections cuts both ways. What needs to be controlled is different altogether: it’s the optimization function that maximizes engagement, measured by time spent on the platform. And we do want to hold Zuck and @jack responsible for that optimization function, just as we want the publisher of a newspaper or a television news channel to be responsible for the headlines they write and what they put on their front page.

Simply stripping Section 230 protection strikes me as irrelevant to dealing with what Shoshana Zuboff terms an “epistemic coup.” Is the right solution to do away with algorithmic engagement enhancement entirely? Facebook’s decision to stop recommending political groups to users is a step forward, but the company needs to go much further in stripping algorithmic enhancement from its platform. Detecting bots would be a start; a better algorithm for “engagement,” one that promotes well-being rather than anger, would be a worthy end point. As Apple CEO Tim Cook, clearly thinking about Facebook, recently said, “A social dilemma cannot be allowed to become a social catastrophe…We believe that ethical technology is technology that works for you… It’s technology that helps you sleep, not keeps you up. It tells you when you’ve had enough. It gives you space to create or draw or write or learn, not refresh just one more time.” This reflects Apple’s values rather than Facebook’s (and one would do well to reflect on Facebook’s origins at Harvard); but it is leading towards the right question.
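
What would holding platforms responsible for the optimization function look like in practice? One hypothetical direction, continuing the earlier ranking sketch: keep the machinery, but change the objective so that it penalizes predicted outrage and rewards signals of well-being. The signals and weights below are invented; measuring such things honestly is the genuinely hard problem.

```python
from dataclasses import dataclass

# Hypothetical re-weighted objective: the same ranking machinery as the
# earlier sketch, but with invented terms that penalize outrage and
# reward informativeness. Honest measurement is the unsolved part.

@dataclass
class Post:
    text: str
    p_click: float
    p_comment: float
    p_outrage: float      # predicted probability the post provokes anger
    p_informative: float  # predicted probability the reader learns something

def wellbeing_score(post: Post) -> float:
    engagement = 2.0 * post.p_click + 3.0 * post.p_comment
    return engagement - 5.0 * post.p_outrage + 4.0 * post.p_informative

posts = [
    Post("A careful policy analysis", 0.10, 0.05, p_outrage=0.05, p_informative=0.80),
    Post("THEY are coming for YOU!", 0.40, 0.60, p_outrage=0.90, p_informative=0.05),
]

# With the re-weighted objective, the analysis outranks the outrage bait.
for post in sorted(posts, key=wellbeing_score, reverse=True):
    print(round(wellbeing_score(post), 2), post.text)
```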

Making people angry might increase shareholder value in the short term. But that probably isn’t a sustainable business; and if it is, it’s a business that does incredible social damage. The “solution” isn’t likely to be legislation: I can’t imagine laws that regulate algorithms effectively and that can’t be gamed by people willing to work hard to game them; I guarantee those people are out there. Nor can we say that the solution is to “be better people,” because plenty of people don’t want to be better; just look at the reaction to the pandemic. And look at the frustration of the many Facebook and Twitter employees who realized that the time to lay aside abstract principles like “free speech” was long before the election.

We could perhaps return to the original idea of “incorporation,” when incorporation meant a “body created by law for the purpose of attaining public ends through an appeal to private interests”–one of Zuboff’s solutions is to “tie data collection to fundamental rights and data use to public services.” However, that would require legal bodies willing to make tough decisions about whether corporations were indeed working towards “public ends.” As Zuboff points out earlier in her article, it’s easy to look to antitrust, but the Sherman Antitrust Act was largely a failure. Would courts ruling on “public ends” be any different?

In the end, we will get the social media we deserve. And that leads to the right question. How do we build social media that maintains social good, rather than destroying it?  What kinds of business models are needed to support that kind of social good, rather than merely maximizing shareholder value?
