Maximizing paper clips

We won’t get the chance to worry about artificial general intelligence if we don’t deal with the problems we have in the present.

By Mike Loukides
June 10, 2019
Paper clips (source: Pixabay)

In What’s the Future, Tim O’Reilly argues that our world is governed by automated systems that are out of our control. Alluding to The Terminator, he says we’re already in a “Skynet moment,” dominated by artificial intelligence that can no longer be governed by its “former masters.” The systems that control our lives optimize for the wrong things: they’re carefully tuned to maximize short-term economic gain rather than long-term prosperity. The “flash crash” of 2010 was an economic event created purely by the software that runs our financial systems when it went awry. However, the real danger of the Skynet moment isn’t what happens when the software fails, but what happens when it works properly: when it’s maximizing short-term shareholder value without considering any other aspects of the world we live in. Even when our systems are working, they’re maximizing the wrong function.

Charlie Stross makes a similar point in “Dude, you broke the future,” arguing that modern corporations are “paper clip maximizers.” He’s referring to Nick Bostrom’s thought experiment about what could go wrong with an artificial general intelligence (AGI): an AGI told to maximize paper clip production could decide that humans are inessential. It was told to make paper clips, lots of them, and nothing is going to stop it. Like O’Reilly, Stross says the process is already happening: we’re already living in a world of “paper clip maximizers.” Businesses maximize stock prices without regard for cost, whether that cost is human, environmental, or something else. That process of optimization is out of control—and may well make our planet uninhabitable long before we know how to build a paper-clip-optimizing AI.


The paper clip maximizer is a provocative tool for thinking about the future of artificial intelligence and machine learning, though not for the reasons Bostrom thinks. As O’Reilly and Stross point out, paper clip maximization is already happening in our economic systems, which have evolved a kind of connectivity that lets them work without oversight. It’s already happening in our corporations, where short-term profit creates a world that is worse for everyone. Automated trading systems largely predate modern AI, though they have no doubt incorporated it. Business systems that optimize profit—well, they’re old-fashioned human wetware, collected in conference rooms and communicating via the ad-hoc neural network of economic exchange.

What frustrates me about Bostrom’s paper clip maximizer is that focusing on problems we might face in some far-off future diverts attention from the problems we’re facing now. We don’t have, and may never have, an artificial general intelligence, or even a more limited artificial intelligence that will destroy the world by maximizing paper clips. As Andrew Ng has said, we’re being asked to worry about overpopulation on Mars. We have more immediate problems to solve. What we do have are organizations that are already maximizing their own paper clips, and that aren’t intelligent by any standard. That’s a concrete problem we need to deal with now. Talking about future paper clips might be interesting or thrilling, but in reality, it’s a way of avoiding dealing with our present paper clips. As Stross points out, Elon Musk is one of the recent popularizers of paper clip anxiety; yet he has already built his own maximizers for batteries and space flights. It’s much easier to wax philosophical about a hypothetical problem than to deal with a planet that is gradually overheating. It’s a lot more fun, and a lot less threatening, to think about the dangers of a hypothetical future AI than to think about the economic, political, sociological, and environmental problems that face us now—even if those two sets of problems are really the same.

The argument that Stross and O’Reilly make is central to how we think about AI ethics—and not just AI ethics, but business ethics. I’m not terribly concerned about the things that could go wrong with an artificial general intelligence, at least in part because we won’t get the chance to worry about AGI if we don’t deal with the problems we have in the present. And if we do deal with the problems facing us now, Tim O’Reilly’s Skynet moment and Stross’s present-day paper clip maximizers, we will inevitably develop the tools we need to think about and manage the future’s paper clip maximizers. Getting our present systems back under control and contributing to human welfare is the only way to learn how to keep our future systems, whatever they might be, working for our collective good.

I can think of no better way to prepare for the future’s problems than to solve the present’s.

Post topics: AI & ML
Post tags: Commentary, O'Reilly Radar Analysis