Is AI the Paperclip?
Scale at all costs.
In a paper published in 2003, the philosopher Nick Bostrom sketched out a thought experiment aimed at illustrating an existential risk that artificial intelligence might eventually pose to humanity. An advanced AI is given, by its human programmers, the objective of optimizing the production of paperclips. The machine sets off in monomaniacal pursuit of the objective, its actions untempered by common sense or ethical considerations. The result, Bostrom wrote, is “a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities.” It destroys everything, including its programmers, in a mad rush to gather resources for paperclip production.
Bostrom went on to refine his “paperclip maximizer” thought experiment in subsequent writings and interviews, and it soon became a touchstone in debates about AI. Eminences as diverse as Stephen Hawking and Elon Musk would routinely bring it up in discussing the dangers of artificial intelligence. Others were skeptical. They found the story far-fetched, even by thought-experiment standards. It seemed, as The Economist wrote, a little too “silly” to be taken seriously.
I was long in the skeptic camp, but recently I’ve had a change of heart. Bostrom’s story, I would argue, becomes compelling when viewed not as a thought experiment but as a fable. It’s not really about AIs making paperclips. It’s about people making AIs. Look around. Are we not madly harvesting the world’s resources in a monomaniacal attempt to optimize artificial intelligence? Are we not trapped in an “AI maximizer” scenario?
“The intelligence of an AI model roughly equals the log of the resources used to train and run it,” OpenAI CEO Sam Altman declared a year ago. The important word here is “log.” As Donald MacKenzie explains in an insightful article on AI in the London Review of Books:
A logarithmic function, at least of the kind that is relevant here, is characterised by diminishing returns. The more resources you put in, the better the results, but the rate of improvement steadily diminishes.
To maintain a linear path of improvement in the performance of today’s neural-network-based AI models requires an exponential increase in resources. Ever larger inputs achieve ever smaller gains. But people like Altman remain absolutely committed to making those escalating resource investments, no matter the monetary or social cost. Because they believe that vast winner-take-all rewards will come to whichever company achieves a scale advantage in AI, they will devote all available resources—energy, water, real estate, data, people—to the pursuit of even a tiny scale advantage.
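The arithmetic behind that claim is worth spelling out. Here is a minimal sketch, assuming capability really does scale as a pure logarithm of resources, with a hypothetical constant k that Altman's one-line remark does not supply:

```latex
% A minimal sketch of why logarithmic scaling implies exponential cost.
% C = model capability, R = resources spent; k > 0 is a hypothetical
% constant, not something specified by Altman or MacKenzie.
C(R) = k \log R
% To gain a fixed increment \Delta C, resources must grow by a constant factor:
C(R') - C(R) = \Delta C
\;\iff\;
k \log \frac{R'}{R} = \Delta C
\;\iff\;
R' = e^{\Delta C / k} \, R
% So n equal steps of improvement multiply the resource bill by e^{n \Delta C / k}:
% linear progress on the capability axis means exponential spending on the resource axis.
```

On that assumption, each equal step of "intelligence" costs a fixed multiple of everything spent so far, which is why the treadmill never slows down.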
Elon Musk, having abandoned his earlier misgivings about AI, announced last week that he was merging xAI into SpaceX. The combined companies were “scaling to make a sentient sun to understand the Universe and extend the light of consciousness to the stars!” he declared. “In the long term, space-based AI is obviously the only way to scale.” It’s exactly what Bostrom predicted. The monomaniacs will not stop with the resources of the Earth. They’ll extend their plunder to the heavens. Everything is raw material.
This post is an installment in Dead Speech, the New Cartographies series on AI and its cultural and economic consequences.
Peering into the minds of megalomaniacs is not really possible, but they do at least partially reveal themselves in their messianic missions. Some might call them insane; others call them visionary. I doubt their projects are motivated by purely pecuniary interests. There must be some curiosity about which engineering and technological problems can be solved, but the distinctly anti-human odor of the enterprise suggests individuals who have lost touch with their own humanity, fundamentally alienated from ordinary human concerns.
The paperclip scenario is a version of the slightly more colorful "gray goo" story, a phrase coined by the nanotechnology theorist Eric Drexler in his 1986 book Engines of Creation and popularized by Bill Joy's WIRED essay "Why the Future Doesn't Need Us," published more than twenty years ago after a conversation with Ray Kurzweil set Joy's alarm bells ringing.