Is AI the Paperclip?
Scale at all costs.
In a paper published in 2003, the philosopher Nick Bostrom sketched out a thought experiment aimed at illustrating an existential risk that artificial intelligence might eventually pose to humanity. An advanced AI is given, by its human programmers, the objective of optimizing the production of paperclips. The machine sets off in monomaniacal pursuit of the objective, its actions untempered by common sense or ethical sense. The result, Bostrom wrote, is “a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities.” It destroys everything, including its programmers, in a mad rush to gather resources for paperclip production.
Bostrom went on to refine his “paperclip maximizer” thought experiment in subsequent writings and interviews, and it soon became a touchstone in debates about AI. Eminences as diverse as Stephen Hawking and Elon Musk would routinely bring it up in discussing the dangers of artificial intelligence. Others were skeptical. They found the story far-fetched, even by thought-experiment standards. It seemed, as The Economist wrote, a little too “silly” to be taken seriously.
I was long in the skeptic camp, but recently I’ve had a change of heart. Bostrom’s story, I would argue, becomes compelling when viewed not as a thought experiment but as a fable. It’s not really about AIs making paperclips. It’s about people making AIs. Look around. Are we not madly harvesting the world’s resources in a monomaniacal attempt to optimize artificial intelligence? Are we not trapped in an “AI maximizer” scenario?
“The intelligence of an AI model roughly equals the log of the resources used to train and run it,” OpenAI CEO Sam Altman wrote a year ago. The important word here is “log.” As Donald MacKenzie explains in an insightful article on AI in the London Review of Books:
A logarithmic function, at least of the kind that is relevant here, is characterised by diminishing returns. The more resources you put in, the better the results, but the rate of improvement steadily diminishes.
To maintain a linear path of improvement in the performance of today’s neural-network-based AI models requires an exponential increase in resources. Ever larger inputs achieve ever smaller gains. But people like Altman remain absolutely committed to making those escalating resource investments, no matter the monetary or social cost. Because they believe that vast winner-take-all rewards will come to any company achieving superior scale in AI, they will devote all available resources—energy, water, real estate, data, chips, people—to the pursuit of even a tiny scale advantage.
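Altman's log relation can be made concrete with a toy calculation. Here is a minimal sketch of the arithmetic; the base and units are illustrative assumptions, not values from the post:

```python
import math

def score(resources, base=10):
    # Altman's heuristic: intelligence ~ log of the resources used.
    # The base (10 here) is an illustrative assumption.
    return math.log(resources, base)

def resources_for(target_score, base=10):
    # Inverting the log: each linear step in "score" multiplies the
    # resource bill by the base -- exponential growth in inputs.
    return base ** target_score

# Going from a score of 3 to 4 costs 10x the resources; 4 to 5, 10x again.
for s in (3, 4, 5):
    print(s, resources_for(s))
```

Whatever the actual base, the shape of the curve is the point: linear gains in capability demand multiplicative, and therefore eventually ruinous, growth in inputs.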
Elon Musk, having abandoned his earlier misgivings about AI, announced last week that he was merging xAI into SpaceX. The combined companies were “scaling to make a sentient sun to understand the Universe and extend the light of consciousness to the stars!” he declared. “In the long term, space-based AI is obviously the only way to scale.” It’s exactly what Bostrom predicted. The monomaniacs will not stop with the resources of the Earth. They’ll extend their plundering to the heavens. Everything is raw material.
This post is an installment in Dead Speech, the New Cartographies series on AI and its cultural and economic consequences.



The monomaniacs promise the moon, the superintelligence that will "solve everything" very soon. But before they manage to spend $1.7 trillion (at least) on data centers, the bubble will burst, and most people (and businesses and governments) will realize that "AI" is only the most recent extension of computing and computer-based automation and has nothing to do with intelligence.
The absurdity of Musk’s “scaling to make a sentient sun to understand the Universe and extend the light of consciousness to the stars!” leaves me both horrified and stunned at the level of stupidity these so-called ‘visionaries’ exhibit. I remember telling a colleague of mine that if you find yourself using ChatGPT (or AI) for anything beyond spell/grammar checks or just speeding up your Google searches, then you know it’s useful. Most of these users are doing exactly that and nothing else.
It’s laziness, not revolutionary tech.