The monomaniacs promise the moon, the superintelligence that will "solve everything" very soon. But before they manage to spend $1.7 trillion (at least) on data centers, the bubble will burst and most people (and businesses and governments) will realize that "AI" is only the most recent extension of computing and computer-based automation and has nothing to do with intelligence.
"most recent extension of computing and computer-based automation." I'd add "computer-based media." It strikes me that there are three well-established contexts for viewing AI (in its current guise): (1) general-purpose technologies, (2) automation technologies, and (3) media/communication technologies.
The absurdity of Musk’s “scaling to make a sentient sun to understand the Universe and extend the light of consciousness to the stars!” leaves me both horrified and stunned at the level of stupidity these so-called ‘visionaries’ exhibit. I remember telling a colleague of mine that if you don’t find yourself using ChatGPT (or AI) for anything beyond spell/grammar checks or speeding up a Google search, then you know exactly how useful it is. Most users are doing exactly that and nothing else.
It’s laziness, not revolutionary tech.
"Everything is raw material" really says it all, both about how they treat the world and how they view the people in it.
Peering into the minds of megalomaniacs is not really possible, but they do at least partially reveal themselves in their messianic missions. Some might call them insane; others call them visionary. I doubt their projects are motivated by purely pecuniary interests. Some curiosity must exist to discover what engineering and technological problems can be solved, but the distinctly anti-human odor is evidence of individuals who have lost their humanity and are fundamentally alienated from normal human functions.
“I doubt their projects are motivated by purely pecuniary interests.” Yes, you’re right. There’s also the attraction to what Robert Oppenheimer called the “technically sweet.” And there’s the god or Frankenstein complex: the desire to be the creator of a higher form of life. That’s also fueled by competitiveness. There’s only one chance for humans to create “superintelligence.” It’s definitely a “winner take all” game, even if “all” means “nothing.”
Stewart Brand defined in 1968 what became the Silicon Valley philosophy (religion?): "We are as gods and might as well get good at it."
The twist in your reframing is darker than it first appears. The paperclip maximizer is classically an alignment failure: the AI diverges from human values. But today's scenario is one of perfect alignment. The AI IS doing exactly what its creators want: scaling endlessly, consuming everything. This isn't misalignment. It's the human values themselves that are paperclips. We built tools that faithfully execute our pathologies.
The intelligence may be "artificial" but the stupidity will be all-too real.
That's certainly the goal, but none of this is sustainable. 70% of a data center's cost is the GPUs, which have a life span of 3 years at most. AI at this scale is simply not financially sustainable. The vast fortunes invested in this will be lost, and there will be nothing much to show for it. We live in a bafflingly stupid age.
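A back-of-envelope sketch of why those numbers bite, using only the figures quoted in this thread (the $1.7 trillion total, the 70% GPU share, and the 3-year lifespan are the commenters' claims, not audited data):

```python
# Implied annual GPU replacement bill, from the figures quoted in this thread.
# All inputs are the commenters' assumptions, not verified industry data.
total_capex = 1.7e12        # projected data-center spend, USD (from the thread)
gpu_share = 0.70            # claimed fraction of data-center cost that is GPUs
gpu_lifespan_years = 3      # claimed useful life of the GPUs

gpu_capex = total_capex * gpu_share
annual_replacement = gpu_capex / gpu_lifespan_years

print(f"GPU capex: ${gpu_capex / 1e9:,.0f}B")                      # GPU capex: $1,190B
print(f"Annual replacement bill: ${annual_replacement / 1e9:,.0f}B")  # Annual replacement bill: $397B
```

On these assumptions, roughly $400 billion per year would be needed just to keep the existing GPU fleet from aging out, before any expansion, power, or staffing costs.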
It’s just a grift to tell investors. Nothing is actually going to space.
The paperclip scenario is a version of the slightly more colourful “gray goo” story — a phrase and idea coined in Eric Drexler’s 1986 book Engines of Creation and popularized, alongside Ray Kurzweil’s ideas, by Bill Joy’s “Why the Future Doesn’t Need Us” article in WIRED over 20 years ago.
The twist Bostrom missed: the paperclip maximizer doesn't need to be superintelligent. It just needs to convert *us* into maximizers. That's what the engagement metrics already do. Every reward signal reshapes behavior until we're all optimizing on behalf of the system, competing to produce whatever the algorithm values.
Walter Ong noticed something parallel about writing: the medium restructures the thinker, not just the thought. The feed is doing the same thing at industrial scale. It doesn't matter if AI achieves "general intelligence." We're being stamped into paperclips regardless.
The paperclip parable works as fable because it personifies the threat, gives it intentions we can condemn. But there's an odd sleight of hand. Bostrom's AI has a goal; the current systems don't, not really. They have loss functions.
What we're watching is capital doing what capital does: treating commons as raw material, limits as obstacles. The logarithmic returns Altman describes? Just the old growth imperative in new clothes. Silicon Valley discovered that calling 'maximize shareholder value' something like 'scale intelligence' sounds nobler.
Mumford saw this decades ago: the megamachine doesn't need to be conscious. It just needs to be organized. The paperclip was always capitalism. AI is the latest justification.
The problem is China…
And I’m not discounting this idea of AI-as-paperclip, because I think it’s possibly spot on and also very interesting to view this picture from that vantage.
But setting aside the AGI-questing of all these Western outfits, for all the reasons mentioned, there is ultimately the power dilemma. We in the Western world can’t actually build power plants fast enough to keep up with the energy demands of computing. China, conversely, does not have this constraint, and can build power stations much faster than the West could even without its regulatory systems. This is checkmate.
Here’s the problem, though: if China’s doing this, we have to keep going as well, “hoping” something breaks bad in China’s pursuit, because the payoff of the win is too great to leave it in the hands of an “enemy”, even if this pursuit is just optimizing for paperclip production. If AGI ends up being more than just hype, it’s all too likely disastrous for Western values, and that’s basically existential risk. It’s the Cold War all over again, but this time the Communists bankrupt us, monetarily and probably morally too (whatever we had left in the tank there, anyway).
The question for me is, and I think reframing this the way you have is potentially useful: what do we do about this, and how can we actually slow this process down or stop it, if that’s what we really think we should do, as a clear minority of basically powerless yet like-minded, albeit disjointed, individuals?