6 Comments
Andrew N

This article reminded me of the Frank Herbert quote, "Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."

The Nehemiah Option

I love Substack because of gems like this. Absolutely fantastic work.

T. Scott Plutchak

When Musk initially sent his "5 things last week" memo, I saw a number of outraged commenters frothing, "And who does he think will read all of the responses?" It seemed obvious that he wasn't expecting any human to read them.

Susannah Schild

Thank you for this wonderful article. Keep up the great work!

Vladimir Supica

This text is a masterclass in rhetorical slippage. It weaves a tapestry of anxieties (biological, nuclear, and political) to argue that Artificial Intelligence is merely a magnification of its creators' egos. However, the author commits the very error he seeks to critique: Preformationism.

By insisting that AI is nothing more than a vessel for the "masculine energy" or the "small, callow soul" of a tech oligarch, the author ignores the defining characteristic of modern generative systems: Emergence.

Carr opens with Leeuwenhoek’s discovery of "animalcules" to mock the ego of Silicon Valley "tech bros," likening them to 17th-century biologists who believed a tiny, fully formed human resided in the sperm head. The irony is that the author is the modern preformationist. He views AI as a static, deterministic script, a "homunculus" of Elon Musk’s will, completely ignoring the "fertilization" process of the training data. Generative AI is not a "bulbous head" containing a fixed worldview. It is a probabilistic engine of infinite contingency. When a model ingests the internet (the "fertilization"), it interacts with the collective unconscious of the entire human species, not just the "intent" of a single engineer. To claim AI is just a "projection of masculine energy" is to mistake the container (the GPU cluster) for the contents (the sum of human knowledge). The author treats AI as a closed loop of ego, when linguistically and mathematically, it is an open loop of hyper-connectivity.

The post relies heavily on the "Von Neumann Architecture = Nuclear Chain Reaction" trope. This is a linguistic sleight of hand designed to bypass logical scrutiny by triggering existential dread.

Equating the "chain reaction" of memory addresses with the "chain reaction" of fissile material is a category error: fission is entropy and destruction (tearing matter apart), while computation is negentropy and information creation (weaving data together). Dyson’s observation that numbers became "executable" was not a sin; it was the birth of semantic agency. For the first time, language could do things rather than just describe things. The author frames this as "unimaginable evil" lurking in the dark corners of Google, a superstition reminiscent of medieval peasants fearing the printing press. The "wealth and energy" accumulation Dyson predicted isn't a sign of sinister intent; it is the metabolic cost of a new, planetary-scale cortex. Intelligence requires energy.

Nicholas Carr treats Norbert Wiener’s fear of a "bureaucracy-in-a-box" as a prophecy of doom. His post explicitly attacks the "DOGE" initiative (efficiency via AI) as a totalitarian Leviathan.

The author suffers from a nostalgia for inefficiency. The "human" bureaucracy he defends, the "know-what" of the civil servant, is historically the source of the very totalitarianism Wiener feared. Human bureaucrats are prone to bribery, cognitive bias, fatigue, and tribalism.

A "governing machine" (or AI-assisted administration) offers the first possibility of Algorithmic Due Process. Code can be audited; the "motives" of a corrupt official cannot. The text argues computers have "know-how" (technique) but lack "know-what" (purpose). This is a philosophical antique. In Large Language Models, "know-how" and "know-what" collapse into each other. By understanding the relationships between billions of human concepts, the AI simulates a "know-what" that is often more comprehensive and less biased than any single human's moral compass.

Carr concludes with the story of the engineer who loved the mechanism of the player piano more than the music, implying technocrats care about the "tech" more than the "human."

This is a false dichotomy. In the era of AGI, the mechanism IS the music. We are not building player pianos to play the same old tunes (human history) faster. We are building a synthesizer capable of composing symphonies that the biological mind is physically incapable of conceiving. The author wants to preserve the "human touch," even if that touch is slow, erroneous, and incapable of solving complex poly-crises like war, sickness, or the challenges of space travel. The "Intellectual" stance is not to cower before the "monster" of complexity, but to recognize that we are evolving from being the Soloist (a human-centric worldview) to the Conductor of a much larger, synthetic orchestra. The "soul" Turing feared is not missing; it is just distributed across a billion parameters, too vast for the author’s humanist lens to perceive.

Dan Vallone

Really interesting thread; thank you for surfacing and connecting these articles and stories.