17 Comments
Sean Palmer

Some really great writing in here. Hope there is some future book in the works. It would be a shame to lose in the ether that is the internet.

Peter Wiley

At a recent departmental gathering, my wife spoke with a colleague who described how AI made the task of creating a professional conference paper proposal, subsequently accepted, so much less time-consuming. Apparently the colleague had a hard time understanding what my wife objected to about using AI for such a purpose. She was perfectly happy to think of the AI-enhanced proposal as her own work. She has been compressed.

Baird Brightman

I'll see that "intelligence is compression" and raise it with "intelligence is expansion".

Lovely writing, Nicholas. Looking forward to reading more! 👏

Stregoni

Apologies, but are "Luhn" (name referred to at the beginning of this article) and "Lund" (name referred to in the paragraph "Artificial Generalizing Intelligence") supposed to be different people?

Nicholas Carr

No, just a typo. Fixed. Thanks.

kd

Two random thoughts.

There's compression, and then there's compression. The second kind, which lurks in the article while the first, statistical kind takes the main stage, is more akin to providing a new perspective, a new set of 'principles' you use to condense the meaning. It could be argued that it exists only within the realm of subjectivity, but I think it can lay claim to objectivity too (could AI have come up with the theory of general relativity when it didn't yet exist?).

The second thought is that the limited exposure to AI I have had makes me really scared of how much content it can produce in mere seconds. We will be drowning in a deluge of content that no one will be able to process (or enjoy). Productivity ad absurdum. So much for compression.

Nicholas Carr

Good points. The question about general relativity is a crucial one. It certainly seems that today's AI enthusiasts would answer "yes" — that the model will be capable of generating genuine insight rather than just rehash. We'll see.

Stregoni

This article reminded me of the song "Tom's Diner" by Suzanne Vega, a recording known for its digital incompressibility.

Zander Arnao

Really good song

Vladimir Supica

This text is a beautifully written piece of romantic humanism. However, from a rigorous linguistic and computational perspective, it relies on a series of category errors, dualistic fallacies, and a misunderstanding of what "compression" actually entails in an information-theoretic sense.

Nicholas Carr argues that modern LLMs are merely "brute-force" extensions of Hans Peter Luhn’s 1950s "Auto-Abstracter." This is a fundamental misunderstanding of the qualitative shift that occurs with scale and architecture. Luhn’s method was statistical extraction: counting word frequencies to identify "significant" sentences. It was syntax without semantics. Modern Transformers operate via semantic representation. When an LLM processes text, it does not merely count words; it maps them into a high-dimensional vector space where the relationships between concepts are preserved.
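As a toy illustration of what "mapping into vector space" means: related concepts end up pointing in similar directions, measurable by cosine similarity. The three-dimensional embeddings below are invented for the example; real models use hundreds or thousands of dimensions.

```python
import math

def cosine(u, v):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-d "embeddings" (made up for illustration).
king  = [0.9, 0.8, 0.1]
queen = [0.9, 0.2, 0.8]
apple = [0.1, 0.1, 0.9]

print(cosine(king, queen))  # related concepts: higher similarity
print(cosine(king, apple))  # unrelated concepts: lower similarity
```

In a trained model these coordinates are learned from data rather than hand-picked, which is what lets relationships between concepts be "preserved" geometrically.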

Nicholas Carr claims GPTs work with a "fleet of backhoes" compared to Luhn’s "trowel." This analogy fails. A backhoe is just a larger trowel. A Transformer is a fundamentally different tool—it is the difference between a lookup table and a neural network. The "brute force" of data allows the model to learn the underlying probability distribution of language itself, which is functionally indistinguishable from understanding syntax and semantics.

Further on, he mocks Ilya Sutskever’s statement, "Intelligence is compression," calling it a "grotesque oversimplification." The author interprets "compression" colloquially (making things smaller/shorter) rather than mathematically (finding the optimal representation of a pattern).

In the context of Kolmogorov complexity, to perfectly compress a dataset (or a reality) is to understand the algorithm that generated it. You cannot compress a sequence of data, like a novel or a codebase, without building an internal model of the logic, grammar, and intent behind it. If an AI can predict the next token in a complex philosophical treatise (compression), it must implicitly understand the philosophical arguments being made. The author treats compression as a loss of fidelity. In high-level intelligence, compression is the gain of insight. E = mc² is the ultimate compression of the physical universe. It is not "reductive" in a negative sense; it is an elegant encoding of reality. AI does not "boil down" distinctiveness; it encodes the patterns that allow distinctiveness to exist.
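The Kolmogorov point can be illustrated crudely with an ordinary compressor standing in for ideal compression (zlib here is just a convenient proxy, and the data is invented for the demo): patterned data compresses dramatically, while patternless noise barely compresses at all.

```python
import random
import zlib

random.seed(0)

# Pattern-rich data: a short phrase repeated many times.
structured = ("the gold-feathered bird sings in the palm " * 100).encode()

# Patternless data: random bytes of the same length.
noise = bytes(random.getrandbits(8) for _ in range(len(structured)))

ratio_structured = len(zlib.compress(structured)) / len(structured)
ratio_noise = len(zlib.compress(noise)) / len(noise)

print(f"structured: {ratio_structured:.3f}")  # tiny fraction of the original
print(f"noise:      {ratio_noise:.3f}")       # roughly 1.0: incompressible
```

The compressor succeeds only to the extent that it captures the regularity that generated the data, which is the sense in which "compression" is tied to modeling rather than mere shortening.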

The text invokes a medieval theological distinction between "servile arts" (doing things) and "free arts" (contemplation), classifying AI as a "servile artist." This is a rhetorical sleight of hand rooted in aristocratic nostalgia. This argument presupposes that "useful" work (economic value) is inherently anti-intellectual. This is a fallacy that dates back to Plato denigrating engineers.

By handling the "servile" aspects of cognition (summarizing, sorting, coding boilerplate, data retrieval), AI actually enables the "free arts." The author complains about his "sweatshop" job writing abstracts. AI eliminates that sweatshop, theoretically freeing the human for the "extravagant" thinking that he prizes. Defining AI as "servile" because it is useful is a circular argument. The human brain evolved primarily for "servile" tasks (survival, tool-making, social maneuvering). "Contemplation" is an emergent property of that survival machinery. If AI excels at the servile, it is climbing the same evolutionary ladder we did, only faster.

Carr uses Wallace Stevens’ poem to argue that human art is "extravagant" (wandering off course) and "incompressible," whereas AI is bound by rules. But the poem is composed of words. Words are symbols, discrete units of data. The "feeling" he describes is triggered by the arrangement of these symbols. If the poem were truly incompressible (random), it would be white noise. It has meaning precisely because it has structure (syntax, meter, rhyme, semantic associations).

The claim that AI cannot "wander" is factually incorrect. High "temperature" settings in LLMs allow them to sample lower-probability tokens, generating metaphors and "hallucinations" that are mathematically identical to "extravagance" or "creativity." He conflates "mystery" with "magic." Just because we value the subjective experience of the poem does not mean the poem exists outside the realm of information processing. The "gold-feathered bird" is a symbol. AI handles symbols proficiently. The author is fetishizing the biological substrate (the brain) over the process (the mind/intelligence).
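What "temperature" does can be sketched directly: dividing the logits by a temperature before the softmax flattens or sharpens the distribution, so higher temperatures make low-probability tokens more likely to be sampled. This is a minimal, self-contained illustration with invented logits; real LLM sampling typically adds top-k/top-p truncation on top.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample an index after temperature-scaling the logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# One dominant token (index 0) and several unlikely alternatives.
logits = [5.0, 1.0, 1.0, 1.0, 1.0]
rng = random.Random(0)

low_t  = [sample_with_temperature(logits, 0.2, rng) for _ in range(1000)]
high_t = [sample_with_temperature(logits, 5.0, rng) for _ in range(1000)]

print(low_t.count(0) / 1000)   # near 1.0: nearly deterministic
print(high_t.count(0) / 1000)  # much lower: the model "wanders"
```

Whether that probabilistic wandering is "mathematically identical" to human extravagance is of course the contested claim; the mechanism itself, though, is this simple.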

Carr accuses AI proponents of the "neuromachinic fallacy": projecting machine logic onto the brain. The reverse is actually true: the author is committing the Bio-Essentialist Fallacy. He assumes that because the brain is biological, its processes must be mystical, non-computational, and fundamentally different from "data processing." Neuroscience increasingly suggests the brain is a prediction engine that minimizes surprise (the Free Energy Principle), which is exactly what "compression" and generative models do.

The text is a defense of human exceptionalism. It argues that there is a "ghost in the machine", a soul, a "mere being", that data cannot capture.

However, from an intellectual and linguistic standpoint:

Meaning is relational (vector space), not just counting (Luhn).

Compression is the extraction of truth from noise.

Creativity is a probabilistic walk through latent space, something AI performs natively.

Carr misses the irony: By writing an essay to "distill" his complex feelings into text (a lossy compression format) to be read by others, he is engaging in the exact act of information processing he claims to despise.

Zander Arnao

But what about GPTs used in combination with the human mind? When they are framed as substitutes, intelligence is absolutely reduced to something sanitized and rule-bound. But surely if complemented by the human mind - through some excellent prompting - the synthesis of the two opens new pastures for expression and imagination?

Nicholas Carr

Yes, I agree that, in its application as a tool, AI can in some circumstances be an aid to thought and imagination. As I wrote in an earlier post, "The Myth of Automated Learning," AI is an automation technology that, like other automation technologies, can be used to amplify skill or erode it (skill in this case being intelligence or imagination). But I think we need to be realistic about how it is actually being used (particularly by students) — and I think the evidence is pretty clear that it's being used as a substitute for the work of the mind: reading, writing, thinking, imagination, etc.

Zander Arnao

That's a good point about how AI is *actually* being used to substitute for human intelligence in many cases (e.g., teenagers in schools). I do wonder, though, if in the long run social systems will adapt to screen out/disincentivize "substitutional" uses of AI, as the outputs are likely to be of lower quality than outputs created by workers who use AI to complement their skills and knowledge. For instance, I wonder if essays written by students who invest more of their time/energy on an AI-generated foundation will tend to receive higher grades than ones where humans were less in the loop during writing.

Andrew N

Great article. I think you would enjoy Bernardo Kastrup’s book Analytical Idealism in a Nutshell, which explores similar themes: the map is not the territory, and the map does not produce the territory, which is much of the AI premise.

Nicholas Carr

Thanks. I'll check it out.

Rrrrrr

The being of the machine is machine.

Thus, being a machine suffices to 'think' like a machine.

("Zapatero a tus Zapatos", i.e., cobbler, stick to your last)

Nicholas Carr sings it, recites it, and spells it out for those under the illusion of the imminent 'singularity'. All of which is very important, since the task of building AGI really does need to be undertaken soon (on why the urgency, see Stanisław Lem, Summa Technologiae).