If, as Marx argued, capital is dead labor, then the products of large language models might best be understood as dead speech. Just as factory workers produce, with their “living labor,” machines and other forms of physical capital that are then used, as “dead labor,” to produce more physical commodities, so human expressions of thought and creativity—“living speech” in the forms of writing, art, photography, and music—become raw materials used to produce “dead speech” in those same forms. LLMs, to continue with Marx’s horror-story metaphor, feed “vampire-like” on human culture. Without our words and pictures and songs, they would cease to function. They would become as silent as a corpse in a casket.
The generative-AI strategy is in one sense just a new variant on the central business strategy of the online economy. Whatever else the internet may be, it is a vast repository of human speech, digitized and machine-readable, that can be put to use by businesses to produce goods and services. For internet companies, speech becomes “user-generated content,” a private business asset. What makes the strategy so powerful, and so lucrative, is that all that digitized speech can be used without payment to its creators. It’s there for the taking. Even the earliest online operations, like the WELL and America Online, turned users’ speech into content that could be sold back to them. Google went further with its search engine. Not only did it “spider” all the world’s web pages, sucking in their content to generate its own content (in the form of search results and snippets), but it also realized that one overlooked element of user-generated content—hyperlinks—held enormous latent value. Each link was a marker of human judgment and human attention. Google built its fortune on the collection and parsing of people’s scribblings and codings.
With the arrival of the social web, or “Web 2.0,” social media companies pushed the scheme still further. To create huge pools of free content to distribute through their proprietary networks, they built complex, psychologically sophisticated systems that encouraged people to speak — and speak and speak and speak — in a variety of digital forms. Then the companies claimed ownership of all that speech. Back in 2006, I explained the essence of this media model through an analogy to agricultural sharecropping. Friendster, MySpace, Facebook, and all the other social networks that emerged in their wake were the web’s plantation owners, controlling vast tracts of internet real estate. They lent each of their member-tenants a little plot of virtual land, along with a set of software tools, to cultivate an online identity, and then they reaped the monetary value of the resulting content by affixing advertisements to it.
One of the fundamental economic characteristics of Web 2.0 is the distribution of production into the hands of the many and the concentration of the economic rewards into the hands of the few. It’s a sharecropping system, but the sharecroppers are generally happy because their interest lies in self-expression or socializing, not in making money, and, besides, the economic value of each of their individual contributions is trivial. It’s only by aggregating those contributions on a massive scale — on a web scale — that the business becomes lucrative.
Some would argue that this system was exploitative from the start — that the companies essentially stole people’s public speech and used it to spin private profits. Technology opened a hole in copyright law, and internet firms rushed through it before anyone else, including lawmakers and judges, knew what was going on. Others would argue that it was all fair trade. Consumers got a valuable and fun new means of communication, and they happily, if not altogether knowingly, paid for it with their speech (and their attention).[1] There’s truth in both views.
Large language models give the business model a new, and very strange, twist. OpenAI and its LLM-building competitors claim the entirety of human culture — or at least that part of it that can be rendered in digital form — as their raw material. They use living speech not as content for consumer consumption but as content for machine consumption — as inputs for the manufacture of artificial speech. By automating formerly human acts of expression, they’ve created a vast new supply of speech that, while dead, is understandable, meaningful, and useful to people. The artificial speech, output on demand in real time, becomes another source of cheap media content — a seemingly infinite source, in fact.
But the dead speech serves another purpose, too — a more intimate one. Individuals can use the artificial speech as a cheap substitute for their own speech. Rather than laboring at writing a term paper or a business memo or a wedding toast or a sermon, a person can simply use the machines to produce the required content (refining or personalizing it a bit, perhaps, with a few tweaks). It’s fast. It’s effortless. And, for the most part, it’s cost-free. Dead speech becomes a reasonable substitute for living speech.
In his book Non-things, the philosopher Byung-Chul Han draws a distinction between two styles of reading: the pornographic and the erotic. The pornographic reader “is looking for something to be uncovered.” He wants to get to the point, as expeditiously as possible. The erotic reader takes pleasure in the act of reading itself. He “lingers” with the words. “The words are the skin, and the skin does not enclose a meaning.” I would broaden Han’s distinction to describe perception in general. The pornographic mind is concerned only with what can be made explicit, what can be turned into information. It seeks to pierce the obscuring veils of mystery and wonder, beauty and ambiguity, to get to the gist of the matter. The erotic mind likes the veils. It sees them not as obscuring but as pleasurable and even revelatory.
The mind of the LLM is purely pornographic. It excels at the shallow, formulaic crafts of summary and mimicry. The tactile and the sensual are beyond its ken. The only meaning it knows is that which can be rendered explicitly. For a machine, such narrow-mindedness is a strength, essential to the efficient production of practical outputs. One looks to an LLM to pierce the veils, not linger on them. But when we substitute the LLM’s dead speech for our own living speech, we also adopt its point of view. Our mind becomes pornographic in its desire for naked information.
The question is: how far will we go in making that exchange? It would be foolish to suggest that dead speech will supplant living speech in all cases. Automation has its limits. Just as there are qualities of human labor that remain well beyond the reach of machines, so there are qualities of human speech that chatbots are unlikely to ever replicate. But what exactly are those qualities, and how important are they to us, really? To put it into the coarse terms of business, will those distinctively human qualities survive competition with labor-saving alternatives? Many of us like to believe that self and speech are deeply, inextricably intertwined. The self emerges through acting in the world, but it also emerges through speaking in the world. That proposition is about to be put to the test. We may discover that dead speech is sufficient for our purposes.
[1] The state of affairs became more complicated when a set of users succeeded in building sizable audiences themselves. These “influencers,” as we came to call them, started generating content not for socializing but for commercial gain. The platforms adapted. In an extension of the sharecropping model, they agreed to share the ad revenues that the influencers’ content generated.
Mr Carr, sad to see Rough Type go, but glad to see you here.
Writing this brief note to show my appreciation for your writing, especially The Shallows. I've read the book at least a half-dozen times and, as with all works of substance, each reading reveals something new. Your writing convinced me to read McLuhan, and that changed how I see, well, just about everything.
Looking forward to the new book and to reading more of what you may post here. If you happen to still live in Boulder (I believe I read that in an interview somewhere), I'd always be happy to show my gratitude with your beverage of choice.
What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared that the truth would be concealed from us. Huxley feared that the truth would be drowned in a sea of irrelevance. Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture, preoccupied with some equivalent of the feelies, the orgy-porgy, and the centrifugal bumblepuppy. - Amusing Ourselves to Death, Neil Postman