The "User-Generated Content" Ruse
The feed is the content.
Big social media companies are facing hundreds of personal-injury lawsuits claiming that their platforms have harmed people, particularly kids. Lawyers for the plaintiffs, which include individuals, states, and school districts, are modeling the suits on the successful litigation against cigarette companies at the end of the last century. Should the social media companies lose the suits, the first of which began this week in Los Angeles, they would face not just massive payouts but also the prospect of extensive new regulatory controls on their businesses, just as tobacco companies did.
The internet giants have armies of lawyers, and they’re spending millions to block the suits. They claim, as they always have in the past, that they’re shielded from such litigation by the 1996 Communications Decency Act. As the Wall Street Journal writes, in an editorial sympathetic to the companies, “The first problem with these cases is that Section 230 of the 1996 Communications Decency Act says internet platforms can’t be held liable for user-generated content.”1 But that old argument no longer holds water. The content produced by social media companies today is anything but “user-generated.” To think otherwise is to misunderstand how social media operates—and to misinterpret the scope of Section 230.
In 1996, when Congress passed the Communications Decency Act,2 the big internet companies were internet service providers, or ISPs. Their role was limited to providing customers with access to the net, usually through dial-up connections over telephone lines. The ISPs acted as common carriers, simply transmitting information created by others—a role similar to that of traditional telephone companies or even the post office. Just as it would have been unfair to hold a mailman liable for the content of the letters he delivered to people’s mailboxes, so it would have been unfair to hold ISPs liable for the content of the emails and web pages they delivered to people’s computers. Section 230 provides internet carriers with a safe harbor from litigation so long as they restrict themselves to transporting data and do not act as “publisher or speaker” of the content they deliver:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Back in the early days of social media, it could be argued that Section 230 still applied. When Facebook started up in 2004, for instance, it provided its users with templates for inputting and organizing personal profiles and messages, but its main role was to connect people through an online network so they could share the content they created. The users were the speakers and the publishers of the content. Facebook was the carrier of the content.
That all changed in 2006 when Facebook introduced its News Feed. The users no longer controlled what they saw when they logged on to the network; they now saw a “feed” of information that was controlled by the algorithms Facebook wrote. The company was no longer just a carrier of content. It had taken on an explicitly editorial role. Like the editors at newspapers or the producers at TV networks, it selected and arranged the information that its users saw. The users had become an audience for Facebook’s production.
The story of social media ever since has been a story of the refinement of feeds as a media product aimed at capturing and holding an audience. The platforms have invested billions of dollars in designing those feeds—what they contain, how they look, how they work—to make them as “engaging” as possible. To argue that the companies are still in the business of transmitting “user-generated content” is absurd. Saying that a social-media feed is the product of users is like saying that a hot dog is the product of cows.
The companies are not common carriers anymore; they’re media businesses. Yes, users still contribute posts and comments—though even those, in today’s era of influencers, creators, and AI, are often subsidized and actively shaped by the companies—but the essential content of social media is now the feeds produced by the platforms, not the individual messages posted by users. Go to Instagram and scroll through your feed. It’s obvious that what you’re experiencing is not discrete bits of user-generated content. It’s an elaborate, finely tuned media production manufactured by Instagram for an audience of one: you. The same goes for YouTube, X, TikTok, Facebook, Snapchat, Substack Notes, and, with a few exceptions, all the rest.
The feed is the content, and the social media company is its publisher. Period.
The question of whether social media companies should be held liable for harming people is a legally complex one, best answered in a court of law. And that’s what should happen: let the plaintiffs make their case, and let the defendants defend themselves. Section 230’s safe harbor doesn’t apply. Social media companies are, like other media companies, in the content-production business, and they’re responsible for their programming.
1. Section 230 makes no mention of “user-generated content.” That phrase didn’t come into common usage until the arrival of the social web several years later.
2. The Act ended up being thrown out as unconstitutional by the courts. Only Section 230 survived.