6 Comments
George Hart

Bravo, and thank you! You've shared a concise and crystal clear framework for making sense of what's happened with the intrusion of mechanization into meaning-making. We can all ask, why did this intrusion happen, and why now?

Lantern Light Workshop

And yet many seem very happy with this, a sign of how lazy we are.

Rob Nelson

This articulation of the problem does two things that are largely absent from the AI discourse. It makes clear that LLMs, foundation models, or whatever we want to call these machines, have a history. And it makes clear that this history includes critics writing to understand the social problems created when we use computational algorithms to generate human-like language and cultural artifacts.

Superbloom is waiting for me at my local bookstore. I can't wait to get my hands on it.

Men's Media Network

Trying to imagine the OSI model with no humans anywhere in the loop. 😱 YouTube is wall-to-wall with AI-generated educational and informational videos with human-sounding audio. The AI often gives itself away with stupid errors, such as mispronouncing proper names, skipping over apostrophes, and pronouncing contractions as two separate words.

Tanner Harms

Nice article! I especially appreciated the breakdown of messages into speech, editorial, and transmission. However, even though machines have invaded these three domains, I still don’t think they ascribe meaning. Rather, they curate it. Even the most advanced AIs only synthesize data that, somewhere down the chain, has been evaluated (that is, given meaning) by people.

Bruce Watson

Long before machines entered the picture, people communicated orally. Message made, message received. No filter, no editor. That is the oral culture social media returns us to, one dominated by rumor, innuendo, and fear.
