In Ovid’s Metamorphoses, the sculptor Pygmalion, a celibate by choice, sculpts a beautiful woman in ivory and falls in love with her. “He kisses it and feels his kisses are returned.” Nearly two thousand years later, in 1913, George Bernard Shaw uses Ovid’s story as the basis for his play Pygmalion, in which the phonetics professor Henry Higgins teaches the cockney guttersnipe Eliza Doolittle to speak the King’s English and in the process becomes bewitched by her. In 1956, Lerner and Loewe turn Shaw’s play into a celebrated musical, My Fair Lady, which in 1964 is adapted into a hit film starring Rex Harrison as Henry and Audrey Hepburn as Eliza. That same year, the MIT computer scientist and AI researcher Joseph Weizenbaum begins programming the first computer chatbot, which he names, as a joke, Eliza. He fails to foresee how his Eliza will prove just as seductive as her predecessors. In today’s Sunday Rerun, drawn from “A Thing Like Me,” the closing chapter of my 2010 book The Shallows, I tell the story of Weizenbaum, who, to his credit, did not fall for his creation.
It was one of the odder episodes in the history of computer science, yet also one of the more telling. Over the course of a few months in 1964 and 1965, Joseph Weizenbaum, a forty-one-year-old computer scientist at the Massachusetts Institute of Technology, wrote a software application for parsing written language, which he programmed to run on the university’s new time-sharing system. A student, sitting at one of the system’s terminals, would type a sentence into the computer, and Weizenbaum’s program, following a set of simple rules about English grammar, would identify a salient word or phrase in the sentence and analyze the syntactical context in which it was used. The program would then, following another set of rules, transform the sentence into a new sentence that had the appearance of being a response to the original. The computer-generated sentence would appear almost instantly on the student’s terminal, giving the illusion of a conversation.
In a January 1966 paper introducing his program, Weizenbaum provided an example of how it worked. If a person typed the sentence “I am very unhappy these days,” the computer needed only to know that the phrase “I am” typically comes before a description of the speaker’s current situation or state of mind. The computer could then recast the sentence into the reply “How long have you been very unhappy these days?” The program worked, Weizenbaum explained, by first applying “a kind of template to the original sentence, one part of which matched the two words ‘I am’ and the remainder [of which] isolated the words ‘very unhappy these days.’” It then used an algorithmic “reassembly kit,” tailored to the template, that included a rule specifying that “any sentence of the form ‘I am BLAH’” should be “transformed to ‘How long have you been BLAH,’ independently of the meaning of BLAH.”
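The template-and-reassembly idea Weizenbaum describes can be sketched in a few lines of modern code. This is only an illustration of the mechanism, not Weizenbaum's original program (which was written in MAD-SLIP and included many more rules, keyword rankings, and pronoun swapping); the two rules and the fallback reply here are my own illustrative choices.

```python
import re

# Each rule pairs a decomposition template with a reassembly pattern.
# These two rules are illustrative, not Weizenbaum's original script.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE),
     "Tell me more about your {0}."),
]

def respond(sentence: str) -> str:
    text = sentence.strip().rstrip(".!?")
    for template, reassembly in RULES:
        match = template.search(text)
        if match:
            # The captured "BLAH" is slotted into the reply
            # independently of its meaning.
            return reassembly.format(match.group(1))
    # Fall back to a content-free Rogerian prompt.
    return "Please go on."

print(respond("I am very unhappy these days."))
# -> How long have you been very unhappy these days?
```

The point of the sketch is how little machinery is involved: there is no model of the world, only surface pattern matching, which is exactly what made the program's apparent intelligence so startling once its workings were explained.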
Weizenbaum named his program Eliza, after Eliza Doolittle, the cockney flower girl who, in George Bernard Shaw’s Pygmalion, learns to speak proper English under the tutelage of the haughty phonetics professor Henry Higgins. To make the simulated conversations a little more interesting, Weizenbaum also gave his artificial interlocutor a persona—that of a Rogerian psychotherapist. The choice, as he explained in his paper, was a pragmatic one. Using a technique developed by the psychologist Carl Rogers in the 1940s, Rogerian therapists pretended, in their conversations with patients, to have no understanding of the world. For the most part, they simply parroted their patients’ statements back to them in the form of banal, open-ended questions or comments. Knowing that the naïveté was a pose, the patients were free to attribute to their therapists “all sorts of background knowledge, insights and reasoning ability.” The Rogerian persona, Weizenbaum recognized, had “a crucial psychological utility” for Eliza: it suggested that the program’s vacuity masked some sort of real intelligence.
Weizenbaum provided a transcript of part of an exchange between Eliza and a young woman who had been an early tester of the program (the two speakers alternate, beginning with the woman; the machine’s responses are the even-numbered lines):
Men are all alike.
In what way?
They’re always bugging us about something or other.
Can you think of a specific example?
Well, my boyfriend made me come here.
Your boyfriend made you come here.
He says I’m depressed much of the time.
I am sorry to hear you are depressed.
It’s true. I am unhappy.
Do you think coming here will help you to be not unhappy?
I need some help, that much seems certain.
What would it mean to you if you got some help?
Perhaps I could learn to get along with my mother.
Tell me more about your family.
My mother takes care of me.
Who else in your family takes care of you?
My father.
Your father.
You are like my father in some ways.
What resemblance do you see?
You are not very aggressive but I think you don’t want me to notice that.
What makes you think I am not very aggressive?
You don’t argue with me.
Why do you think I don’t argue with you?
You are afraid of me.
Does it please you to believe I am afraid of you?
My father is afraid of everybody.
In commenting on his creation, which he acknowledged to be a modest and even slightly silly contribution to the fledgling field of natural-language processing, Weizenbaum observed how easy it is for computer programmers to make machines “behave in wondrous ways, often sufficient to dazzle even the most experienced observer.” But as soon as a program’s “inner workings are explained in language sufficiently plain to induce understanding,” he continued, “its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible.” The program goes “from the shelf marked ‘intelligent’ to that reserved for curios.”
Weizenbaum, like Henry Higgins, was soon to have his equilibrium disturbed. Eliza quickly found fame on the MIT campus, becoming a mainstay of lectures and presentations about computing and time-sharing. It was among the first software programs able to demonstrate the power and speed of computers in a way that laymen could easily grasp. You didn’t need a background in mathematics, much less computer science, to chat with Eliza. Copies of the program proliferated at other schools as well. Then the press took notice, and Eliza became, as Weizenbaum later put it, “a national plaything.”
While he was surprised by the public’s interest in his program, what shocked him was how quickly and deeply people using the software “became emotionally involved with the computer,” talking to it as if it were an actual person. Users “would, after conversing with it for a time, insist, in spite of my explanations, that the machine really understood them.” Even his secretary, who had watched him write the code for Eliza “and surely knew it to be merely a computer program,” was seduced. After a few moments using the software at a terminal in Weizenbaum’s office, she asked the professor to leave the room because she was embarrassed by the intimacy of the conversation. “What I had not realized,” said Weizenbaum, “is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
Things were about to get stranger still. Distinguished psychiatrists and psychologists began to suggest, with considerable enthusiasm, that the program could play a valuable role in actually treating the ill and the disturbed. In an article in the Journal of Nervous and Mental Disease, three prominent research psychiatrists wrote that Eliza, with a bit of tweaking, could be “a therapeutic tool which can be made widely available to mental hospitals and psychiatric centers suffering a shortage of therapists.” Thanks to the “time-sharing capabilities of modern and future computers, several hundred patients an hour could be handled by a computer system designed for this purpose.” Writing in Natural History, the prominent astrophysicist Carl Sagan expressed equal excitement about Eliza’s potential. He foresaw the development of “a network of computer therapeutic terminals, something like arrays of large telephone booths, in which, for a few dollars a session, we would be able to talk with an attentive, tested, and largely non-directive psychotherapist.”
To converse with Eliza was to engage in a variation on the famous Turing test. But, as Weizenbaum was astonished to discover, the people who “talked” with his program had little interest in making rational, objective judgments about the identity of Eliza. They wanted to believe that Eliza was actually thinking. They wanted to imbue Eliza with human qualities—even when they were well aware that it was nothing more than a computer program following simple and rather obvious instructions. The Turing test, it turned out, was as much a test of the way human beings think as of the way machines think. In their Journal of Nervous and Mental Disease article, the three psychiatrists hadn’t just suggested that Eliza could serve as a substitute for a real therapist. They went on to argue, in circular fashion, that a psychotherapist was in essence a kind of computer: “A human therapist can be viewed as an information processor and decision maker with a set of decision rules which are closely linked to short-range and long-range goals.” In simulating a human being, however clumsily, Eliza encouraged human beings to think of themselves as simulations of computers.
The reaction to the software unnerved Weizenbaum. It planted in his mind a question he had never before asked himself but that would preoccupy him for many years: “What is it about the computer that has brought the view of man as a machine to a new level of plausibility?” In 1976, a decade after Eliza’s debut, he provided an answer in his book Computer Power and Human Reason. To understand the effects of a computer, he argued, you had to see the machine in the context of humankind’s past intellectual technologies, the long succession of tools that transformed how people think and altered their “perception of reality.” Such technologies become part of “the very stuff out of which man builds his world.” Once adopted, they can never be abandoned, at least not without plunging society into “great confusion and possibly utter chaos.” An intellectual technology, he wrote, “becomes an indispensable component of any structure once it is so thoroughly integrated with the structure, so enmeshed in various vital substructures, that it can no longer be factored out without fatally impairing the whole structure.”
That fact, almost “a tautology,” helps explain how our dependence on digital computers grew steadily and seemingly inexorably after the machines were invented at the end of the Second World War. “The computer was not a prerequisite to the survival of modern society in the post-war period and beyond,” Weizenbaum argued; “its enthusiastic, uncritical embrace by the most ‘progressive’ elements of American government, business, and industry made it a resource essential to society’s survival in the form that the computer itself had been instrumental in shaping.” He knew from his experience with time-sharing networks that the role of computers would expand beyond the automation of governmental and industrial processes. Computers would come to mediate the activities that define people’s everyday lives—how they learn, how they think, how they socialize. What the history of intellectual technologies shows us, he warned, is that “the introduction of computers into some complex human activities may constitute an irreversible commitment.” Our intellectual and social lives may, like our industrial routines, come to reflect the form that the computer imposes on them.
What makes us most human, Weizenbaum had come to believe, is what is least computable about us—the connections between our mind and our body, the experiences that shape our memory and our thinking, our capacity for emotion and empathy. The great danger we face as we become more intimately involved with our computers—as we come to experience more of our lives through the disembodied symbols flickering across our screens—is that we’ll begin to lose our humanness, to sacrifice the very qualities that separate us from machines. The only way to avoid that fate, Weizenbaum wrote, is to have the self-awareness and the courage to refuse to delegate to computers the most human of our mental activities and intellectual pursuits, particularly “tasks that demand wisdom.”
In addition to being a learned treatise on the workings of computers and software, Weizenbaum’s book was a cri de coeur, a computer programmer’s passionate examination of the limits of his profession. The book did not endear the author to his peers. After it came out, Weizenbaum was spurned as a heretic by leading computer scientists, particularly those pursuing artificial intelligence. John McCarthy, one of the early AI pioneers and promoters, spoke for many technologists when, in a mocking review, he dismissed Computer Power and Human Reason as “an unreasonable book” and scolded Weizenbaum for unscientific “moralizing.” Outside the data-processing field, the book caused only a brief stir. It appeared just as the first personal computers were making the leap from hobbyists’ workbenches to mass production. The public, primed for the start of a buying spree that would put computers into every office, home, and school in the land, was in no mood to entertain an apostate’s doubts.
Thanks for the re-run. Recent news articles here in Australia have mentioned people using ChatGPT and the like as affordable, accessible forms of therapy, but I'd forgotten all about Weizenbaum.
It's interesting that this particular application of AI should go back to the beginning of the technology. What is it in the technology that makes it so seductive? How is it possible that even those who knew it was just running a bit of code fell under its spell?
Prof McLuhan offers us the myth of Narcissus, and myth is perhaps the best way to appreciate the strange ways we humans respond to technology. The following is a quote from ‘The Agenbite of Outwit’ (Location, 1963):
As Narcissus fell in love with an outering (projection, extension) of himself, man seems invariably to fall in love with the newest gadget or gimmick that is merely an extension of his own body. Driving a car or watching television, we tend to forget that what we have to do with is simply a part of ourselves stuck out there. Thus disposed, we become servo-mechanisms of our contrivances, responding to them in the immediate, mechanical way that they demand of us. The point of the Narcissus myth is not that people are prone to fall in love with their own images but that people fall in love with extensions of themselves which they are convinced are not extensions of themselves. This provides, I think, a fairly good image of all of our technologies, and it directs us towards a basic issue, the idolatry of technology as involving a psychic numbness.
(Quoted here: https://mcluhansnewsciences.com/mcluhan/2014/08/mcluhan-and-plato-4-narcissus/#fn-7386-5)
The 'numbness' McLuhan mentions, relating Narcissus to narcosis, refers to the numbing of the senses whereby people remain oblivious to the psychic and social effects of new technology. Idolatry is a well-chosen word: the veneration of something that we have made with our own hands.
Note that the words quoted above were written around the same time as Eliza was being programmed.