
The mother of all language models: ELIZA and the ELIZA effect

Developed in 1966 by Joseph Weizenbaum, ELIZA was one of the first computer programs to simulate human conversation.

A screenshot of ELIZA and ChatGPT on a timeline from 1966 to 2025

In the world of artificial intelligence, ELIZA is an icon. Developed in 1966 by Joseph Weizenbaum, ELIZA was one of the first computer programs to simulate human conversation. It was intended to demonstrate the possibilities of communication between humans and machines, and its workings show how simple algorithms can evoke profound human reactions. The "DOCTOR" mode simulated a psychotherapist's side of a conversation using a simple method: ELIZA asked questions or gave answers, often phrased in the style of client-centred psychotherapy, by recognizing keywords in the user's input and returning canned phrases built around them. As a result, users often felt they were really talking to an understanding conversation partner.

User: "I have a problem with my father."

ELIZA: "Tell me more about your family!"

One remarkable aspect of ELIZA is the so-called Eliza effect. The term describes the phenomenon that people attribute human qualities such as empathy or understanding to a computer program, even though the program relies only on simple, rule-based interactions. Weizenbaum found that users often formed a deep emotional connection with ELIZA, despite the machine's obvious lack of real understanding or intelligence. For him, this led to unsettling insights into human nature and the relationship between humans and machines.

Screenshot of ELIZA (source: Wikipedia)

The basic idea behind ELIZA lives on, in greatly evolved form, in modern language models and chatbots. Today's systems, such as those from OpenAI and Google, use complex neural networks to carry the dialog and maintain contextual information across long conversations. These models are far more sophisticated and can be far more useful and engaging than ELIZA. Still, the question remains: aren't we experiencing a new form of the Eliza effect with these technologies?

Just as ELIZA created an impression of real communication, modern LLMs can tempt us into reading a depth into their interactions that is not actually there. While the ability of these models to mimic human speech is remarkable, we must remain aware that they do not (yet) possess true emotion or understanding.

In conclusion, ELIZA is not only a relic of computing history but also an important lesson in human-machine interaction. The Eliza effect shows how quickly we humans ascribe human characteristics to machines, and it prompts us to reflect on how we use modern technologies - and whether we may wrongly regard them as "intelligent". This raises the question: at what point is a machine actually intelligent?


Sources:

ELIZA - Wikipedia
ELIZA effect - Wikipedia