First Chatbot

Who: ELIZA, Joseph Weizenbaum
What: First chatbot
Where: United States (Cambridge)
When: January 1966

The first computer program to simulate human conversation was ELIZA, developed by computer scientist Joseph Weizenbaum (USA, b. DEU) at the Massachusetts Institute of Technology between 1964 and 1966. Running on an IBM 7094 mainframe, ELIZA was designed to mimic the deliberately vague questioning style of a Rogerian psychotherapist. The details of the program were published in January 1966.

Users interacted with ELIZA by typing what they wanted to say into a teletype machine. Their message would then be processed by the program, which was running on a mainframe computer elsewhere on MIT's campus. The computer's response would then be automatically typed out on the machine.

ELIZA first scanned the user's input for keywords. If it found one, it generated a response based on a set of rules associated with that keyword (such as "mother", "husband" or "depressed"), and the response was modified by the context in which the keyword appeared. If no keyword or phrase was found, it picked from a list of generic statements or questions.
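In outline, this keyword-matching loop can be sketched in a few lines of Python (ELIZA itself was written in MAD-SLIP, and the keywords and canned responses below are invented for illustration rather than taken from Weizenbaum's script):

```python
import random
import re

# Illustrative keyword rules: each keyword maps to a set of canned responses.
# Weizenbaum's actual script paired keywords with decomposition and
# reassembly patterns; this sketch only shows the basic idea.
RULES = {
    "mother": ["Tell me more about your family.",
               "How do you feel about your mother?"],
    "husband": ["Why do you mention your husband?"],
    "depressed": ["I am sorry to hear that you are depressed."],
}

# Generic fall-back lines used when no keyword is found.
DEFAULTS = ["Please go on.", "What does that suggest to you?", "I see."]

def respond(user_input: str) -> str:
    words = re.findall(r"[a-z']+", user_input.lower())
    for word in words:
        if word in RULES:
            return random.choice(RULES[word])
    return random.choice(DEFAULTS)

print(respond("My mother takes care of me"))   # keyword hit -> family response
print(respond("It is raining outside today"))  # no keyword -> generic response
```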

Weizenbaum added more complexity to this basic set-up by assigning different weights to certain keywords or combinations of keywords, so the range of outputs was much wider than if the program had simply mapped each keyword to a fixed response. He also added the ability to read what he called "minimal context", which made assumptions about the meaning of the input based on the occurrence of particular phrases ("you" followed by "are", for example, was assumed to be a statement about ELIZA, whereas "I" followed by "am" was interpreted as a statement about the user).
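Again purely as an illustration (the weights and response templates here are invented for the sketch, not drawn from Weizenbaum's published script), keyword ranking and a "minimal context" rule might look like this:

```python
import re
from typing import Optional

# Invented keyword weights: when several keywords appear in one input,
# the highest-weighted keyword wins.
WEIGHTS = {"depressed": 3, "mother": 2, "husband": 2, "i": 1, "you": 1}

def best_keyword(user_input: str) -> Optional[str]:
    words = re.findall(r"[a-z']+", user_input.lower())
    found = [w for w in words if w in WEIGHTS]
    return max(found, key=WEIGHTS.get) if found else None

# "Minimal context": "I am ..." is read as a statement about the user,
# while "you are ..." is read as a statement about ELIZA.
def contextual_response(user_input: str) -> Optional[str]:
    text = user_input.lower()
    match = re.search(r"\bi am (.+)", text)
    if match:
        return f"How long have you been {match.group(1)}?"
    match = re.search(r"\byou are (.+)", text)
    if match:
        return f"What makes you think I am {match.group(1)}?"
    return None

print(best_keyword("My mother says I am depressed"))   # -> "depressed"
print(contextual_response("I am very unhappy"))        # -> question about the user
```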

The whole program was extremely simple, around 200 lines of code in total, but it was surprisingly effective at engaging users. When he began to test it with students and colleagues, Weizenbaum was astonished by how quickly people started to treat it like a person. People would confess deep secrets or have moments of great personal insight thanks to the output of a few lines of code.

This tendency to attribute understanding to the program became known as the "ELIZA effect", and Weizenbaum argued that it exposed a problem with the traditional "Turing test" definition of machine intelligence. This definition, based on a test proposed by British mathematician Alan Turing, posits that a computer can be considered intelligent when it can convincingly imitate a person in conversation. Weizenbaum noted that the human tendency to anthropomorphize things means that people will inevitably infer intelligence from even very simple imitations of speech.