ELIZA Effect

A term used in discussions of artificial intelligence. It refers to the tendency of people to falsely attach meaning and understanding to the symbols or words produced by AI technologies, beyond what the underlying system actually possesses.

Many attribute the term “ELIZA effect” to the ELIZA program written by Joseph Weizenbaum in the mid-1960s. ELIZA was one of the first examples of “chatterbot” technology that came close to passing a Turing test – that is, to fooling human users into thinking that a text response was sent by a human, not a computer. Many chatterbots work by taking in user phrases and reflecting them back in forms that look intelligent. In the case of ELIZA, Weizenbaum used the concept of a “Rogerian psychotherapist” to generate text responses: for instance, to the user input “My mother hates me,” the program might return: “Why do you believe your mother hates you?”
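
The mechanism can be illustrated with a short sketch. The following is a minimal, hypothetical pattern-matcher in the spirit of ELIZA, not Weizenbaum’s actual script: the rules, reflections, and fallback reply here are invented for illustration.

    import random
    import re

    # Illustrative pattern/response rules in the Rogerian style.
    # (Weizenbaum's real script used a much larger, ranked keyword list.)
    RULES = [
        (r"my (.+) hates me", ["Why do you believe your {0} hates you?"]),
        (r"i feel (.+)", ["How long have you felt {0}?",
                          "Why do you think you feel {0}?"]),
        (r"i am (.+)", ["Why do you say you are {0}?"]),
    ]

    # Swap first- and second-person words so echoed fragments read naturally.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

    def respond(user_input):
        for pattern, responses in RULES:
            match = re.search(pattern, user_input, re.IGNORECASE)
            if match:
                reply = random.choice(responses)
                return reply.format(*(reflect(g) for g in match.groups()))
        return "Please tell me more."  # fallback when no rule matches

    print(respond("My mother hates me"))
    # -> Why do you believe your mother hates you?

The program understands nothing: it only rearranges the user’s own words. The apparent intelligence lies in the reader, who projects meaning onto the echoed phrases.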

The effect also reflects the broader human tendency to attach associations to symbols from prior experience. For example, there is nothing magic about the symbol “+” that makes it well suited to indicate addition; it is simply that people associate it with addition. Using “+” or “plus” to mean addition in a computer language takes advantage of the ELIZA effect, as the sketch below illustrates.
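
In the hypothetical Python class below, “+” has no intrinsic meaning for the new type; the language simply dispatches it to a method, and the symbol reads as “combine” only because of the reader’s prior associations. The Playlist class and its contents are invented for illustration.

    class Playlist:
        def __init__(self, songs):
            self.songs = list(songs)

        def __add__(self, other):
            # "+" means "concatenate playlists" here only because we defined
            # it that way; it reads naturally because people already
            # associate "+" with addition.
            return Playlist(self.songs + other.songs)

    mix = Playlist(["Song A"]) + Playlist(["Song B"])
    print(mix.songs)  # ['Song A', 'Song B']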

The results of these programs can seem startlingly intelligent, and they were especially impressive at the time, when humans were first engineering AI systems.

The ELIZA effect can be useful in building “mock AI-complete” systems, but it can also mislead or confuse users. The idea remains useful in evaluating modern AI systems such as Siri, Cortana, and Alexa.
