ELIZA Effect - Chum.Alum.World

ELIZA Effect

A term used in discussions of artificial intelligence. It refers to the tendency of people to falsely ascribe meaning, understanding, or intelligence to the symbols and words produced by AI technologies.

Many attribute the term “ELIZA effect” to the ELIZA program written by Joseph Weizenbaum in the mid-1960s. ELIZA was one of the first examples of “chatterbot” technologies that came close to passing a Turing test – that is, to fooling human users into thinking that a text response was sent by a human, not a computer. Many chatterbots work by taking in user phrases and spitting them back in forms that look intelligent. In the case of ELIZA, Weizenbaum used the concept of a “Rogerian psychotherapist” to provide text responses: for instance, to a user input “My mother hates me,” the program might return: “Why do you believe your mother hates you?”
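The pattern-matching approach described above can be sketched in a few lines of Python. This is a minimal illustration, not Weizenbaum's actual program: the original ELIZA used a much larger script of keywords and decomposition rules, and the single pattern and word list here are hypothetical simplifications.

```python
import re

# Hypothetical first-person to second-person word swaps; the real ELIZA
# script contained far more reflection and keyword rules than this.
REFLECTIONS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(phrase):
    """Swap first-person words for second-person equivalents."""
    return " ".join(REFLECTIONS.get(w, w) for w in phrase.lower().split())

def respond(user_input):
    """Turn a 'My X ...' statement into a Rogerian-style question."""
    match = re.match(r"my (.+)", user_input, re.IGNORECASE)
    if match:
        return f"Why do you believe your {reflect(match.group(1))}?"
    return "Please tell me more."

print(respond("My mother hates me"))
# Why do you believe your mother hates you?
```

Even this toy version shows why the effect works: the program understands nothing, yet merely reflecting the user's own words back as a question can read as attentive and intelligent.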

The term also describes the tendency of humans to attach associations to symbols based on prior experience. For example, there is nothing magic about the symbol “+” that makes it well-suited to indicate addition; people simply associate it with addition. Using “+” or “plus” to mean addition in a computer language takes advantage of the ELIZA effect.

The results of these programs can seem startlingly intelligent, and were especially impressive at a time when AI systems were first being engineered.

The ELIZA effect can be useful in building “mock AI-complete” systems, but can also mislead or confuse users. The idea may be useful in evaluating modern AI systems such as Siri, Cortana and Alexa.

©RGLN3, LLC 2022
