AI bots

Bots go back more than 65 years. In 1950 Alan Turing published "Computing Machinery and Intelligence". A summary by Jack Hoy can be found here.

But Turing's discussion of intelligent machinery did not begin in 1950. It goes back further, to his 1936 Turing machine as a model of the human mind, and to the developments that flowed out of wartime work at Bletchley Park. This line of work eventually led to chatbots that came close to passing the Turing Test.

The Turing Test is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give correct answers to questions, only how closely answers resemble those a human would give.

ELIZA, coded by Joseph Weizenbaum in the mid-1960s, simulated a Rogerian psychotherapist by rephrasing many of the patient's statements as questions and posing them back to the patient. It worked by simple pattern recognition and substitution of key words into canned phrases. It was so convincing, however, that there are many anecdotes about people becoming deeply emotionally involved with ELIZA, a phenomenon called the {!Eliza Effect:The Eliza Effect is the tendency of humans to attach associations to terms from prior experience. For example, there is nothing magic about the symbol “+” that makes it well-suited to indicate addition; it's just that people associate it with addition. Using “+” or “plus” to mean addition in a computer language is taking advantage of the ELIZA effect.}. ELIZA has many descendants.
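The pattern-recognition-and-substitution technique is simple enough to sketch in a few lines. The following Python snippet is only an illustration of the idea, not Weizenbaum's original script: the rules, reflection table, and replies are invented for the example.

```python
import re
import random

# Illustrative ELIZA-style responder. Each rule pairs a regex with canned
# reply templates; the captured fragment is echoed back after a simple
# first-person / second-person "reflection" pass.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"(.*)\bmother\b(.*)", re.I),
     ["Tell me more about your family."]),
]

FALLBACKS = ["Please go on.", "How does that make you feel?"]

def reflect(fragment):
    """Swap first- and second-person words so the echo sounds natural."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement):
    """Reply using the first matching rule, or a canned fallback."""
    for pattern, templates in RULES:
        match = pattern.search(statement)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

# One of the canned replies, e.g. "Why do you need your notebook?"
print(respond("I need my notebook"))
```

Note how little the program "understands": it never parses meaning, it only reflects the patient's own words back in question form, which is exactly why the Rogerian setting suited it so well.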

In the 21st century, versions of these programs (now known as “chatterbots”) continue to fool people. And they come in malignant and benign forms. You can have a safe conversation with Cleverbot, but not with CyberLover, which is malware. Have a look at these benign bots involved in a chatbot battle in 2012.

In 2014 a chatbot named “Eugene Goostman” was declared the first computer to pass the Turing test. That “success” is misleading, and exposes the Turing test's flaws: the bot gamed the system by claiming to speak English as a second language and by adopting the persona of a 13-year-old boy, who could plausibly dodge questions and give unpredictable answers.