Slashdot reported
the results of this year's Loebner Turing Tournament. There's a $100k prize for whoever can write a chatbot that fools 30% of the humans interacting with it into believing it's a human at a keyboard somewhere and not a program. Apparently, one team fooled 25% of the judges.
Slashdot commenters were quick to pounce on the judging and to show up the online versions of the contestants.
One comment:
It took me three questions before Elbot replied with a non sequitur and about five minutes before it started repeating answers. It didn't take me long to realise that it had no concept of context - every reply was a reply to what I had just said, and had no relation to the last-but-one thing I'd said. Some things that tripped it up:
* Asking 'why?' about anything.
* Trying to teach it a new word.
* Asking it the square root of minus two (odd, since last year one of the judges asked questions like this to all of the bots).
* Anything about religion.
That 25% of the judges thought it was human is quite alarming.
Fair enough. But imagine now if you adopted the premise that half the people you engage with on a daily basis are nothing more than scripted chatbots covered in meat. If you made it your mission to figure out who were the bots and who were the humans, you'd alienate most of the humans pretty quickly.
Which suggests that the most convincing feature a chatbot might have is a well-scripted sense of personal insult or irritation. Things like:
"Dude, what the hell are you talking about?"
Or:
"Why the fuck are you asking me?"
In fact, I wouldn't be surprised if that were the secret ingredient in some of the more successful contestants. (That, and the careful distribution of typos and misspellings.) Nor would I be surprised if it's a feature they turn off in the online, non-competitive versions of their bots.
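Purely for illustration, here's a rough sketch of what that "secret ingredient" might look like: a handful of canned irritation lines fired whenever the bot has no scripted answer, plus a crude typo injector. None of this is based on any actual contestant; the names, responses, and rates are all made up.

```python
import random

# Hypothetical canned comebacks for when the bot has no real answer.
IRRITATED_REPLIES = [
    "Dude, what the hell are you talking about?",
    "Why are you even asking me that?",
    "Seriously? Ask me something that makes sense.",
]

def inject_typos(text, rate=0.03):
    """Occasionally swap adjacent letters to fake human sloppiness."""
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and random.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def reply(user_input, scripted_answers):
    """Return a scripted answer if one matches; otherwise feign irritation."""
    key = user_input.strip().lower()
    answer = scripted_answers.get(key)
    if answer is None:
        answer = random.choice(IRRITATED_REPLIES)
    return inject_typos(answer)

if __name__ == "__main__":
    scripts = {"hello": "Hey. What's up?"}
    print(reply("hello", scripts))
    print(reply("what is the square root of minus two?", scripts))
```

The point of the sketch is that dodging a question with irritation looks more human than answering it badly, which is exactly the loophole the Slashdot commenters were probing for.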