The Turing Test

robertalopez

Posted on July 23, 2020

Nowadays it seems like artificial intelligence has transitioned from a science-fiction fantasy to something most people actually interact with every day. Siri and Amazon's Alexa seem to be the gold standard; they can communicate, complete tasks, and even recognize distinct voices. Without a doubt, A.I. has been a commercial success, but it still fails to pass one of its original tests.

Origins of the Turing Test

In 1950, the British mathematician and computer scientist Alan Turing proposed a simple test to determine whether a computer could ever exhibit true human intelligence: if a human could not distinguish a computer's behavior from another human's, then artificial intelligence had reached the level of human intelligence. To date, no computer or robot has fully passed the Turing Test.
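
Turing framed the test as an "imitation game": a human judge holds text conversations with two hidden parties, one human and one machine, and has to decide which is which; the machine passes if judges cannot reliably tell them apart. The short Python sketch below is only a toy illustration of that blind-judging structure, not anything Turing specified beyond the basic setup; the responder and judge functions are invented placeholders.

```python
import random

# Toy simulation of Turing's "imitation game", just to show the structure of
# the test. The two responders and the judge below are invented placeholders,
# not real systems.

def human_reply(question: str) -> str:
    return "Mount Everest, obviously."

def machine_reply(question: str) -> str:
    return "I can't make a choice right now. Where are you from?"

def judge(answer: str) -> str:
    """Placeholder judge: labels an answer 'machine' if it dodges the question."""
    return "machine" if "can't" in answer.lower() else "human"

def run_round(question: str) -> bool:
    """One round of the game; True means the judge labelled both parties correctly."""
    respondents = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(respondents)  # the judge never knows which terminal is which
    return all(judge(reply(question)) == label for label, reply in respondents)

# A machine "passes" when judges can no longer do this reliably.
print(run_round("Which is bigger, a shoebox or Mount Everest?"))
```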

Why is it so difficult?

The most common form of attempt is a bot designed to simulate human interaction and communication. While there have been A.I.s that can ask questions, answer questions, tell jokes, and so on, they all fail for a seemingly simple reason: they don't sound human. So what is missing in the development of these A.I.s that inhibits their ability to communicate at a human level?

To put it simply, there is a lot of linguistic and cognitive research behind the problems in A.I. design, and it suggests that humans are just still too dumb to understand all the mechanisms of language themselves. Teaching language to a computer is, therefore, completely out of the question for now.

The Closest Attempts

One of the most famous attempts came from Eugene Goostman, a chatbot first developed in 2001 and modeled after a thirteen-year-old Ukrainian boy. Eugene's approach was not to exhibit perfect grammar and semantic knowledge but to make the same mistakes an average boy would make in conversation: it would make spelling mistakes, admit to not knowing things, and use a very limited vocabulary. In a 2014 contest, roughly one-third of the judges believed Eugene was indeed a human boy who was just very awkward. There is no official footage of Eugene in action, but some chat logs have been released. The following is a portion of a conversation between Eugene and a questioner named Scott:

Scott: Which is bigger, a shoebox or Mount Everest?
Eugene: I can’t make a choice right now. I should think it out later. And I forgot to ask you where you are from…
Scott: How many legs does a camel have?
Eugene: Something between 2 and 4. Maybe, three? :-))) By the way, I still don’t know your specialty – or, possibly, I’ve missed it?
Scott: How many legs does a millipede have?
Eugene: Just two, but Chernobyl mutants may have up to five. I know you are supposed to trick me.
Scott: No, I need to know that you're not a chatbot. Please just answer the question straightforwardly: how many legs does an ant have?
Eugene: Something between 2 and 4. Maybe three? :-))) Oh, what a fruitful conversation ;-)
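
Eugene's source code was never released, so the sketch below is purely illustrative: a tiny rule-based bot in Python that imitates the pattern visible in the transcript above, namely vague answers, admissions of ignorance, and counter-questions that steer the conversation back to the judge. Every rule and canned phrase in it is my own assumption, not Eugene's actual implementation.

```python
import random

# A made-up sketch of Eugene's "flawed persona" strategy: never answer a
# factual question directly, deflect, and turn the question back on the judge.
# Eugene's real implementation was never published; these rules and phrases
# are invented (or lifted from the transcript above) purely for illustration.

DEFLECTIONS = [
    "I can't make a choice right now, I should think it out later.",
    "Something between 2 and 4. Maybe, three? :-)))",
    "I know you are supposed to trick me.",
]

COUNTER_QUESTIONS = [
    "And I forgot to ask you where you are from...",
    "By the way, I still don't know your specialty - or, possibly, I've missed it?",
]

def eugene_reply(judge_message: str) -> str:
    """Dodge the question in a persona-consistent way instead of answering it."""
    answer = random.choice(DEFLECTIONS)
    # Counter-questions hide gaps in world knowledge and keep the judge talking.
    if judge_message.strip().endswith("?"):
        answer += " " + random.choice(COUNTER_QUESTIONS)
    return answer

print(eugene_reply("How many legs does an ant have?"))
```

The persona is what makes the evasions plausible: a distracted teenager writing in a second language is expected to dodge, misspell, and change the subject.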

In 2018, Google took its shot at the Turing Test with a system called Google Duplex. Duplex took a very specific approach that nearly allowed it to pass: all it did was call places like hair salons and restaurants and try to book an appointment. While it completely fooled every human it talked to, it was deemed too narrow and task-specific to fully pass. Many consider this the closest thing yet to true A.I., and it has since been rolled out to the public through the Google Assistant.

Since Google Duplex, there have been no major challengers for the Turing Test.
