Misidentification of Humans as Machines in Turing Tests

Image Credit: Bletchley Park Trust

Alan Turing led a team of codebreakers at Bletchley Park that cracked the German Enigma machine cipher during WWII – but that is far from his only legacy.

In the year of the 100th anniversary of his birth, researchers published a series of ‘Turing tests’ in the Journal of Experimental & Theoretical Artificial Intelligence. These entailed five-minute conversations between a human judge and either a machine or another human; the judges were tasked with identifying whether they were talking to a human or a computer. Can machines succeed at ‘being human’ in real conversations? The transcripts presented in the paper reveal fascinating insights into human interaction and our understanding of artificial intelligence.

In 12 out of 13 tests, the judge wrongly identified the interlocutor as a machine when in fact they were human. Turing tests were designed to study machine ‘thinking’ through language and ultimately to establish whether a machine could fool an interrogator into believing it was genuinely human. So why, in this case, did so many judges believe the reverse?

The cursory conversations were quite one-dimensional, for example:

Judge: Do you like cooking?

Entity: no you?

Judge: Yes. Do you like eating?

Entity: yes!

Judge: What is you fav meal of all time?

Entity: i don’t know there are so many?

Judge: Give me one then

Entity: pizza you?

Did such mundane talk give the impression of being machine generated? Other transcripts revealed humour, geographical and historical knowledge, a lack of general knowledge, evasion, misunderstanding, dominance and use of slang. All of these are traits associated with humanity, yet in these instances they seemed to skew the decision-making process, leading the judges to the wrong choice. This novel research shows that humans are not always able to recognise what is very typically human, let alone artificial intelligence.

In 1950 Turing asked, “Can machines think?” The authors write: “to ‘think’ merely means ‘to be of the opinion’ or to ‘judge’, which indeed the judges were… As a result we can conclude that thinking does not require understanding or specific knowledge, although in the human case both facilities are likely to help”.

You can read the entire published article online HERE.

Source: Taylor & Francis


Kevin Warwick & Huma Shah (2014): Human misidentification in Turing tests, Journal of Experimental & Theoretical Artificial Intelligence, DOI: 10.1080/0952813X.2014.921734