Alan Turing had it right all along

We should see chatbots as digital assistants, not pseudo-humans

The UK last week announced that Alan Turing will be the face on the new fifty-pound note. Described by some as the father of computer science, Turing was famous for designing the machines that helped break the German Enigma code during the Second World War. But he also, in 1950, came up with the so-called ‘Turing Test’. At a time when many scientists were trying to create objective measures of machine intelligence, Turing proposed a simple test which stated, in essence, that if a computer could respond to a series of questions in a way that was indistinguishable from a human, then it was, for all intents and purposes, ‘intelligent’.

The ‘ELIZA’ programme, developed in the 1960s, was able to create the illusion of a human conversation through simple pattern-matching, and was followed in the 1990s by A.L.I.C.E. (Artificial Linguistic Internet Computer Entity). Almost all modern chatbots are designed to be self-learning, improving with usage and experience. The generally accepted goal is that chatbots should, eventually, become indistinguishable from humans, thus passing the ultimate version of the Turing Test.
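To see why pattern-matching creates only an illusion of conversation, consider a minimal sketch of the technique (in Python, with made-up rules rather than ELIZA’s actual script): the program matches keywords in the input and reflects captured fragments back, with no understanding of what is being said.

```python
import random
import re

# Hypothetical, illustrative rules: (pattern, reply templates).
# "{0}" is filled with the text captured by the pattern's group.
RULES = [
    (re.compile(r"\bi feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.*)", re.I),
     ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
]
FALLBACKS = ["Please tell me more.", "I see. Go on."]

def respond(utterance: str) -> str:
    """Match the input against each rule and reflect the captured
    fragment back; there is no understanding, only substitution."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            fragment = match.group(1).rstrip(".!?")
            return random.choice(templates).format(fragment)
    return random.choice(FALLBACKS)

print(respond("I feel nervous around chatbots"))
# e.g. "Why do you feel nervous around chatbots?"
```

Anything the rules do not cover falls through to a canned fallback, which is why conversations with such programs quickly break down outside their scripted territory.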

The problem is that, in order to do this, chatbots would have to develop emotional as well as intellectual intelligence. Pressure to achieve this comes from research evidence suggesting that humans respond better when they think they are talking to a human, whether they are or not. However, it’s reasonable to assume that the aim of making chatbots as human as possible isn’t the same as hoodwinking people into mistaking a bot for a human. The EU guidance on AI development principles is specific on this point: it should always be clear whether responses are coming from humans or from digital assistants. Moreover, while it may be amusing to ask a chatbot what colour its shoes are, it’s hard to see how the programme is improved by questions it can only answer with a lie.

Instead, perhaps it would be better to recognise digital assistants for what they are: a convenient means of searching a database and acting on the results. Empathetic responses can be left to the assistants best placed to provide them: human beings. After all, the Turing Test said nothing about machines needing emotional intelligence. Something to remember if, sometime next year, you find yourself in possession of a fifty-pound note.

Andrea Millwood Hargrave,
Director General, International Institute of Communications

Monday, 22 July 2019
