Only a few short days after I wrote my previous blog post, news came in that a machine had passed the Turing Test. It’s a relatively simple test. Alan Turing, the WWII codebreaking genius and computer pioneer, proposed that if a computer can convince a human in conversation that they are talking to another human, then the computer is intelligent.
This year’s test, held at the Royal Society in London, was the first at which a machine passed: ‘Eugene Goostman’, a teenage simulacrum programmed by two developers, convinced 10 out of 30 judges of his human credentials. Others are not so sure.
One of the judges, Robert Llewellyn, who played the android Kryten in the sci-fi sitcom Red Dwarf, admits to having been fooled for about ten minutes. There was some ineffable quality, some gut instinct telling him something was off, but, interestingly, when he thought too hard about the exchange, he plumped for the warm-blooded brain each time. As pattern-seeking entities, our pre-cognitive instincts often seem to be far more incisive and accurate than the emergent reasoning of the brain that so distinguishes us from other sentient beings.
So has ‘Eugene’ blasted Ray Kurzweil’s prediction that the Turing Test would be passed between 2020 and 2050? Probably not, but maybe-sort-of.
Kurzweil, who is now leading Google’s attempts at creating artificial intelligence, has a habit of making accurate predictions – he says of himself that, of his 108 predictions so far, 89 have come to pass. OK, so a thirty-year window for the emergence of AI is a bit vague, but that’s also a pretty good hit rate.
Kurzweil’s more specific prediction is that by 2029, computers will be able to do everything people do, but better, and that ‘The Singularity’ will be reached in 2045. The (technological) singularity is a hypothesised moment in the future when artificial intelligence surpasses human intelligence, radically altering our entire culture and world. This means that within thirty years, we humans will potentially give birth to our evolutionary successors. In a recent interview, not long after taking up employment with Google (with his single-sentence job description), Kurzweil, who we’re told makes terrible coffee, makes the point that technological advancement has become exponential (see Moore’s Law) – each apparently small technological iteration is actually huge (think about how, due to exponential growth, you can’t fold a sheet of paper more than eight or nine times). This is why the timeframe for real artificial intelligence is so close.
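To see why exponential doubling defeats intuition so quickly, here is a quick back-of-the-envelope sketch of the paper-folding example (assuming, hypothetically, a 0.1 mm sheet whose thickness doubles with every fold; the function name is my own):

```python
# Illustration of exponential doubling via the paper-folding example.
# Assumes a sheet 0.1 mm thick; each fold doubles the total thickness.
def thickness_after_folds(folds, start_mm=0.1):
    """Thickness in millimetres after the given number of folds."""
    return start_mm * 2 ** folds

for folds in (7, 20, 42):
    metres = thickness_after_folds(folds) / 1000  # mm -> m
    print(f"{folds:2d} folds -> {metres:,.2f} m")
```

After seven folds the stack is barely a centimetre thick; after 42 it would, in principle, stretch past the Moon. That compounding is exactly why Kurzweil’s timelines feel implausibly near and yet keep roughly coming true.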
The real power of The Singularity is its potency as a metaphor. It’s an enigmatic leitmotif that has woven its way into our popular culture, from being a key concept in the narrative of The Matrix to the mise en scène of The Terminator. The name has an aura of scientific truth about it, yet at the same time it carries much more ancient associations.
If you’re a pessimist, it conjures up the many ‘end of days’ narratives found in ancient cultures across the world – eschatology seems to be a speciality of humans, from the Mayan calendar to gnostic movements such as the Cathars to Christianity’s Day of Judgement (the latter also being the subtitle of Terminator 2).
If you’re an optimist, you might see The Singularity in terms of religious metaphors around paradise, heaven on earth and the Garden of Eden, or Enlightenment ideas about utopia and scientific positivism, whereby science and technology lead us inexorably to a better world.
The post-modern turn greatly problematised these ideas (something The Matrix so brilliantly captured) – reality is now something to be churned up and reconfigured, without fixed meaning (or with less of it than before, at least). We’re in the Age of Disruption.
Criticism and philosophical blatherings
We shouldn’t be too harsh on ‘Eugene’, though. He’s just a victim of the system. To ground any conversation about artificial intelligence, we need to keep in mind the limitations of the Turing Test. And for that, I like to think back to the philosophical milieu that contributed to Turing’s idea.
In 1949, the British philosopher Gilbert Ryle published ‘The Concept of Mind’. He and Ludwig Wittgenstein were among a group of ‘ordinary language philosophers’ who, having failed to find evidence of a ‘soul’ or of a singular site in the body responsible for consciousness, as Descartes had claimed, decided to look at things logically. In his book, Ryle tells a story about a man who asks to see Cambridge University; the dean obliges and brings him on a tour, showing him the library, lecture theatres and so on. At the end of the tour, the man says that he’s pleased to have seen the library, lecture theatres and all the other buildings, but that he still hasn’t been shown the university. This is Ryle’s way of explaining a ‘category mistake’ – the ‘university’ just is the combination of all these buildings and their related functions. Likewise, you’ll never find ‘the mind’, because it’s not one single thing. Ryle’s point is that, in our daily language, we unconsciously use concepts that misrepresent reality, but by applying some logical analysis we can clear up the mess.
Ryle is considered a ‘logical behaviourist’, partly because he analysed ordinary language logically and partly because, on his view, we only have our observation of others’ behaviours to go on when judging whether anyone other than ourselves is conscious and intelligent. This is known in the biz as the ‘other minds problem’, and it has an ancient legacy: we can’t know for sure whether anyone or anything is conscious, but we can ascribe capacities based on our observation of those entities’ behaviours. Among his many contributions, Wittgenstein developed logical truth tables to analyse complex propositions. Alan Turing would have been aware of Ryle’s and Wittgenstein’s ideas, and it’s this logical behaviourist position that formed the basis of the Turing Test: in a very simple, everyday, ordinary way, if something seems intelligent, it must be intelligent.
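As an aside, the kind of truth-table analysis Wittgenstein popularised is mechanical enough that a few lines of code can sketch it – here is a minimal, illustrative example (the helper name `implies` is my own) that tabulates the proposition (p → q) ∧ p and checks that it entails q in every row, i.e. modus ponens:

```python
from itertools import product

def implies(a, b):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not a) or b

# Tabulate the compound proposition (p -> q) and p across every
# assignment of truth values, checking that the conclusion q holds
# whenever the premise does.
print("p      q      (p->q) & p   entails q")
for p, q in product([True, False], repeat=2):
    premise = implies(p, q) and p
    print(f"{p!s:6} {q!s:6} {premise!s:12} {(not premise) or q}")
```

Each row is one possible world; the proposition is analysed purely by the truth values of its parts – no appeal to minds, meanings or souls required, which is very much in the spirit of the behaviourist move.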
The idea is not without its critics, though. John Searle has made a career of insisting – most famously in his ‘Chinese Room’ thought experiment – that it mixes up computation with understanding: machines are black boxes that take in data and churn out something that may mean something to the human reading the output, but the machine couldn’t care less. The machine can never possess ‘mental content’, it can never create meaning and, therefore, it does not have a mind. But isn’t this just a contrived way of saying a computer doesn’t have a soul?
Conversely, Daniel Dennett, a student of Gilbert Ryle, thinks of consciousness as an ‘emergent property’ of our physical bodies in relation to the physical world; we’re like any other animal, except much more complex. We are, and we are not, machines. We have free will, but it’s also a bit of an illusion. We have different ways to talk about this – a ‘soul’, a ‘mind’, etc. – but, he says, they all seek to describe the same phenomenon: the emergent properties of our physical bodies. Don’t be fooled, though: consciousness is essentially a ‘magic trick’ played out by evolution.
It’s not a million years (literally) since, during the Spanish conquest of the Americas, a debate raged between clerics about whether the native peoples were truly human. They looked human, they seemed to behave as humans, they seemed to display intelligence, but did they have souls? When Pope Paul III eventually issued the papal bull Sublimis Deus in 1537, it was perhaps too late, but at least the clerics agreed that it would be an abomination unto God to enslave the native peoples.
The great film Blade Runner is a meditation of sorts on this bloody episode in human history. Dredging up ancient history like this when discussing a chatbot passing the Turing Test (if it really did) may seem ridiculous, but science fiction is just as important an expression of our culture as any other art. As the cognitive scientist Marvin Minsky says, “We build complex machines not to understand how the machine works, but to understand ourselves”, and this is also the role of science fiction. It gives us permission, in the context of modernity and technology, to consider the deep moral and existential problems we face now and into the future.
‘Eugene’ is one small step for man towards better knowing our own humanity.