Will the second machine age kill off account planning?

A while back, I mused about the coming of the second machine age and the singularity. As computers get better and better at doing more and more human jobs, what will this mean for humans? (You might ask what it will also mean for computers, but they’re not conscious yet and can’t entertain such questions.)

As the first machine age turned large swathes of people into the animate detritus of the early 20th century, will the second age do the same to ‘creative’ sectors like advertising and, self-interest to declare here, account planning?

Well, it definitely looks like the role will have to radically transform.

The ‘big data’ explosion of information-gobbling Leviathans across the globe is reducing the work planners have had to put into finding, comparing, cross-referencing, controlling and analysing data when working on accounts.

This, theoretically, frees up planners’ time: they can see more and do more, more quickly and more cheaply; their productivity skyrockets; their ‘insight-per-hour-of-labour’ ratio goes through the roof; and agencies and clients make buckets of money. Right?

There is no doubt that properly aggregated and deployed data helps make advertising/communications much more targeted and effective. There’s a whole industry in market and advertising intelligence services where users can manipulate near real-time data by tweaking a few parameters here and there.

But Chris Kocek, in his ‘The Practical Pocket Guide to Account Planning’, makes the point that, as useful as all these data analytics tools are, planners actually have to spend quite a lot of time calibrating them to ensure that the results are relevant, useful and meaningful. It could take so much time to tweak these systems that a planner may have been better off sitting in a café, running a focus group or door-stepping some shoppers to find that killer insight – or at least clarifying their objectives and getting the precise information required for the job through old-fashioned means.

In my previous blog on the second machine age, I mentioned John Searle, the philosopher who dogmatically argued that machines may ape consciousness but they’ll never possess meaning. I can see both sides to this argument (but don’t totally agree).

On one hand, computers will do whatever we tell them. And if we tell them to become so complex, and to cross-reference data so thoroughly, that they begin to make very meaningful associations about the natural and social world and then begin acting in ways consistent with those associations, who are we to say they don’t possess meaning?

On the other hand, I still think there’s a qualitative difference between that and being a human with a nervous system (the human brain is often described as the most complex structure known in the universe), with thoughts and passions and feelings about memories (computers are brilliant at memory, but not so good on feeling) and, deep down, an utterly incomprehensible irrationality kept in check by an unfathomably complex thing we call culture or society. (Does this bring me close to David Chalmers’ position on consciousness?)

Thought and feeling are inseparable anyway.

This is why we crave stories. Our civilisations are built around mystery and myth and meaning, all of which encompass our deepest truths, desires and fears.

If account planning is about uncovering these deep truths, based strongly on intuition, backed up by solid information, deployed by an agency to solve a client’s business problem by answering (and creating) people’s needs and desires, then how could machines replace this function without being (nearly) human themselves?

Only then might planners’ jobs be at risk from machines; but if machines eventually become as human as ourselves, then we’ll be colleagues so we might not even see things that way. Or, in another scenario, the robots will be better than us in every way, in which case we’re all doomed.

Eugene (not Oregon) on my mind

Only a few days after I wrote my previous blog, the news came in that a machine had passed the Turing Test. Proposed by Alan Turing, the WWII codebreaking genius and computer pioneer, the test is relatively simple: if a computer can convince a human in conversation that they are talking to another human, then the computer is intelligent.

Each year, the British Royal Society holds a competition, and until now no machine had passed the test. ‘Eugene Goostman’, the teenage simulacrum programmed by two Russians, convinced 10 out of 30 judges of his human credentials. Others are not so sure.

One of the judges, Robert Llewellyn, who played the android Kryten in the sci-fi sitcom Red Dwarf, admits to having been fooled for about ten minutes. There was some ineffable quality, some gut instinct telling him something was off but, interestingly, when he thought too hard about the exchange, he plumped for the warm-blooded brain each time. We are pattern-seeking entities, and our pre-cognitive instincts often seem to be much more incisive and accurate than our brain’s emergent properties that so distinguish us from all other sentient beings.

So has ‘Eugene’ blasted Ray Kurzweil’s prediction that the Turing Test would be passed between 2020 and 2050? Probably not, but maybe-sort-of.

Kurzweil, who is now leading Google’s attempts at creating artificial intelligence, has a habit of making accurate predictions – by his own count, 89 of his 108 predictions so far have come to pass. OK, so a thirty-year window for the emergence of AI is a bit vague, but that’s also a pretty good hit rate.

Kurzweil’s more specific prediction is that by 2029, computers will be able to do everything people do, but better, and that ‘The Singularity’ will be reached in 2045. The (technological) singularity is a hypothetical moment in the future when artificial intelligence surpasses human intelligence, radically altering our entire culture and world. This means that within thirty years, we humans will potentially give birth to our evolutionary successors. In a recent interview, not too long after taking up employment with Google (with his single-sentence job description), Kurzweil, who we’re told makes terrible coffee, makes the point that technological advancement has become exponential (see Moore’s Law) – each apparently small technological iteration is actually huge (think about how, because the thickness doubles with each fold, you can’t fold a sheet of paper more than eight or nine times). This is why the timeframe towards real artificial intelligence is so close.
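The paper-folding point can be made concrete with a few lines of Python – my own illustrative sketch, not from the interview, and the 0.1 mm sheet thickness is an assumption:

```python
# Illustrative sketch of exponential doubling, the dynamic behind Moore's Law
# and the paper-folding example. Assumes a sheet 0.1 mm thick; each fold
# doubles the thickness of the stack.
SHEET_THICKNESS_MM = 0.1

def thickness_after_folds(folds: int) -> float:
    """Thickness of the folded stack in millimetres after `folds` folds."""
    return SHEET_THICKNESS_MM * 2 ** folds

for n in (9, 20, 42):
    print(f"After {n} folds: {thickness_after_folds(n):,.0f} mm")
```

Nine folds already give a stack around 5 cm thick – too stiff to fold again – while a (hypothetical) 42 folds would stretch hundreds of thousands of kilometres. Each iteration looks small; the cumulative effect is enormous.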

The real power of The Singularity is its potency as a metaphor. It’s an enigmatic leitmotif that has woven its way into our popular culture, from being a key concept in the narrative of The Matrix to the mise en scène of The Terminator. The name has an aura of scientific truth about it, yet at the same time it carries much more ancient associations.

If you’re a pessimist, it conjures up the many ‘end of days’ narratives found in ancient cultures across the world – eschatology seems to be a human speciality, from the Mayan calendar to gnostic movements such as the Cathars to Christianity’s day of judgement (the last of these also giving Terminator 2 its subtitle).

If you’re an optimist, you might see The Singularity in terms of religious metaphors around paradise, heaven on earth and the Garden of Eden, or Enlightenment ideas about utopia and scientific positivism, whereby science and technology lead us inexorably to a better world.

The post-modern turn greatly problematised these ideas (something The Matrix so brilliantly captured) – reality is now something to be churned up and reconfigured, without fixed meaning (or with less of it than before, at least). We’re in the Age of Disruption.

Criticism and philosophical blatherings

We shouldn’t be too harsh on ‘Eugene’, though. He’s just a victim of the system. To ground any conversation about artificial intelligence, we need to keep in mind the limitations of the Turing Test. And for that, I like to think back to the philosophical milieu that contributed to Turing’s idea.

In 1949, the British philosopher Gilbert Ryle published ‘The Concept of Mind’. He, along with Ludwig Wittgenstein, was among a group of ‘ordinary language philosophers’ who, having failed to find evidence of a ‘soul’ or a singular site in the body responsible for consciousness, as Descartes had claimed, decided to look at things logically. In his book, Ryle tells a story about a man who asks to see Cambridge University; the dean obliges and brings him on a tour, showing him the library, lecture theatres and so on. At the end of the tour, the man says that he’s pleased to have seen the library, lecture theatres and all the other buildings, but he still hasn’t been shown the university. This is Ryle’s way of explaining a ‘category mistake’ – the ‘university’ just is the combination of all these buildings and their related functions. Likewise, you’ll never find ‘the mind’, because it’s not one single thing. Ryle made the point that our daily language leads us to use, unconsciously, concepts that misrepresent reality but that, by applying some logical analysis, we can clear up the mess.

Ryle is considered a ‘logical behaviourist’, partly because of his logical analysis of ordinary language and partly because, on his view, we only have our observation of others’ behaviour to go on when judging whether anyone other than oneself is conscious and intelligent. This is known in the biz as the ‘other minds problem’, and it has an ancient legacy. We can’t know for sure whether anyone or anything is conscious, but we can ascribe capacities based on our observation of those entities’ behaviour. Among his many contributions, Ludwig Wittgenstein developed logical truth tables to analyse complex propositions. Alan Turing would have been aware of Ryle’s and Wittgenstein’s ideas, and it’s this logical behaviourist position that formed the basis of the Turing Test: in a very simple, everyday, ordinary way, if something seems intelligent, it must be intelligent.
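Truth tables of the kind Wittgenstein used are easy to mechanise. Here’s a small Python sketch – my own example, not one from the text – that tabulates the proposition (p and q) → p and confirms it is a tautology, true in every row:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material implication: 'a -> b' is false only when a is true and b is false."""
    return (not a) or b

# Enumerate every assignment of truth values to p and q,
# evaluating the proposition (p and q) -> p for each row.
rows = [(p, q, implies(p and q, p)) for p, q in product([True, False], repeat=2)]

print("p      q      (p & q) -> p")
for p, q, value in rows:
    print(f"{p!s:<6} {q!s:<6} {value}")

# The proposition holds on every line of the table, so it is a tautology.
assert all(value for _, _, value in rows)
```

Swap in a different proposition – say, p → (p and q) – and the table immediately exposes the rows where it fails, which is exactly the clarifying work Wittgenstein wanted the notation to do.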

The idea is not without its critics, though. John Searle has made a career of insisting that it mixes up computation with understanding: machines are black boxes that take in data and churn out something that may mean something to the human reading the output, but the machine couldn’t care less. The machine can never possess ‘mental content’, it can never create meaning and, therefore, it does not have a mind. But isn’t this just a contrived way of saying a computer doesn’t have a soul?

Conversely, Daniel Dennett, a student of Gilbert Ryle, thinks of consciousness as an ‘emergent property’ of our physical bodies in relation to the physical world; we’re like any other animal, just much more complex. We are, and we are not, machines. We have free will, but it’s also a bit of an illusion. We have different ways of talking about this – a ‘soul’, a ‘mind’, etc. – but, he says, they all seek to describe the same phenomenon: the emergent properties of our physical bodies. Don’t be fooled, though: consciousness is essentially a bit of a ‘magic trick‘ played on us by evolution.

It’s not a million years (literally) since, during the Spanish conquest of South America, a debate raged among clerics about whether the native peoples were truly human. They looked human, they seemed to behave as humans, they seemed to display intelligence, but did they have souls? When Pope Paul III finally issued the papal bull Sublimus Dei in 1537, it was perhaps too late, but at least clerics agreed that it would be an abomination unto God to enslave the native peoples.

The great film Blade Runner is a meditation of sorts on this bloody episode in human history. Dredging up ancient history like this when discussing a chatbot passing the Turing Test (if it really did) may seem ridiculous, but science fiction is just as important an expression of our culture as any other art. As the cognitive scientist Marvin Minsky says, “We build complex machines not to understand how the machine works, but to understand ourselves”, and this is also the role of science fiction. It gives us permission, in the context of modernity and technology, to consider the deep moral and existential problems we face now and into the future.

‘Eugene’ is one small step for man towards better knowing our own humanity.