The ‘Artificial’ in ‘Artificial Intelligence’ is very real – Professor John Lennox, Oxford University
Cognition means thinking. Your machine is not thinking. When people say AI, they don’t mean AI. What they mean is a lot of brute force computation – Roger Schank, Northwestern University
CAN we all just calm down a bit? In the last few weeks, we have been subject to a cascade of nonsense about both the benefits and the dangers of machine intelligence. It’s even possible that these issues might grip the imagination of the media and chattering classes all the way into the middle of June before making way for the next apocalyptic panic.
On the one hand it is suggested (for example) that algorithmic ‘intelligence’ could be used as a significant and genuinely creative teaching aid; on the other that AI might develop to the point of ‘singularity’ and decide that humankind is superfluous to its requirements. At this point, it is suggested, it might mark us as worthy objects of a sort of technological euthanasia. (This is a concern recently articulated by the ‘godfather’ of AI, Geoffrey Hinton: that it might go HAL-like.)
But this (occasionally histrionic, but more usually dull) set of discussions usually commits the sin of conflation: between ‘intelligence’ and ‘consciousness’. And any analysis of the AI phenomenon which assumes that machines are or will become conscious is a non-starter. It is a metaphysical impossibility that a machine can be or could become sentient. Why? Because although it might be able to tell a joke, algorithm-based intelligence could never get one. And it is the getting of the joke which has the deeper metaphysical significance.
I had better explain that. An algorithm is a system of rules ordered towards a certain aim. Computers, and therefore machines or robots, are no more than complex collections of human-designed algorithms. And even if these algorithms become clever enough to generate yet more complex algorithms, the original author remains the same – the person, more pertinently the human person, who designed the original algorithm.
The tedious syntax of the algorithm can never aspire to the kaleidoscopic majesty of the human mind. There is no alchemy which could convert artificial intelligence into genuine consciousness.
Let’s pause to think about what consciousness is and what it facilitates. Consciousness is not purely functional and cannot be reduced to patterns of behaviour. It allows us the joy of love and the heartbreak of loss. It gifts us the recognition of sin and grants us the ability to forgive the transgressions of others. Consciousness is ‘awareness of the world from the perspective of the first person’.
And while we can concede that in some ways the computer might be more intelligent than us, this is not an argument that it is conscious; in fact it might point to the opposite conclusion.
Here’s an example. Chess supercomputers will beat most Grandmasters, but it is only if they throw a Kasparov-style hissy fit on losing that something like consciousness might be thought to be peeking through the silicon. If a robot loses to the world champion and upturns the chess board in spontaneous disgust, now that might get my attention.
Which brings us to the importance of the joke. There are few things more human, and therefore more serious, than the ability to laugh. The nature of humour is a criminally under-described topic in the philosophy of mind. But let’s start with this question: what does it mean to get a joke?
Amusement, in general, is a form of conscious awareness, awareness of a pleasurable sort. It requires an appreciation of nuance, the ability to be misdirected, and, most importantly, the application of acts of imagination which are quite beyond the ‘brute force computations’ which sustain the algorithms which are the beating heart of the robot.
An AI system – at best – would be like the humourless bore at the dinner party who pretend-laughs at the exchange of jokes which surround him while feeling nothing inside.
Humour, amusement, laughter are sophisticated features of the human mind which are unavailable to the silicon ‘mind’.
To return to the start of this piece: AI might be a useful tool in the classroom, but always a limited one. It will never own the imagination necessary to appreciate the subtleties of a Shakespeare comedy (although to be fair, neither do I).
And if increasing reliance on AI leads to some sort of Armageddon it will not be because a machine has chosen this, because that machine will lack the capacity for choice. It will be because AI carries within it the technological hubris of its original architects and veers off in a direction that was unforeseeable: not so much Hal as the unfortunate consequence of Dr Frankenstein’s tinkering with the mechanisms of human life.
A final point. When people talk about the ethics of the AI program, they usually get things the wrong way around. The ethical problems involved are not about what might happen if machines become conscious. The real moral issue is that as it becomes increasingly clear that machines will always and in principle lack consciousness, we might go the other way and think of ourselves as being mere machines rooted in the naturalistic order rather than unique souls created in the image of God.
But I guess that’s another discussion. For now, let’s just laugh at the idea that a machine could ever get a joke.