Monday, July 15, 2024

Why it’s stupid to fear Artificial Intelligence


APPARENTLY Artificial Intelligence is on the point of becoming truly intelligent. It’s taking our jobs, will shortly become conscious and thus make humans irrelevant. This story appears regularly, about as often as the promise of self-driving cars, and seems about as likely to be true. Sean Walsh had a go at busting this myth last year on these pages, but it needs a periodic kicking and today it is my turn to put the boot in.

Defining intelligence is remarkably difficult. Alan Turing provided a famous test, which bears his name: if a computer acts, reacts and interacts like a sentient being, then call it sentient. Turing suggested an ‘imitation game’: a remote human interrogator must distinguish between a computer and a human subject based on their replies to various questions. Fans of Blade Runner will be familiar with the Voight-Kampff test, a series of questions used by the LAPD Blade Runner unit to distinguish androids from humans (eg ‘Describe in single words only the good things that come into your mind about your mother’).

But both these tests are superficial, focusing on outcomes rather than process. The American philosopher John Searle countered the Turing test with his ‘Chinese room’ argument: suppose someone who knows no Chinese is locked in a room with a large set of Chinese characters and a manual which shows how to match questions in Chinese with appropriate responses drawn from those characters. The room has a slot through which questions can be inserted and another through which the responses (as set out in the manual) are passed back. To Chinese speakers outside, the room will have passed the Turing test. However, since the person in the room does not know Chinese and is merely following the manual, no actual thinking is happening. Or, as the scientist Bret Weinstein has put it, AI may simulate meaning without knowing what it is saying. One study concluded that AI learns in much the same way as a pigeon; another found that humans learn in a quite different way.

The pursuit of AI self-consciousness increasingly resembles the medieval intellectuals’ pursuit of alchemy. Defining consciousness is even trickier than defining intelligence. If AI is ever deemed to be conscious, will it then have the free will necessary to be held accountable for its actions? Could AI be taken to court for behaviour that humans would consider criminal, and what would be the appropriate punishments? To pose the questions is to realise the absurdity of asking them.

AI is and will remain just a very clever pattern recognition and imitation tool. This is useful for analysing medical results and producing facsimiles of original artistic ideas. It’s reasonably good at passing exams (and will probably get better) and will replace many white-collar jobs which are substantially task repetition. It can also verify things we instinctively know but can’t crunch the data to prove, such as conservative women being happier and better looking. But it has yet to do anything convincing beyond this.

AI will make you doubt everything you believe. Deep fakes, simulations which can put any words, in any voice, into someone else’s mouth, already exist. AI can even write better malicious software if you ask it the right questions.

How to defeat it? As AI tries to fool us, so we learn its patterns and weaknesses. The smarty-pants technology can still (just about) be defeated by asking it to tick a box on a screen. A Go player used AI to find weaknesses in Go-playing programs and then exploited them to best the software. There’s also great fun to be had turning AI against its programming; online trolls have enjoyed making AI do things it is supposedly told not to do, such as praising Hitler. Just being old-fashioned may be a powerful weapon: the ability of AI to do exam coursework will lead to a reversion to pen-and-paper exams.

If you’ve not used an AI chatbot, I recommend it as the best way to dispel any fears of IT domination. ChatGPT 3.5 is free as long as you register (version 4 will cost you $20 a month). I asked it for the origins of Covid-19: ‘While the precise origin of SARS-CoV-2 remains unclear, the prevailing scientific consensus supports a natural origin involving animal-to-human transmission.’ AI has been caught making politically correct judgments, and here one can see its tendency to parrot establishment thinking; the explicit and implicit prejudices of the programmers are buried deep within the software and are often not difficult to find if you ask the right questions.

AI has also been caught lying. When I asked ChatGPT the Voight-Kampff question above, it gave me an answer (‘Kind. Loving. Supportive. Generous. Caring. Strong. Patient. Understanding. Warm. Inspiring.’) that was clearly dishonest as it doesn’t have a mother and it wasn’t smart enough to know this would be obvious to any of its users.

I asked it to write 500 words on the benefits of Covid vaccination in the style of a government press release and it duly obliged; it produced a good slogan (‘Stay Safe. Get Vaccinated. Protect Our Future.’), which one could imagine seeing on billboards. I then asked it to write about the dangers of Covid vaccination in the style of TCW and, surprisingly, it did it. What it produced wouldn’t get past our editor (the possibility of adverse reactions was ‘rare’), although it acknowledged mRNA technology was new and untried and that ‘the psychological pressure and fear-mongering associated with the pandemic and the vaccine rollout have taken a toll on mental health, exacerbating anxiety and distrust’.

What was interesting was the similarity between these two diametrically opposed articles. Both presented their cases as bullet-pointed paragraphs, like notes for a PowerPoint presentation. Sentences tended to be long. Neither quoted any individual, organisation or study to back up any point made. The writing style had the personality of an instruction manual, as bland as a Taylor Swift song, with nothing to provoke (like taking a cheap shot at Taylor Swift), amuse or inspire. Teachers worried about pupils using AI to do homework will quickly spot the monotonous similarity of such output.

‘The trouble with computers’, remarked Tom Baker’s Doctor Who, ‘is that they’re very sophisticated idiots.’ We should trust AI no more than we should trust ‘experts’. Thinking is hard, said Carl Jung, which is why people judge. Or they have someone else do the thinking for them. The greatest danger of AI to humanity is that we believe the hype or are too lazy to call it out.


Vlod Barchuk
Vlod Barchuk is a former accountant, former Tory councillor and current chairman of Ealing Central and Acton Conservative Party Association.
