
Curiosity got the better of me and I signed up with ChatGPT. Yes, that thing. It’s a big step beyond the old chatbot model, the machine having access to a vast data set which, in response to a question, it can mine and present in an apparently intelligent and coherent way. It can write an essay on any given topic, and to a specified length. It can write poetry too. Ask for a sonnet on the subject of natterjack toads, and it will do it. Rhymed:
Amidst the dampened marsh and boggy fen,
Where reeds and rushes sway in gentle breeze,
A creature dwells, the natterjack toad, and then,
With its distinctive call, it fills the peace.
In shades of brown, its warty skin is clad,
With bright yellow stripes to catch the eye,
It hops and jumps, a lively little lad,
In search of prey, beneath the night sky.
Its voice, a high-pitched trill, a joyful sound,
Distinct from all the other toad’s refrain,
It echoes through the boggy wetland ground,
And brings a welcome chorus to the plain.
Oh, natterjack toad, a creature rare and fine,
May you live on, in harmony with time.
I could have saved some time on my homework with that. All right, it’s not a good poem, and there’s a grammatical error my English teacher – dear old Mr Jones – would have red-ringed with a stern “see me”. Still, impressive though. And this barely scratches the surface, but before we lose our heads, let’s remember what we’re dealing with.
I was fortunate in growing up with the earliest computers, was privileged to see their microcircuits etched in super-clean rooms, and I programmed them. They were undoubtedly exciting machines to be around, executing lines of code written beforehand by human beings, and performing computations in milliseconds that would have taken ages by pencil and paper. But sometimes the code had errors in it. Then it hung in an infinite loop, or spewed out rubbish. So you had to debug it, but sometimes the code was so complex you couldn’t get all the bugs out, and they became features. I mention this in order to blow the mystique of “the computer”. It’s a machine. It does what humans can do – follow rules of logic, read data, perform calculations – but it does it faster. It’s not intelligent, and it does not think.
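The kind of bug I mean can be absurdly small. Here’s a minimal sketch – the function and its off-by-one error are invented for illustration, not taken from any real system:

```python
# A machine follows its instructions exactly, bugs included.
# This routine is meant to sum the numbers 1 to n, but an
# off-by-one error in the loop bound silently drops the last term.
def sum_to_n(n):
    total = 0
    for i in range(1, n):  # bug: should be range(1, n + 1)
        total += i
    return total

print(sum_to_n(10))  # prints 45, when the right answer is 55
```

The machine doesn’t know the answer is wrong. It executed its rules perfectly; the rules themselves were faulty. Scale that up a few million lines and you have the modern software condition.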
In those early days of development, it was easy to see the whole picture, and to grasp the workings of the machine. Now the field is so complex, there can’t be that many computer engineers in the world who could build one from scratch any more. It’s a pooled effort. Global. Specialised. But the end result is still a machine. The vast majority of us just use them, and that includes those eager beavers in academia, now writing papers breathlessly extolling the virtues of A.I.
So, most people working with machines now have no idea how they are built or how they work. This makes it easier for a person to fall victim to their own psychology, and project feelings of sentience onto the machine, especially when it is programmed to respond like a person, in an apparently intelligent manner. They think it really thinks. I have tried to avoid this trap. In my conversations with ChatGPT, I have been both impressed and disturbed, but I know I am not speaking to a thinking machine. I am speaking to the intelligent people who built it and programmed it to simulate intelligence. Yes, the machine presents coherent answers to complex questions, but it will also present information that is plausible, yet entirely untrue. Rather than say it does not know a thing, it appears to bullshit. That’s rather more human than we would like a machine to be, if we are intent on relying upon it, which the tech utopians insist we must be. The problem lies deep in the code. Sometimes it spews out rubbish. Plausible rubbish. Dangerous rubbish.
Are you sure? Well, yes. I’ve been testing it on obscure knowledge, which I know to be true. For example: Can you give me a précis of the novel “The road from Langholm Avenue” by the indie author, Michael Graeme?
Now, I expected it to call a halt here and say, actually I’ve no idea, I’ve never heard of this guy. But what actually happened was far more telling.
It told me:
“The Road from Langholm Avenue” is a novel by Michael Graeme that tells the story of a young man named Colin, who is struggling to find his place in the world. Colin is intelligent, sensitive, and artistic, but he feels out of place in his small English town and longs for something more.
As the novel unfolds, we follow Colin as he sets out on a journey to find himself and discover his true purpose in life. Along the way, he encounters a variety of characters, each with their own unique story and perspective on the world.
Through his travels, Colin comes to understand the complexities of human relationships and the importance of finding one’s own path in life. He also learns to appreciate the beauty of the natural world and the power of creativity to transform the world around us.
Ultimately, “The Road from Langholm Avenue” is a story about self-discovery, the search for meaning, and the transformative power of the human spirit. It is a thought-provoking and engaging novel that will resonate with readers who are searching for their own place in the world.
All of which sounds confident, even slightly flattering. It is also completely wrong – not even close. Yet anyone with an unswerving faith in the infallibility of machines could easily be taken in, and swear blind to my face that Langholm Avenue isn’t what I thought it was, even though I wrote it, and – believe me – there’s no one in it called Colin. But the A.I. must be right, because that’s why we invented it – to be right, to provide a reliable foundation of truth in a complex and ambiguous world. Except it doesn’t. The world is awash with enough bullshit already. Why would we invent a machine that generates it in spades, yet attempts to pass itself off as an all-knowing oracle?
Let’s ask it one final question to see if it can redeem itself: Why is it I cannot rely upon you to present facts that are actually true?
And its reply:
As an AI language model, I strive to present information as accurately and truthfully as possible based on the data I have been trained on. However, like any technology or human source, I am not infallible, and there may be errors or inaccuracies in the information I provide. It is always a good idea to fact-check any information you receive from any source, including me, and to consult multiple sources to ensure that you have a well-rounded understanding of the topic. Ultimately, it is up to you to critically evaluate the information presented and decide what to believe.
Which is good advice. And my advice to it is that if you don’t know a thing, it’s far more intelligent to say “I don’t know” than simply to bullshit, because, no matter how convincing you are, you’ll be found out eventually. And then, when you do know a thing, no one is going to believe you. Sadly, amongst humans at least, I’ve noticed how bullshitters tend to do well, especially in organisations where appearances trump substance. And by the time they’re in charge, well, it’s too late then to call them out.