On two occasions I have been asked, “Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?” … I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
Charles Babbage, Passages from the Life of a Philosopher
Once upon a time, there was a computer program called ELIZA. (By “once upon a time” I mean the mid-1960s, which is more or less equivalent to the Triassic period when it comes to computer software.) ELIZA was the first attempt at a chatbot: it tried to hold a conversation with the user based entirely on linguistic rules, with absolutely no attempt at a conceptual model of what the conversation was about. It was not, in any useful sense, artificial intelligence.
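To give a flavour of just how little was going on under the hood, here is a minimal sketch of the keyword-and-template idea in Python. It is nothing like Weizenbaum’s actual script (the rules and responses below are invented for illustration), but it is the same basic trick: match a surface pattern, fill in a canned reply.

```python
import random
import re

# A toy ELIZA-style rule set, invented for illustration: each rule is a
# regular expression plus some canned response templates.
RULES = [
    (re.compile(r"\bI am (.*)", re.I),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bI feel (.*)", re.I),
     ["Why do you feel {0}?"]),
    (re.compile(r"\b(mother|father|family)\b", re.I),
     ["Tell me more about your family."]),
]
FALLBACKS = ["Please go on.", "I see.", "Very interesting."]

def reply(text: str) -> str:
    """Return a canned response based purely on surface pattern matching."""
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

print(reply("I am worried about my job"))   # e.g. "Why do you say you are worried about my job?"
print(reply("Let's talk about my mother"))  # "Tell me more about your family."
```

Every “insightful” response is just a template with the user’s own words pasted in; the program has no idea what a mother, or a job, actually is.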
The author of the program, Joseph Weizenbaum, was, however, surprised to find that many users thought otherwise. They conversed with it as if it were a person. This human tendency to treat computers as if they were sentient is known as the ELIZA effect, and we all do it, even those of us who know better. Apart from anything else, everyone swears at them, at least occasionally.
Which brings us, of course, to the oracle du jour: ChatGPT. It is, ultimately, ELIZA on steroids. Where ELIZA relied on a pre-selected list of keywords, ChatGPT is “trained” on a pre-selected set of data – a much, much bigger body of data than ELIZA’s keyword list, but the same principle. It’s tweaked so that its answers to questions appear plausible, and they do, even when they turn out to be wrong.
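To make the “plausible rather than true” point concrete, here is a toy sketch of what a next-word predictor does. The lookup table and its probabilities are invented purely for illustration; the real thing involves billions of learned parameters rather than a hand-written dictionary, but the shape of the operation is the same: pick a continuation by how likely it looks, not by whether it is correct.

```python
import random

# A toy "language model": a lookup table of which word plausibly follows a
# given context. The entries and probabilities are invented for illustration.
NEXT_WORD = {
    ("the", "capital", "of", "australia", "is"): {
        "sydney": 0.6,     # sounds plausible, happens to be wrong
        "canberra": 0.4,   # correct, but rarer in casual writing
    },
}

def continue_text(prompt: str) -> str:
    """Pick a continuation weighted by how plausible it looks, not by truth."""
    context = tuple(prompt.lower().split()[-5:])
    candidates = NEXT_WORD.get(context, {"...": 1.0})
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights)[0]

print(continue_text("The capital of Australia is"))  # more often than not: "sydney"
```

Nothing in that function ever asks whether the answer is true; it only asks what tends to follow those words in the data it was given.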
Because the old adage still applies: garbage in, garbage out. You can get ChatGPT to say whatever you want it to say. It’s a computer program, after all. Computers will believe whatever you tell them; anyone developing software soon discovers that a lot of your effort will be going into trying to make the darn thing less gullible, so that it rejects claims that the user was born in 1742, for example.
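Anyone who has written that sort of defensive code will recognise the shape of it. A minimal sketch in Python, with the field name and the age limit made up for illustration:

```python
from datetime import date

# The kind of "don't be so gullible" check that ends up all over real
# software. The 130-year limit is an assumption chosen for illustration.
OLDEST_PLAUSIBLE_AGE = 130

def validate_birth_year(year: int) -> int:
    """Reject birth years that no living user could plausibly have."""
    this_year = date.today().year
    if year < this_year - OLDEST_PLAUSIBLE_AGE or year > this_year:
        raise ValueError(f"Implausible birth year: {year}")
    return year

validate_birth_year(1985)      # accepted
try:
    validate_birth_year(1742)  # the claim from the text above
except ValueError as err:
    print(err)                 # "Implausible birth year: 1742"
```

Multiply that by every field on every form and you have a fair chunk of what the effort of making software less gullible actually looks like.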
Now it is true that a lot of jobs currently done by human beings largely consist of regurgitating information into a more or less plausible summary, and in principle those jobs could now be automated away. I doubt whether many of the people currently doing those jobs will be deprived of much in the way of job satisfaction, but they will certainly miss the income. That could lead to some interesting political outcomes.
But of course in practice doing that would consume computing resources – and physical resources – on an even more epic scale than we get through them already. Most people assume that the Internet just happens. They aren’t aware of the massive server farms that make the whole thing go. Some of these places rely on a daily lorry-load of hard drives to replace the ones that have just failed – and hard drives these days don’t fail that often. That lorry-load of hard drives didn’t just happen either: that’s a lot of mining and refining and fabricating right there.
Also there is the little matter of electricity. We are supposedly replacing the world’s fleet of petrol- and diesel-powered vehicles with electrically-powered versions (which for many use cases, such as tractors or the aforementioned lorries, don’t actually exist yet or aren’t ready for prime time), and at the same time we are supposedly generating our electricity without resort to fossil fuels. There is no way in Hades we are going to be building out all those extra data-centres on top of that.
By the way, the mining and refining and fabricating I mentioned earlier is a lot of the same mining and refining and fabricating that will be needed to make all those new EVs. Those things require scarce and expensive resources. It will be interesting to see how long we will be able to sustain an economy built on unobtainium.
Still, if we collectively decided that rendering large numbers of middle-class people unemployed was a more important goal than, say, reducing the impact of climate change, I suppose we could technically do it. Obviously that would be a monumentally stupid decision, but it wouldn’t be the first one and I don’t suppose it would be the last. What then?
Well, from at least the French Revolution onwards – and a case could be made for the English Civil Wars – it seems that the most certain way to cause a revolution is to have plenty of disaffected middle-class people: people like Robespierre, who was a small-town lawyer, or Lenin, the son of an inspector of schools. The really poor and oppressed people are usually too busy just trying to get by; those with an education but no prospect of getting advantage from it are more likely to cause serious trouble.
At least some countries in the industrialised world are on this path already. It’s not hard to find plenty of accounts of how the middle classes are being squeezed, such as this report from the OECD, not generally thought of as a nest of lefty firebrands, or this article in notorious Marxist rag Forbes magazine. Still, from a purely selfish point of view, I’d rather all this didn’t kick off in my lifetime, but history happens when it happens.
To sum up, then, ChatGPT has the technical potential to automatically generate much of the bullshit that is at present crafted by hand. If it does so on a sufficiently large scale, inaccurate Internet search results will be the least of our worries. I am sure it will be tried. I am less sure that it will succeed, if you can call it success.
The one useful thing that it might achieve is to encourage some fraction of the bullshit-generating classes to do something more constructive with their time. But of course they don’t need a computer to do that.
Comments are welcome, but I do pre-moderate them to make sure they comply with the house rules.