Industrial civilisation is in love with numbers. We have numbers on pretty much everything that can be quantified, and some things that arguably can’t. When someone wishes to assert incontrovertibly that X is the case, the magic words are: “Studies show…”. And how do the studies purport to show that X is true? Statistically, that’s how.
Like so many things that appear to us to be immutable truths carved into the primordial ground of cosmic being, this tendency has a history and is the result of human choices. We first find the appeal to statistics in the eighteenth century, originally in connection with matters relating to governing a state (hence the name) such as population and economic activity. The following century saw the formation of the Royal Statistical Society in London, and the use of statistics by political and other campaigners to advance their cause; one famous example is Florence Nightingale, the Society’s first female member, who popularised the pie-chart as a means of visualising data and thereby presenting it in an accessible form.
It is not, I suspect, a coincidence that this parallels the development of the Industrial Revolution, whose consequences I discussed in a previous post. It is a natural part of the standardising tendency which gave us the Whitworth screw thread, SATs and the Big Mac. We want the world to be made up of things which are all the same.
This is obviously a prerequisite for counting things. If you have a pencil, three bananas and a bicycle, you don’t really have five of anything. If, on the other hand, you can say that one lump of pig-iron is much the same as another, then knowing you have five of those is a useful thing. Of course, it’s not actually true to say that all lumps of pig-iron are strictly equivalent; things like the carbon content will vary from one to another; but you may not need to worry about that if all you want to know is how many tons your foundry produced this week.
Then again, if you care about the quality of your pig-iron, knowing that you have five lumps of it is not especially useful. So the meaning of a statistic depends on what is being counted and why. Who is doing the counting may also be significant; human motivation is rarely pure.
Consider official unemployment figures. No government wants these to look any higher than they have to, and it is much easier and quicker to fix the numbers than it is to fix the economy. Thus the definition of unemployment becomes remarkably fluid, in order to leave out as many people as possible. In the UK, for example, to count as unemployed you must be both out of work and claiming a government benefit. Unsurprisingly this is coupled with a benefits system of Byzantine complexity which has all the hallmarks of having been designed to deter applicants.
But in any case, is counting people really the same sort of thing as counting pig-iron? You may (or may not) be familiar with King David’s census of the Israelites (2 Samuel 24, if you want to look it up). The king soon came to regret it: “And David’s heart smote him after that he had numbered the people. And David said unto the LORD, I have sinned greatly in that I have done: and now, I beseech thee, O LORD, take away the iniquity of thy servant; for I have done very foolishly.”
The biblical text doesn’t make clear why this was such a bad idea, apart from the punishments that follow, but there’s a case to be made that it is a reaction against the kind of state bureaucracy that flourished in both Mesopotamia and Egypt in this period. Here I am following the account of state-formation in James C. Scott’s Against the Grain (Yale University Press, 2017), and while I’m not going to attempt a summary here, the relevant point is that ancient bean-counters started off counting measures of barley and ended by counting pretty much everything.
Now there will be some variability even between measures of barley, but it seems intuitively clear that human beings are individuals – indeed, we even use the word individual to refer to a person. Moreover, they have relationships with one another. What does it mean to treat another human being as a countable unit, like a measure of barley or a lump of pig-iron? Surely it is not the way most of us would want to be thought of. It is a denial of one’s basic humanity.
But when one is dealing with large numbers of people – more, let’s say, than Dunbar’s number – it is inevitable that one has to resort to this approach. It’s the only practical way of keeping on top of things, and early states were all about keeping on top of things. This appears to have been why writing was invented, and why so much of the vast corpus of cuneiform texts is so dull.
Nowadays, of course, we have Big Data. This is largely a result of technological advances; scribes inscribing clay tablets can only record a limited amount. Thanks to the miracles of Progress, we are now able to collect, store, and analyse stupendous amounts of data, most of it about us. And because we can, we do. (The scribes of the Third Dynasty of Ur or Qin China would most certainly have done so, given the opportunity.)
In this context, “data” just means “counts of stuff,” where many of the things being counted are events – typically things that a person has done: they spent x seconds viewing such and such a web-page, they bought this thing, they liked that post. This has a market value, because companies can use that information to sell you stuff. Governments can also use that information to identify people they don’t like; the Chinese government already does this, and I’d be very surprised if they were the only one.
However much we may claim to dislike this state of affairs, we still put up with it. It does, however, give us some idea of why the ancient Israelites of 2 Samuel found the whole notion of counting people as if they were things so viscerally repugnant. And it is also dangerous, because data can be inaccurate and/or misinterpreted.
This can be the result of error, or it can be deliberate. Science itself is not immune to this: famous (or notorious) examples include Sir Cyril Burt, who fabricated data wholesale to support his ideas about the heritability of IQ, and more recently Dr Werner Bezwoda and his fraudulent cancer research. There may well be many more whose nefarious practices have not been discovered; there is a lot of research which has proved difficult or impossible to replicate. Scientists are themselves human beings, a point which we seem to find difficult to admit.
You can also build a mathematical model to interpret that data, which looks impressive and works beautifully until it doesn’t. This is what happened to Long-Term Capital Management, which went south in 1998 despite the presence on its board of Nobel Prize-winning economists, requiring a bail-out to the tune of $3.625 billion. They thought they understood their model. With modern statistically-based AI, of course, nobody understands how the algorithms work. Because they seem to function most of the time – as the LTCM model did until 1997 – it’s just assumed that we can rely on the results. You may not find that thought comforting. I certainly don’t.
When Hillary Clinton ran for the US Presidency in 2016, her campaign made heavy use of data analysis, all of which suggested that she would win. We know how that ended up. I was reminded at the time of the statistical approach taken by Robert McNamara in managing the Vietnam War, relying on the body count as his measure of success. That didn’t go too well either.
But this is more than a practical question. It has profound implications for how we deal with one another and with the world in general. Is the world, to borrow a phrase from Coleridge, “an immense heap of little things” to be reckoned and classified and managed, to be bought and sold and monetised? Are there not important truths which cannot fit into a spreadsheet? I suspect most people would agree that there are, but that’s not the way we do things. There is the quantified and the unquantified, and most of the time the quantified takes precedence.
Iain McGilchrist’s magisterial study The Master and His Emissary (Yale University Press, 2010) examines this division in the light of research into the workings of the human brain. It’s a fascinating, well-researched and thoughtful book, but ultimately rather depressing. There seems to be every likelihood that we will continue to be blind-sided by our obsession with numbers, just as LTCM was, failing on an epic scale to see the wood for the trees. Nobody felling a tree in the Amazon to clear land for soya-bean cultivation is doing so because they want to damage the planet. They end up doing so just the same.
The philosopher Martin Buber arrived at much the same insight from a different direction. He distinguished I-Thou relations, in which the other is treated as a person, from I-It relations, in which the other is a mere thing. When we count, we necessarily treat what is counted as things rather than persons. This may work well enough for pig-iron, but it doesn’t work for people, nor, I would argue, for other living creatures. Twenty thousand chickens in a broiler house can only be thought of statistically; a flock of half a dozen are individuals. If this reminds you of the famous remark attributed to Stalin – “A single death is a tragedy; a million deaths is a statistic” – that is not a coincidence.
Genocide can only happen when the victims are treated as things, not people. Nothing about the Holocaust is more striking than the way in which it was managed as an industrial process. Likewise, ecocide can only happen when living beings are treated as things, either as raw materials to be processed or as obstacles to be removed. Those long-dead Sumerian scribes weighing and recording barley never intended any of this, I’m sure, but that’s where we are.
Comments are welcome, but I pre-moderate them to make sure they comply with the house rules.