
AI - Who Cares?

AKA Thoughts on Mid-faith Crisis 335: A senior pair likely in their golden years

We started off this week with Joe reading out to me a Meta AI-generated description of a selfie he had taken of us together on a walk in the woods. Apparently AI suggested clicking a button to ‘Find out more about this couple’s relationship.’ Various responses were generated – the results are not consistent, apparently – but here’s one.

The couple in the photo appears to be a senior pair, likely in their golden years, enjoying quality time together. Given the context of the post, which mentions “a lovely weekend together” and “lots to talk about,” it seems they’re prioritising their relationship and making the most of their time together. The fact that they’re taking a selfie in a serene forest setting suggests they value shared experiences and capturing memories. Their relationship seems built on love, companionship, and a deep connection.

I suppose there’s nothing much wrong with the description. Although I take exception to the ‘senior pair’ and ‘golden years’ labels. But the big question I have is, why? Why would anyone press this button? Why would anyone trust the results? It’s not as if Meta AI knows us. As far as I’m aware, no robotic pal was on the walk with us.

It’s not even as if it’s a detailed description of the photo – so no good for the visually impaired. Instead it’s some kind of interpretation, based not on any real insight, but purely on a statistical comparison.

Here we get to one of the things that make me angry about AI: it’s unnecessary. Not in all cases, obviously. But in most of the cases I’ve seen, it does nothing that a human couldn’t do better, if they bothered to put their time and attention to it. If the reader actually bothered to think about this picture, they could gain their own insights. All this is really doing is adding in AI for no purpose whatsoever. It is using up energy, computing power, bandwidth, just because it could. Not because it was necessary or helpful.

If that’s the case, then, why does this button even exist? Just as some kind of demonstration. As a display. All this button is actually doing is showing off.

In this, AI is no different from your normal, trolling, common-or-garden bullshitter. Most of those aren’t even interested in the debate or the topic. They just want some kind of reaction. They are clinically unable to resist the chance to show off their knowledge, even if they have no actual knowledge to show.

The danger in both cases, though, is that people believe them. They might actually think it is real. And, unlike humans – well, some humans at least – AI can’t tell whether something is real or not. Its BS-detector has not actually been invented.

And the second thing that makes me angry is that people might think this kind of BS is real. They might actually trust it. Last week I posted a link to a great piece on the danger generative AI poses to the discipline of history. The button on Facebook doesn’t care whether it’s right or wrong. It’s not interested in that. It’s merely making some statistical guesses based on the kind of image it is being asked to process.

Now historians make those kind of guesses all the time. But they are only ever, at best, a starting point. After that you need evidence.

When it comes to generating content, though, AI doesn’t care about accuracy. It doesn’t care about the truth, because the truth is not a concept that a computer algorithm can ever understand. That makes AI easily manipulable. You can ask it to do anything.

In the end AI is not a technology, it’s an ideology. It’s an ideology of technological progress without ethical considerations. An ideology of computerised laziness. It’s so excited that it can do these things, it stops wondering whether they were worth doing in the first place.

Why do I get so angry? Because in my work I care about accuracy. It’s important that I get things right. As a writer I want to find the right word or phrase, something different, something unique. As a historian I want to get my facts right. I’m free to speculate, but only if it is based on real, actual events and data. I’m not free to invent.

AI has no such limitations. Here’s a passage from that piece:

I have encountered it already: newspaper articles by Martin Luther King written after his assassination, apparently published in real newspapers from 1968; entirely manufactured and utterly inaccurate quotes from my own books, confidently narrated back to me as essay evidence. hitchcockian.medium.com/this-is-t…

There’s an even bigger problem here. Because LLMs draw on a corpus of material on the web, these made-up sources will become part of that corpus. They will become sources themselves. Put enough of them out there and those of us who care about the facts will find ourselves facing an endless onslaught of AI-generated BS.

Finding actual sources is hard enough in history as it is. I have written many books in which I discover that the ‘source’ of some information is simply one historian repeating another. But that is nothing compared to the tsunami of fake facts which may well bury us all.

As I said, AI is an ideology. And at the root of that ideology is a single phrase: ‘Who Cares?’ We’re in an era of ‘Who cares.’ Who cares if the President lies the whole time? Who cares if the planet burns? Who cares if data centres suck up enough energy for whole cities? Who cares? As long as I don’t have to type anything in, or spend time doing actual research or, Heaven forbid, think for myself. Who cares as long as I get my Ice-cold Frappuccino and I can pass a moment of my meaningless time watching a cat do something funny on TikTok.

Who cares if something is actually true or not?

Listen or subscribe here