Discussion about this post

Simon Baddeley

What you're doing is excellent; an interesting and useful interrogation of the 'stochastic parrot'. All I've done so far is to ensure the LLM replies with UK English spelling and does not, in its responses to me, mimic human conversation. Your exploration of questioning LLMs reminds me of Asimov's prescient 1950s sci-fi short story 'Jokester'. Be alert for an abrupt loss of the sense of humour. :) It also reminds me of a much older conundrum: when people in classical Greece travelled to question the oracle at Delphi (or its priestly doorkeepers), they didn't get straight answers; instead, gnomic aphorisms were delivered.

Lon Guyland

I wish you wouldn’t say that a computer or a computer program (that’s what an LLM is) can “understand” things.

They understand the subject matter in the same way your checkbook understands money or your spreadsheet understands math, which is to say not at all.

There is no “understanding” going on. The naïve may think it looks like it, but the clue is in the name: large language MODEL.

This anthropomorphization of computer programs is part of the marketing strategy behind these things: their makers want consumers to believe they are infallible oracles.

And I’m no Luddite: I use ChatGPT and Gemini at work daily, and they have saved me a lot of time. But they have also gone badly off-track. In those cases, were I not experienced and able to apply what little wisdom the Creator may have generously bestowed on me, the outcome would have been unhelpful.
