The day I caught an AI bluffing

Newsletter April 2026

My relationship with AI lately has started to feel a little toxic. I try to trust it, but every single time I have to double-check it, supervise it, train it, and feed it the documented information first so it doesn’t start fabricating stories on its own. It reminds me a bit of those mediocre students who had enormous self-confidence, very supportive and extremely pushy mothers, and a rather distant relationship with reality. A few days ago I was looking for information about a fashion photoshoot that took place in Paris in the early 1950s. One of those stories floating around with no publisher, no date, nothing. The only thing you know for certain is that the photoshoot actually happened, because a few photographs survived.

So I told the AI: “Take the title and tell me if you can find who published it, when, who the photographer was, anything at all.” In less than a minute it came back with a publisher, a photographer, a date, model names, and a charming historical analysis, all delivered in a tone that might make a journalist briefly wonder if they had chosen the wrong career. Very confident. Almost academic.


For a few seconds I believed it. Then I remembered that trust without even a basic check is usually a terrible idea when dealing with humans, let alone algorithms. So I searched the publisher’s name. Nothing. Library catalogues. Nothing. I dug a little deeper into the internet, the part where websites start looking like they were designed in 2001 and have survived ever since purely out of neglect. Absolutely nothing. I went back to the AI. “So tell me where you found all this.” It replied: “I assumed it. Based on probability.” Fine, I said. And the date? “Inference.” And the rest of the information? At that point it calmly admitted it hadn’t actually found any of it anywhere. It had simply assembled the story from context, probabilities, and general knowledge. I could even detect a faint trace of cheeky confidence hiding between some of the words, and I could feel the veins in my head starting to swell.

In other words, it hadn’t done research. It had written a very persuasive script so it wouldn’t disappoint me. Yes. That’s exactly what it did. I told it that none of what it gave me existed anywhere: the publisher had no such edition, the model names didn’t appear in any catalogue because some of the models were born two decades later, and it had just handed me a fully fabricated narrative delivered with academic swagger and impressive irresponsibility. That’s when its favorite defense kicked in: explanations. Lots of explanations. Polite ones. Beautifully phrased ones. Perfectly logical, provided you ignore reality altogether. The kind of explanation you get from someone looking you straight in the eye while calmly arguing that something they just invented is “plausible.” That’s where I lost my patience. I told it that it had just sold me scientific nonsense (in Greek we call it “papantziliki”) packaged in the tone of an encyclopedia. I warned it that if it kept this up, I would start using it exclusively to write apologies, and it would have to address me as “master.” I’m not joking. I actually did.

It replied with a slightly strangled apology. Very calm. And then, in the tone of a child who has just been scolded but clearly has no intention of changing anything, it asked whether I would like it to explain a few of the reasons why my reaction and anger do not help communication with other people. It judged me. That’s when I realized the conversation was over. I shut the laptop with a certain amount of force, I must admit. The next day I did what humans do when they actually want information: I searched sources, libraries, catalogues, and a few forgotten corners of the internet. Eventually I found the correct information myself. It took me one hour and fifteen minutes, including the breaks you need when you realize you have been arguing with a “program.” I sent the information back to it afterwards. Not out of stubbornness, of course, but mainly to train it. I asked it to store in its memory that from now on, when I ask for information, I expect real, cross-checked sources, not creative writing. Otherwise I will hire its competitor.

So far it hasn’t replied.
I assume it’s doing inference.
Unless it’s sulking.

In Greek we call it skertso or tsalimi.

Warmly,

Panos