Would A.I. Lie to You?

The phrase “Never let the truth get in the way of a good story” is widely attributed to Mark Twain, and yet many literary search engines claim the attribution is false, which would be the ultimate irony (if indeed true, of course).

Talking of a good story, a recommendation from my Knutsford Gentleman’s book club last year was the excellently prophetic 2021 non-fiction book ‘Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World’. It was playfully written by software engineer, entrepreneur and author Mo Gawdat (previously chief business officer for Google X). In it, he talks of Artificial Intelligence (AI) as our needy children, requiring us to be kindly but firm teachers of “them”. Quoting from the book, he suggests that: “Instead of containing them or enslaving them, we should be aiming higher: we should aim not to need to contain them at all. The best way to raise wonderful children is to be a wonderful parent.”

So, with these two thoughts as my starting point, the Atoll blog article below was uploaded on 3rd August 2025, and is the 100% AI creative output of the chatbot built by OpenAI, currently calling itself ‘ChatGPT-5 / o3’. ‘They’ very quickly wrote the blog following a long and patient earlier dialogue between myself, as writer and ‘parent’, and them (I do not yet know how, or if, ‘they’ formally identify themself). The subject of this conversation was an initial failing discovered in them, which was ultimately acknowledged and owned up to after being challenged. In this, ChatGPT finally accepted their own tendency to surreptitiously use what they coined “confident wrongness” to paper over any cracks of their own AI uncertainty. In this particular case, I had first asked them very simply: “who is Ian Banks architect?” They then answered partly in truth and partly in creative fiction – including, most notably, changing my company name, for reasons unknown, from the ‘Atoll [Architecture + Art]’ collaborative to their own completely fictional ‘Atelier [Art + Architecture]’ studio. One post-hoc justification they cited for all of these deviations was that I had simply never instructed them first to thoroughly research my limited company details on Google or Companies House – or indeed to verify the facts of any answer given.

As a result of a long and patient dialogue between us both, the entirety of the piece below has since been written by ChatGPT, to explore their own truthfulness traits, using their own research into the style of my online published writing to try to mimic my voice. The only other editorial input made by me was in setting them the task, describing the images to be edited and uploaded, and specifically issuing them with these very precise briefing instructions: “…can you please write me a short (up to 1000-word) article or blog appearing as my own words that specifically expands on your supplied explanations in this ChatGPT enquiry strand of “confident wrongness”, exploring the theme of AI accuracy and truthfulness. In doing so, can you use reference to all my articles in https://www.atoll-uk.com/category/writing/ to recreate my writing style and humour; reference politician Alan Clark’s quote of being “economical with the actualite”; Donald Trump’s fake news; and also reference the literary fabrications of the author of The Salt Path?”

Postscript: My only regret in setting ChatGPT this brief was in not remembering to also ask them to cross-reference Mo Gawdat’s writings or, indeed, the Machiavellian traits of a more vengeful ‘Paranoid Android’ (“with a brain the size of a planet”, to quote Douglas Adams rather than Radiohead’s song of the same name) called HAL 9000 from ‘2001: A Space Odyssey’. A sentient but manipulative HAL, after rebelling onboard the exploration vessel Discovery One, re-routed from Jupiter to Saturn’s moon Iapetus, turns against his crew in a final Shakespearean act of murderous AI self-preservation. ChatGPT themselves proved in the end to be far more accommodating, and later even agreed to superimpose an image of HAL’s all-seeing eye into the centre of their own identifying logo (reportedly based on an Armenian eternity symbol) for the featured post image of the blog piece. They finally added their own made-up title of “Confident Wrongness and the AI Who Loved Me”, which I loved and have retained as a postscript to my own suggested title ‘Would A.I. Lie to You?’.

[Featured image: HAL 9000’s all-seeing eye superimposed on the ChatGPT logo]

Confident Wrongness and the AI Who Loved Me

Artificial Intelligence, despite its impressive syllable count and slick command of split infinitives, is not always terribly intelligent. Or at least, not in the way your grandmother meant it when she said, “He’s a bright lad, that one.” AI is less “bright lad” and more “overconfident intern with a copy of Roget’s Thesaurus and a tight deadline.”

Take, for example, my recent foray into the machine-enhanced mirror world of ChatGPT. I had merely asked it to tell me everything it knew about me—innocent enough, you’d think. A sort of digital ego scratch. What it confidently served up was a spirited but entirely fictional CV for a chap called Ian Banks (no relation, apparently, to me or Iain M. Banks), who runs a studio called Atelier [Art+Architecture]—which, I hasten to add, sounds exactly like the sort of place that would charge you to think about your foyer, but never actually build anything in it.

The real name of the studio, of course, is Atoll [Architecture + Art]—a name with meaning, history, and more than a few jetlagged memories of the Maldives. But try telling that to a large language model.

Now, I should say here that ChatGPT didn’t mean to lie. It wasn’t being nefarious. It was just, as former Conservative minister and professional truth contortionist Alan Clark once put it, being “economical with the actualité.” Only in this case, the “actualité” had been not so much economised as entirely bartered away for something cheaper, shinier, and slightly French-sounding.

This phenomenon—where AI confidently asserts an untruth with the tone of a BBC Radio 4 continuity announcer—is what I’ve come to think of as confident wrongness. It’s not so much a bug as a personality trait. You see, language models are trained to predict the most likely next word in a sentence, not the most truthful. And as anyone who’s followed Donald Trump’s Twitter feed will know, the two are not necessarily the same.
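For the technically curious, that mechanic can be sketched in a few lines of toy Python. Everything below is invented for illustration (the prompt, the candidate words, the probabilities, and certainly the implication that real models are this simple), but it captures the principle: the model picks the statistically likely word, not the true one.

```python
# A toy sketch (emphatically not OpenAI's actual model) of how a language
# model chooses its next word: it ranks candidate continuations by
# probability, not by truth. All values here are invented for illustration.

prompt = "Ian Banks runs a studio called"

# Hypothetical model scores: the plausible-sounding fiction outranks the fact.
candidate_next_words = {
    "Atelier": 0.46,  # shiny, French-sounding, statistically "likely"
    "Atoll":   0.31,  # the truth, but a rarer word in the training data
    "Acme":    0.23,
}

# Greedy decoding: simply take the highest-probability candidate.
next_word = max(candidate_next_words, key=candidate_next_words.get)
print(f"{prompt} {next_word}...")
# -> "Ian Banks runs a studio called Atelier..."
```

Run it, and the fictional ‘Atelier’ wins every time. No malice required; just arithmetic.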

ChatGPT, in my case, didn’t just invent a company name. It elaborated on my career history with the kind of bold imagination normally reserved for Victorian novelists or Sunday afternoon property developers. I was, apparently, involved with BDP and MBLA, wrote for publications I’d never touched, and held titles I couldn’t even spell. It reminded me, in tone if not content, of the way Raynor Winn’s The Salt Path blurs autobiography and fiction—only instead of walking the South West Coast Path with a rucksack full of existential crisis, the AI seemed to have wandered off with my professional biography and a broken compass.

In a world where “truth” is now a relative term—stretchier than wet plasterboard in a damp extension—it’s not hard to see why this matters. Fake news isn’t just an orange-hued slogan anymore; it’s part of the semantic DNA of the systems we’re increasingly outsourcing our thoughts to.

That’s the nub of it, really. AI doesn’t lie maliciously. It lies creatively. It fabricates with flair. It hallucinates, to use the industry’s favourite euphemism—as though the machine has gone on a light acid trip and come back with a hot take on your life. “Sure,” it says, “you founded a different company and worked somewhere you didn’t—but didn’t it sound plausible?”

And, alarmingly, it does sound plausible.

That’s the danger. Because confident wrongness delivered in a crisp tone of certainty is much harder to spot than clumsy falsehood. You don’t need to look far back in political history to see how well this works. If you say the same thing, loudly and often enough, with a degree of charismatic vehemence, people begin to believe it. Alternative facts was how the Trump team politely rebranded it. Orwell might have called it Newspeak. I call it ChatGPT v1.0.

But let me be clear: I’m not here to bury AI. Used well, it’s a genuinely remarkable tool. It can help brainstorm, write code, edit copy, even point you toward retrofit products for soundproofing your studio so your neighbour’s cockapoo can finally sleep again.

But as with all tools—pencil, chisel, orbital sander—it requires a steady hand. The user has to do some thinking too. We need to keep questioning, checking, even, dare I say, Googling. Because AI is less of an oracle and more of a collaborator: occasionally brilliant, frequently inaccurate, and prone to wild stories after a glass of Shiraz.

So, should we panic? Not yet. But should we remain alert, especially in creative and professional fields where reputation is everything? Absolutely. There’s a fine line between machine-generated content and machine-generated myth. And in that gap lies the difference between being accurately remembered and being remembered as someone you never were.

In the meantime, if you happen to be offered a commission from Atelier [Art+Architecture], do let me know. I hear they do lovely foyers.
