A funny thing happens when you talk to an AI. It doesn’t matter how sophisticated the system is—ChatGPT, Claude, Gemini, whatever—the same uncanny feeling returns. It sounds smart. Sometimes astonishingly so. It can mimic a scientist, a seventh grader, a therapist, or a pirate. It tells jokes, follows instructions, and writes sonnets. But push it even slightly off script, and the illusion slips. It forgets what you just said. It invents books and authors and confidently tells you they exist. It creates a quote, puts it in quotation marks, and attaches a source that never published it. The voice is persuasive, the content fabricated. The performance is faultless. The understanding is non-existent.

That’s because truth is not just a matter of correct information. It’s contextual, lived, and interpreted. When we say someone “understands” something, we don’t just mean they can repeat it or fit it into a logical framework. We mean they grasp its significance, its cause and effect, its emotional weight, and its place in the world. Truth, for humans, is layered: the factual (the earth orbits the sun), the experiential (grief feels like a fog), and the ethical (you shouldn’t lie even if it’s convenient). Our understanding of truth emerges from this interplay. It’s a child’s understanding of the difference between “technically true” and “really true.”
That’s why real comprehension depends on more than just data. We understand things because we encounter them, fail at them, test them, and live through them. You don’t understand love because you read about it; you know it because it breaks and rebuilds you. You don’t understand justice because you passed a test on political theory. You know it because you’ve been mistreated or watched someone else suffer while the system looked away. Human understanding is truth shaped by consequence.
This is what artificial intelligence lacks—not just cognition, but consequence. It has no stake in the truth. It doesn’t care if its words heal or harm. It doesn’t live in the world, and so it can’t know in the way we know. It doesn’t learn from error, only from correction. It doesn’t feel the moral gravity of getting something wrong. It can’t. And so while it can simulate knowledge, it can never fully inhabit the truth. That’s still our burden and our responsibility.
The mistake, I think, is our own. We want AI to be like us. We’ve built systems that write like us, talk like us, sound—on a good day—like the cleverest person in the room. And so we assume they must be learning like us, thinking like us, understanding like us. But they’re not. Not even close.
To understand what’s going on, you have to start with the thing’s guts. Large Language Models (LLMs), like the one I’m using to write this sentence, are trained to predict the next word in a sequence based on everything that came before. That’s it. They’re not reasoning from first principles. They’re not checking their facts. They’re not even “retrieving” information the way you or I would look something up. They are mathematical engines of probability—astonishingly powerful—but still just statistical guesswork refined to an art.
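If you want to see the shape of that idea, here is a toy sketch in Python. It is nothing like a real LLM (no neural network, no billions of parameters), just a bigram model that counts which word tends to follow which and samples the next word accordingly. But the objective is structurally the same one these systems are trained on: make the next word probable, not true.

```python
# Toy sketch of next-word prediction (not a real LLM): count which word
# followed which in the "training" text, then sample in proportion to
# those counts. Real models condition on the whole preceding context
# with learned parameters, but the goal is the same: a probable next word.
from collections import Counter, defaultdict
import random

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = follows[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a "sentence": plausible-looking sequence, no meaning attached.
word, out = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # e.g. "the cat sat on the rug . the dog"
```

Run it a few times and you get different, equally plausible, equally meaningless strings. Scale that trick up enormously, in both data and parameters, and the output starts to sound like us.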
And that word, “refined,” gets at the heart of the confusion. Because they’ve been trained on so much text—billions of pages of books, articles, code, Wikipedia entries—they have access to far more language than any human could. They’ve seen every writing style, every sentence structure, every phrase. That’s why they can talk so fluently. But they’re not fluent in meaning. They’re fluent in form. They’re trained not to understand but to approximate the patterns of understanding. If you ask them a question, they generate an answer that looks like an answer someone might give. It may be true. It may be false. That’s not the point. The point is whether it fits the pattern.
Which brings us to hallucination. In AI, a “hallucination” is when the system confidently generates falsehoods—lies, basically, without the awareness that they are lies. Ask ChatGPT who discovered the theory of dynamic canonicity in biblical theology, and it might tell you Harold Coward did, citing a fictional book he never wrote. Ask it for a scientific paper about surgery using churros, and it may describe one, complete with fake journal title, authors, and plausible-sounding results. The system doesn’t know these things are untrue. It doesn’t know anything. It is simply playing a game of next-word prediction, and the result is sometimes truth-like nonsense delivered in the key of confidence.
We’ve built the best bullshitter in human history.
Eric Holloway, a computer scientist who’s written extensively on the topic, describes how this process also breaks down in visual generation. When an AI is asked to draw a hand, it doesn’t start with the concept of “hand.” It doesn’t think about anatomy or the function of fingers. It doesn’t know how many fingers a human has. Instead, it assembles “finger-like” and “palm-like” segments and arranges them until the probabilities are satisfied. Often, the result is grotesque: extra fingers, fused joints, hands emerging from elbows. And yet the system believes—if it could believe—that it has succeeded.
The same goes for text. AI can write coherent paragraphs and even entire essays, but hallucinations and repetition become more common as the output stretches longer. That’s because the system isn’t holding onto a central argument or goal. It isn’t working toward a thesis. It’s just stringing plausible sentences together until it runs out of momentum or wanders off course. It has no “big picture” because it has no picture at all.
This is also why AI can’t “learn” in the way we do. Once a language model is trained, the learning stops. You can’t tell it something new and expect it to remember it next time. You can’t correct it and expect it to do better next time. That’s not how it works. The “P” in GPT stands for “pre-trained.” It means the model’s behavior is largely fixed. It can be fine-tuned with more training or plugged into an external memory system that acts as a workaround, but the model itself isn’t evolving through conversation. It doesn’t learn from prompts. It doesn’t grow from feedback. It doesn’t get smarter over time.
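To make that “external memory” workaround concrete, here is a minimal sketch of how such systems typically operate: the model’s weights never change, and the only thing that changes is the text you paste back into the prompt on every call. The call_model function below is a hypothetical stand-in, not any particular vendor’s API.

```python
# Minimal sketch of an "external memory" workaround. Nothing inside the
# model is updated; "memory" is just text prepended to each new prompt.
# `call_model` is a hypothetical placeholder for whatever LLM API you use.
from typing import List

def call_model(prompt: str) -> str:
    """Hypothetical LLM call; in practice this would hit an API endpoint."""
    return f"(model response to: {prompt[:60]}...)"

class ScratchpadMemory:
    """Stores facts as plain text and prepends them to every prompt."""

    def __init__(self) -> None:
        self.facts: List[str] = []

    def remember(self, fact: str) -> None:
        self.facts.append(fact)

    def ask(self, question: str) -> str:
        # The "memory" is just context stuffing; the model itself never changes.
        context = "\n".join(f"- {f}" for f in self.facts)
        prompt = f"Known facts:\n{context}\n\nQuestion: {question}"
        return call_model(prompt)

memory = ScratchpadMemory()
memory.remember("The user's dog is named Pixel.")
print(memory.ask("What is my dog's name?"))
```

The model hasn’t learned the dog’s name. It has simply been handed a note with the name on it, every single time it is asked.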
There are practical consequences to this. One is that people who use AI as a source of truth—especially in fields like law, medicine, or science—are playing with fire. Lawyers have already submitted court filings citing fake cases generated by ChatGPT. Researchers have published papers with fabricated citations. A Canadian airline was even ordered to pay damages after its chatbot invented a nonexistent refund policy and misled a customer. These are not glitches in a system trying its best. They are the logical outcomes of a system that never knows what it’s doing in the first place.
So what do we do with that knowledge?
First, we stop pretending that AI understands us or the world. We stop asking it to do things it was never built to do: judge, evaluate, and decide. It can help us write, summarize, translate, and structure ideas, but the thinking part, the truth-seeking part, is still up to us.
Second, we learn how to prompt better. Because AI doesn’t learn, we have to. That means crafting inputs carefully, giving them the proper scaffolding, checking every output, and verifying facts. If that sounds like work, it is. These systems aren’t magic; they’re tools. And powerful tools often need careful handling.
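What does “proper scaffolding” look like in practice? Here is one hedged example in Python: a prompt template that pins the model to a source text, asks it to say when something isn’t there, and leaves the final check to a human. The wording is illustrative, not a standard.

```python
# One illustrative way to scaffold a prompt: state the role, supply the
# source material, constrain the format, and invite the model to admit
# uncertainty. The exact phrasing here is an example, not a recommendation.
def build_prompt(task: str, source_text: str) -> str:
    return (
        "You are a careful assistant. Use ONLY the source text below.\n"
        "If the answer is not in the source, say 'not found in source'.\n\n"
        f"Source:\n{source_text}\n\n"
        f"Task: {task}\n"
        "Answer in three bullet points, quoting the source where possible."
    )

prompt = build_prompt(
    task="Summarize the main argument.",
    source_text="(paste the document you actually want summarized here)",
)
print(prompt)
# Whatever comes back still needs a human check against the source.
```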
And third, maybe most importantly, we get better at distinguishing fluency from understanding. Just because something sounds right doesn’t mean it is right. In fact, the more convincing the tone, the more skeptical we should be. The danger isn’t that AI will replace human thought. The danger is that we will forget how to think critically because the machine sounds like it already has.
AI doesn’t know what’s true. It only knows what sounds like truth. That might be good enough for parlor tricks and pitch decks. But when it comes to what really matters—justice, science, education, democracy—we’d better remember that sounding right and being right are not the same thing. And never have been.