Welcome to Deep Reading. I’m Lance Cummings from Cyborgs Writing, and today we’re exploring a question that might sound simple, but the deeper you dig, the more complicated it gets.
What happens when AI gets “confused”?
I recently discovered a metric called semantic entropy.
Before your eyes glaze over at the word “entropy,” let me explain why it’s important.
Semantic entropy measures how much an AI’s responses vary in meaning when you ask it the same question multiple times.
High entropy means the model generates different meanings each attempt—it doesn’t have stable knowledge, so it improvises. Low entropy means consistent responses.
This is one reason why AI hallucinates.
For this podcast, I’m going to try to bring this concept down to earth and make it actionable through the eyes of ancient rhetoric.
From an ancient rhetoric perspective, high semantic entropy is like your AI model walking through a house with no rooms.
Let me explain.
For those reading, this is a transcript of the podcast, which can be listened to above or in your favorite podcast player.
Recent research into semantic entropy
A few months ago, a paper came out in Nature called “Detecting hallucinations in large language models using semantic entropy.”
They had developed and refined a way to measure when AI seems confused, even when it sounds confident.
Here’s how it works.
You ask an AI the same question multiple times. For example, “What are the installation steps?”
And you get back five different answers. Now, those answers might use different words, but do they mean the same thing?
If answer one says “First, power down the system” and answer two says “Begin by turning off power,” that’s low semantic entropy. Different words, same meaning. The AI is answering from stable knowledge.
But if answer one says “power down first” and answer three says “leave power on during installation,” then you’ve got high semantic entropy. The meanings contradict, and the AI is improvising. It probably isn’t building its answer on solid information.
This happens even when the AI seems confident. It’s not hedging with “maybe” or “possibly.” It’s just... making stuff up to fill the gap.
The researchers showed that semantic entropy can predict hallucinations with pretty good accuracy. When entropy is high, you’re about to get unreliable information.
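For those following along in the transcript, here’s a rough sketch of the idea in Python. This is not the paper’s implementation: the researchers cluster answers using bidirectional entailment judged by another language model, while the `means_same` function below is a stand-in you’d swap out, and `naive_means_same` is just a toy keyword check built around the power-down example above.

```python
import math
from typing import Callable, List

def semantic_entropy(answers: List[str],
                     means_same: Callable[[str, str], bool]) -> float:
    """Group sampled answers into meaning clusters, then compute
    Shannon entropy over the cluster proportions."""
    clusters: List[List[str]] = []
    for answer in answers:
        for cluster in clusters:
            if means_same(answer, cluster[0]):
                cluster.append(answer)
                break
        else:
            clusters.append([answer])  # a new meaning starts a new cluster

    total = len(answers)
    return -sum((len(c) / total) * math.log(len(c) / total) for c in clusters)

# Toy stand-in for a real entailment check (the paper uses an NLI-style model).
def naive_means_same(a: str, b: str) -> bool:
    return ("power on" in a.lower()) == ("power on" in b.lower())

answers = [
    "First, power down the system.",
    "Begin by turning off power.",
    "Leave power on during installation.",
]
print(semantic_entropy(answers, naive_means_same))  # higher = less stable meaning
```

If all the sampled answers land in one meaning cluster, the entropy is zero. The more the meanings scatter, the higher the number climbs.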
Why this matters
Now, you might be thinking, “Okay, that’s interesting from a computer science perspective. But I’m a writer, a professor, a content developer. What does this have to do with me?”
Everything.
Because while this semantic entropy research is newer, a broader principle has been established across other studies: how you structure source content directly affects AI performance.
Research on RAG systems, the retrieval-augmented generation technology most organizations use for AI-powered search and question-answering, shows that how you chunk source content can affect performance as much as, or more than, the choice of AI model itself.
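To make “chunking strategy” concrete, here’s one common approach, structure-aware chunking, sketched in Python. It assumes markdown-style headings mark the section boundaries; it’s an illustration of the principle, not what any particular study tested.

```python
from typing import Dict, List

def chunk_by_heading(document: str) -> List[Dict[str, str]]:
    """Split a document at heading lines, keeping each heading
    attached to its own section so retrieved chunks carry context."""
    chunks: List[Dict[str, str]] = []
    current = {"heading": "Untitled", "body": ""}
    for line in document.splitlines():
        if line.startswith("#"):  # markdown-style heading = section boundary
            if current["body"].strip():
                chunks.append(current)
            current = {"heading": line.lstrip("# "), "body": ""}
        else:
            current["body"] += line + "\n"
    if current["body"].strip():
        chunks.append(current)
    return chunks

doc = "# Installation\nPower down the system first.\n\n# Configuration\nSet the input voltage before restoring power.\n"
for chunk in chunk_by_heading(doc):
    print(chunk["heading"], "->", chunk["body"].strip())
```

Compare that with slicing the same document every 500 characters: a heading and its steps can end up in different chunks, and the model has to guess how they relate.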
Think about what causes high entropy. The AI generates variable meanings because it doesn’t have stable grounding in what the source material actually says. In a way, it’s uncertain or guessing.
And what causes that “uncertainty”? The research suggests it’s often the source material. When documents are poorly organized, the AI does what a confused human reader would do. It fills gaps. Makes assumptions. Creates different interpretations.
The semantic entropy metric gives us a way to measure this instability. But the underlying principle isn’t new: structure matters for machine comprehension just like it matters for human comprehension.
I should add a note here. The AI model isn’t actually feeling uncertain, and that’s really part of the problem. The knowledge it’s working from is unstable, but the model is trained to sound confident, and that mismatch is what causes semantic entropy.
So you need solid, well-grounded knowledge behind your model to match the confidence it’s trained to project.
Why ancient rhetoric is still important
This problem isn’t new.
Ancient rhetoricians figured this out thousands of years ago.
They had to create speeches on the fly. In the Assembly, in the courts, at public ceremonies. No time to prepare.
Just, “here’s your topic, now speak.”
How did they do it? They used something called topoi.
The word literally means “places” or “rooms.” They organized their knowledge like a house with clearly labeled rooms.
Need to define something? Go to the definition room.
Need to compare two things? The comparison room.
Need to trace cause and effect? That room.
Having these stable mental spaces, or patterns, meant they could reliably find what they needed and construct coherent arguments quickly.
In 1984, Carolyn Miller wrote what became one of the most cited papers in rhetorical studies, called “Genre as Social Action.” And she argued that this is how all communication works. We recognize recurrent situations, and we reach for typified patterns of response.
When the situation recurs in recognizable form, we know what to do. We have stable knowledge structures to draw from.
When it doesn’t, we improvise. We hedge. We contradict ourselves across attempts.
Topoi in machine rhetorics
High semantic entropy is the computational version of lacking stable topoi.
When you ask an AI the same question multiple times and get semantically different answers, the model is doing exactly what a rhetor without proper topoi would do. It’s improvising under uncertainty. It lacks the organizational patterns, or the “rooms,” where specific types of knowledge reliably live.
But … you can create those rooms through content structure.
When you write a procedure with clear steps, properly labeled, you’re creating the “procedure room.”
When you write a concept explanation with a definition, characteristics, and examples in consistent order, you’re creating the “concept room.”
When you use consistent terminology throughout, you’re making sure the rooms have clear labels.
This is what structured content does. It creates stable topoi for machines.
Low semantic entropy means the AI knows which room it’s in and what that room contains. It’s not guessing. It has reliable patterns to draw from.
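If you like to see things written down, here’s one way to picture a “procedure room” as an explicit content model in Python. The field names are mine, purely for illustration; the point is that every procedure gets the same labeled slots.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProcedureChunk:
    """A 'procedure room': the same labeled slots every time, so humans
    and retrieval systems always know what lives where."""
    title: str
    prerequisites: List[str] = field(default_factory=list)
    steps: List[str] = field(default_factory=list)
    result: str = ""

install = ProcedureChunk(
    title="Install the controller",
    prerequisites=["Power down the system"],
    steps=["Mount the unit", "Connect the data cable", "Restore power"],
    result="The controller boots and shows a green status light.",
)
print(install.title, "-", len(install.steps), "steps")
```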
What does this mean for you?
So what do you do with this?
First, understand that structure isn’t just about making content look organized. Structure is a signal. It’s how you communicate to both humans and machines.
“This is what kind of information this is, and here’s how to use it.”
Second, recognize that the same principles that help human readers help AI systems. Clear headings. Focused chunks. Consistent terminology. Explicit organization.
Third, start thinking of yourself not just as a writer but as an information designer. Your job isn’t just to explain things clearly. It’s to create reliable knowledge structures that work across contexts, including computational ones.
The content professional who understands this is going to be incredibly valuable as AI becomes more central to how information gets used.
Challenge
So here’s my challenge to you: Next time you’re creating content, ask yourself: Am I building a house with clearly labeled rooms? Or am I creating an unmarked space where readers, human or machine, have to guess what goes where?
Because high semantic entropy isn’t just an AI problem. It’s a content problem.
And content problems? Those are solvable.
Are you wondering how we might test for semantic entropy? Well, stay tuned. More on that soon!
Until then, I’m Lance Cummings. Keep reading deeply.
And if you’re testing this stuff in your own work, I want to hear about it. Find me on LinkedIn or drop a comment on the newsletter.
Talk to you next time.