Don't Just Prompt Engineer. Start Building Taxonomies.
How I'm using taxonomies and machine rhetorics to build AI content operations in the classroom

Here’s a test case that reveals why casual AI use too often fails: disaster communication.
Imagine building a chatbot to help communities understand food scarcity risks during emergencies. You feed it information about food security, emergency preparedness, community resources.
People ask questions. AI generates responses.
When you test it with real-life scenarios, it might seem to work well enough … but then it breaks down under scrutiny:
“Should I eat meat that’s been in a refrigerator for one day without power?”
“How do I know if my community is at high risk?”
“What should I do if I can’t afford to prepare?”
The AI may generate perfectly grammatical sentences and sound authoritative. It might even cite statistics. But the responses can be generic, often irrelevant, and occasionally dangerous—exactly what you can’t afford in crisis communication.
This isn’t a hypothetical problem. I’m working on this with my students this semester. But the reason it matters for content professionals and writers generally has nothing to do with disaster communication specifically.
It matters because when AI absolutely has to be reliable, the same fundamental limitations appear every time. And too many people (and maybe organizations) are skipping the systematic work needed to address them.
Combining a machine rhetorics framework with user research is key to improving AI performance in real-life situations.
“Reasoning Models” Still Don’t Reason
AI doesn’t reason. It pattern-matches.
You might be thinking: “But what about the new reasoning models? Aren’t they moving beyond pattern-matching?”
It’s true that newer models show impressive capabilities: they can break down complex problems, check their own work, and produce more consistent outputs. Some people argue this means we’re past the need for more systematic approaches to AI collaboration.
I’m skeptical.
Even with extended reasoning capabilities, these systems still operate on statistical patterns, not genuine causal understanding.
They’re better at appearing to reason, which makes their failures less obvious but not less consequential.
Even if reasoning models reduce some failure rates, they don’t eliminate the need for structured approaches when accuracy matters.
A more sophisticated pattern-matcher still benefits from well-structured prompts, organized content, and explicit knowledge mapping … just like a more powerful search engine still needs well-structured information architecture.
The machine rhetorics framework becomes more valuable, not less, as AI capabilities improve. Because the stakes get higher when people trust AI more.
AI doesn’t reason. It pattern-matches.
When you ask ChatGPT about food scarcity during disasters, it’s recognizing patterns from millions of texts where those words appeared together. It’s predicting what words typically follow other words in similar contexts.
What it’s not doing?
Understanding cause-and-effect relationships
Making logical connections between concepts
Recognizing when general advice doesn’t apply to specific situations
Knowing why certain information matters more than other information in emergency contexts.
This is why AI often produces responses that sound right but fall apart under scrutiny. The patterns are there. The reasoning isn’t.
From a practical standpoint, you can’t just “talk to AI” and expect sophisticated results, any more than you could walk into a library and expect books to organize themselves for your research project.
AI needs architecture. It needs structure. It needs humans to provide the logical scaffolding that helps it move from pattern recognition toward contextually appropriate responses.
This is where most casual AI use breaks down, and it’s why content professionals and other writers have an advantage most people don’t realize yet.
The Three-Part Framework
The disaster communication problem reveals three distinct challenges that too many people treat as one problem:
AI doesn’t know what you actually need from a response (versus what it can plausibly generate).
AI can’t distinguish what’s relevant when everything’s mixed together.
AI can’t reason about logical relationships—it can only recognize patterns.
Each limitation requires a different solution. Each solution builds on the previous one. And you can’t skip steps.
Machine rhetorics means applying rhetorical principles to AI systems the same way content professionals apply them to any information experience:
start with users,
understand their needs through research, then
structure systems around those insights.
Understanding machine rhetorics helps break down these problems and build human-centered solutions through strategic prompt design, structured content development, and rhetorical knowledge mapping.
This is what I’m exploring with students this spring, but the framework applies wherever content professionals need AI to be genuinely reliable.
Let me show you what each component does using disaster communication as the test case.
1. Strategic Prompt Frameworks
The instinct is to build a chatbot by dumping information into it and hoping for the best.
Someone asks, “What should I eat if my power’s been out for a day?” and the AI generates a response pulled from its training data.
The problem? That response might be generically accurate but miss everything that matters about the person asking.
This is a usability problem, and it needs usability frameworks to solve it. When content professionals design information experiences, it’s about more than getting users to click the right buttons.
Designing information experiences is about whether information is findable, understandable, and actionable for the right people, in the right place, at the right time.
Ultimately, an AI chatbot is an information interface, and it can fail for the same reasons any interface fails. It wasn’t designed around how real users actually encounter and act on information.
That’s why we started with user research in our disaster communication project. Last semester, our students conducted interviews, usability tests, and card-sorting exercises with the target audience for a disaster food security chatbot.
This semester, a new group of students is using those findings to develop test user questions and structured content for the AI system. The usability insights from that research are what will shape how we build the prompts.
Generic prompts produce generic answers. But when you structure the system around what usability research has revealed about real users, the AI has something meaningful to work with.
Here’s the difference. A basic system instruction might say:
You are a helpful assistant that answers questions about food safety during disasters.
A system prompt grounded in usability research looks more like this:
Task: Answer questions about food safety during power outages, prioritizing actionable steps the user can take right now with what they have available.
Here is some context from research that might be added to the system prompt:
Users are often college students with limited storage, limited transportation, and tight budgets. They need to know what to do with the food they already have, not what they should have bought in advance.
Users skip overly official language. Usability testing showed they abandoned documents that read like government pamphlets. They responded to conversational, direct guidance.
A common misconception from card-sorting data: users grouped “frozen food” and “refrigerated food” together, not realizing they follow different safety timelines during outages. Responses should proactively clarify this distinction when relevant.
Now when someone asks “Is the chicken in my fridge still safe?” the AI can respond with specific guidance calibrated to the user’s actual situation — not a generic food safety lecture.
And the reason it can do that isn’t because someone wrote a clever prompt. It’s because usability research identified what users need, how they process information, and where their mental models diverge from expert knowledge.
The difference isn’t prompt length. It’s that the prompt is structured around a usability framework content professionals already work with:
audience analysis drawn from real research,
purpose defined by actual user needs, and
content organized to match how people actually encounter problems rather than how experts categorize them.
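To make that concrete, here is a minimal sketch, in Python, of how research findings might be assembled into a system prompt rather than hard-coded by hand. The constant and function names are hypothetical, and the notes simply paraphrase the research context shown above.

```python
# A minimal sketch: assemble a system prompt from research-derived components.
# Names are hypothetical; the notes paraphrase the usability findings above.

AUDIENCE_NOTES = (
    "Users are often college students with limited storage, limited "
    "transportation, and tight budgets. They need guidance for the food "
    "they already have, not a preparedness shopping list."
)

TONE_NOTES = (
    "Avoid official, pamphlet-style language; usability testing showed users "
    "abandon it. Be conversational and direct."
)

KNOWN_MISCONCEPTIONS = (
    "Users group frozen and refrigerated food together and assume one safety "
    "timeline. Clarify the distinction whenever it is relevant."
)

def build_system_prompt() -> str:
    """Combine the task definition with research-derived context."""
    return "\n\n".join([
        "Task: Answer questions about food safety during power outages, "
        "prioritizing actionable steps the user can take right now with "
        "what they have available.",
        f"Audience: {AUDIENCE_NOTES}",
        f"Tone: {TONE_NOTES}",
        f"Known misconceptions to address: {KNOWN_MISCONCEPTIONS}",
    ])

if __name__ == "__main__":
    print(build_system_prompt())
```

The point isn’t the code. It’s that the audience analysis, purpose, and misconception handling live as explicit, reviewable pieces instead of being buried inside a one-off clever prompt.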
This same rhetorical approach needs to be applied when building the information and content AI draws on in its responses.
2. Structured Content Development
Even well-structured prompts fall short when you feed AI disorganized information or rely on its black-box training data.
You might have:
Research about food scarcity
Statistics about emergency food systems, and
Interview data from community organizations.
But if it’s all jumbled together, AI can’t tell what matters when, or how different pieces of information relate to each other, or what readers need to know before they can understand something else.
The solution is organizing materials using what content professionals call information types:
Reference information: Basic facts readers need to understand the situation. What counts as food insecurity? What do terms like “food desert” actually mean? What are current statistics?
Concept information: Frameworks for understanding the problem. How do sociologists think about food systems? What models help explain why communities experience food scarcity? How do emergency management professionals analyze risk?
Principle information: Cause-and-effect relationships that explain how things work. Why do food supply chains fail during disasters? What factors determine community resilience? How does economic stress affect food access?
Process information: How things unfold over time. What’s the typical progression of food scarcity during emergencies? How do communities respond? What happens when intervention comes too late?
Task information: Specific actions people can take. How should individuals prepare? What should community organizations do first? When should people seek additional resources?
When you organize information this way, two things happen.
First, you understand your own materials better. You can see gaps in your knowledge and recognize which types of information you’re missing entirely.
Second, when you structure this organized content into AI interactions, responses become more targeted, more appropriate to specific situations, and actually useful for different kinds of questions.
This is the second component: organizing information systematically so both humans and AI can understand how different pieces relate to each other and serve different purposes.
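Here is a minimal sketch of what that organization can look like in practice: content chunks tagged by information type, so a retrieval step can pull the right kind of information for the question being asked. The ContentChunk structure and the example entries are illustrative, not drawn from the actual project corpus.

```python
# A minimal sketch of content chunks tagged by information type so both humans
# and a retrieval step can tell what each piece is for. Entries are illustrative.

from dataclasses import dataclass

@dataclass
class ContentChunk:
    info_type: str   # reference | concept | principle | process | task
    topic: str
    text: str

chunks = [
    ContentChunk("reference", "food insecurity",
                 "Food insecurity means limited or uncertain access to adequate food."),
    ContentChunk("principle", "refrigeration",
                 "A closed refrigerator keeps food safe for roughly four hours after power loss."),
    ContentChunk("task", "refrigeration",
                 "Keep the refrigerator door closed; discard perishables held above 40°F for over two hours."),
]

def select(info_type: str, topic: str) -> list[ContentChunk]:
    """Pull only the chunks whose type and topic match the question at hand."""
    return [c for c in chunks if c.info_type == info_type and topic in c.topic]

# A "what should I do" question pulls task information, not statistics.
for chunk in select("task", "refrigeration"):
    print(chunk.text)
```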
That’s what my students will be working on soon.
You can get a sneak peek at the taxonomy we are using in my Content Lab.
3. Rhetorical Knowledge Mapping
Strategic prompts and structured content get you far, but they still leave a gap: AI can’t reason about relationships between concepts unless you map them explicitly.
When someone asks “Is the chicken in my fridge still safe?”, the answer depends on understanding how factors connect: How long has the power been out? Was the chicken frozen or refrigerated? Has the fridge stayed closed? What’s the safe timeline for this food type?
These aren’t isolated facts. They’re part of a reasoning structure where one decision point leads to another. AI can’t extract these logical relationships reliably from unstructured text alone.
This is where knowledge graphs come in—not as a technical database concept, but as a systematic way to map the reasoning structure that underlies domain expertise.
Our project taxonomy (developed from Fall 2025 user research) organizes disaster food security content into domains and topics: Food & Hydration, Water Safety, Power & Utilities, each with specific subtopics like “Refrigerator/freezer safety during outages.”
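As a minimal sketch, that taxonomy might be represented as a simple nested structure. “Refrigerator/freezer safety during outages” comes from the project taxonomy; the other subtopics shown here are illustrative placeholders, not the actual entries.

```python
# A minimal sketch of the taxonomy side: domains and topics only, no relationships.
# Only the refrigerator/freezer subtopic is from the project; the rest are placeholders.

taxonomy = {
    "Food & Hydration": [
        "Refrigerator/freezer safety during outages",
        "Shelf-stable food options",
    ],
    "Water Safety": [
        "Boil-water advisories",
        "Safe water storage",
    ],
    "Power & Utilities": [
        "Outage duration and reporting",
        "Generator and heating safety",
    ],
}

for domain, topics in taxonomy.items():
    print(domain)
    for topic in topics:
        print(f"  - {topic}")
```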
A taxonomy tells you what exists in the domain. A knowledge graph tells you how those things relate.
You identify the discrete concepts (refrigerated food, power outage, time thresholds, student constraints), then map the relationships between them: what’s safe for how long under which conditions, what constraints affect which options, what triggers what timeline.
The graph also captures context from user research: for example, students confuse frozen and refrigerated timelines, trust action-oriented guidance, and face transportation and storage constraints. These rhetorical insights shape how relationships get structured.
With this mapped, AI can trace a path from the user question → refrigerated food entity → time without power → safety threshold → a specific, situation-appropriate answer.
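Here is a minimal sketch of that path-tracing idea using a small directed graph. The networkx library is just a convenient stand-in, and the nodes, relations, and thresholds are illustrative rather than the project’s actual graph.

```python
# A minimal sketch of the relationships a knowledge graph makes explicit.
# Nodes, relations, and thresholds are illustrative, not the project graph.

import networkx as nx

G = nx.DiGraph()

# Concepts (nodes) and the relationships (edges) that drive decisions.
G.add_edge("refrigerated food", "power outage duration", relation="safety depends on")
G.add_edge("power outage duration", "4-hour threshold", relation="compared against")
G.add_edge("4-hour threshold", "discard perishables", relation="if exceeded, do")
G.add_edge("frozen food", "48-hour threshold", relation="compared against")

# Context from user research attached to the concept it affects.
G.nodes["refrigerated food"]["user_note"] = "Students conflate this with frozen food."

# Trace the reasoning path from the entity in the question to the action.
path = nx.shortest_path(G, "refrigerated food", "discard perishables")
print(" -> ".join(path))
# refrigerated food -> power outage duration -> 4-hour threshold -> discard perishables
```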
Building a knowledge graph is about choosing which distinctions matter for this audience, which relationships drive decisions, which concepts need disambiguation, and which context shapes interpretation.
It’s rhetorical.
Content professionals already make these decisions when organizing information. The knowledge graph makes those decisions explicit and machine-readable so AI systems can reason with them.
How They Build On Each Other
You can’t skip steps in this progression, and disaster communication shows why with unusual clarity.
Without strategic prompt frameworks, you’re hoping AI will guess what constitutes appropriate crisis communication. Sometimes it guesses right. Maybe even most of the time. But what if it doesn’t? The consequences matter.
Without structured content, even well-framed prompts produce generic responses because AI can’t distinguish what’s relevant when facts, theories, procedures, and definitions are jumbled together. In crisis communication, generic responses can cause harm.
Without knowledge maps, AI can’t understand logical relationships, cause-and-effect connections, or contextual constraints that determine when information applies versus when it doesn’t. It will confidently generate plausible-sounding advice that violates the actual constraints governing emergency response.
But when you address all three challenges systematically, something shifts. You’re not just “using AI.” You’re designing information systems that help AI perform better while developing your own analytical capabilities.
The disaster communication scenario makes failures visible because stakes are high. But the same three limitations show up everywhere content professionals need AI to be reliable:
technical documentation where incorrect instructions cause problems,
compliance content where errors create liability, and
strategic communication where off-brand messaging damages reputation.
The framework is the same. Only the domain changes.
What This Means for Content Professionals
True AI collaboration means developing systematic approaches to three distinct challenges: strategic framing, content organization, and knowledge architecture.
That’s why machine rhetorics is more than just prompt engineering.
It is a way of thinking that focuses on audience and context beyond the interface or the immediate chat. It’s a way of thinking most content professionals already practice.
This is why the workflow mapping advantage I wrote about last week matters. You can see how your work actually gets done.
Can you design systematic approaches to AI collaboration that leverage your rhetorical expertise while developing capabilities AI doesn’t possess? That’s the real question.
The Writing with Machines course teaches this framework—not just the three components, but how to apply them across different professional contexts where AI needs to be reliable rather than just plausible.
I’m testing it this spring in one of the most demanding contexts possible: crisis communication where generic responses can cause real harm. But the framework works anywhere content professionals need systematic AI integration—documentation, strategic communication, research synthesis, content operations.
If you’re interested in learning to apply this in your work, the beta course is opening up next week for paid subscribers!



