<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Cyborgs Writing]]></title><description><![CDATA[Exploring the creative interface between human and machine in writing, the classroom, and the workplace]]></description><link>https://www.isophist.com</link><image><url>https://substackcdn.com/image/fetch/$s_!cnci!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd41b2ae-512f-4bbc-8ca0-1dc31a7a8641_500x500.png</url><title>Cyborgs Writing</title><link>https://www.isophist.com</link></image><generator>Substack</generator><lastBuildDate>Tue, 28 Apr 2026 21:30:32 GMT</lastBuildDate><atom:link href="https://www.isophist.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Lance Cummings]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[lancecummings@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[lancecummings@substack.com]]></itunes:email><itunes:name><![CDATA[Lance Cummings]]></itunes:name></itunes:owner><itunes:author><![CDATA[Lance Cummings]]></itunes:author><googleplay:owner><![CDATA[lancecummings@substack.com]]></googleplay:owner><googleplay:email><![CDATA[lancecummings@substack.com]]></googleplay:email><googleplay:author><![CDATA[Lance Cummings]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Using Information Types to Build and Evaluate Prompt Structures]]></title><description><![CDATA[Context Lab #13. 
A more precise approach to prompt evaluation]]></description><link>https://www.isophist.com/p/using-information-types-to-build</link><guid isPermaLink="false">https://www.isophist.com/p/using-information-types-to-build</guid><dc:creator><![CDATA[Lance Cummings]]></dc:creator><pubDate>Mon, 20 Apr 2026 14:25:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ZuBc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ffdca2e-f998-4c82-894c-00306d420d10_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ZuBc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ffdca2e-f998-4c82-894c-00306d420d10_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ZuBc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ffdca2e-f998-4c82-894c-00306d420d10_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!ZuBc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ffdca2e-f998-4c82-894c-00306d420d10_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!ZuBc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ffdca2e-f998-4c82-894c-00306d420d10_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!ZuBc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ffdca2e-f998-4c82-894c-00306d420d10_1920x1080.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!ZuBc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ffdca2e-f998-4c82-894c-00306d420d10_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4ffdca2e-f998-4c82-894c-00306d420d10_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:670549,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/194790583?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ffdca2e-f998-4c82-894c-00306d420d10_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ZuBc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ffdca2e-f998-4c82-894c-00306d420d10_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!ZuBc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ffdca2e-f998-4c82-894c-00306d420d10_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!ZuBc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ffdca2e-f998-4c82-894c-00306d420d10_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!ZuBc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ffdca2e-f998-4c82-894c-00306d420d10_1920x1080.png 1456w" 
sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>This post is the reference handout for my ConVex 2026 presentation, &#8220;Evidence-Based Prompt Design for AI Writing Systems,&#8221; and a follow-up presentation later this week at Information Energy 2026. If you weren&#8217;t in either room, everything here is designed to stand on its own.</em></p><p><em>When an AI system produces a bad answer, most practitioners rewrite the prompt. Sometimes that fixes it. More often, the problem is somewhere else entirely, and without a diagnostic framework, you&#8217;re guessing. 
What follows is a practical framework for figuring out which layer broke before you start changing things.</em></p><div><hr></div><p>Most evaluation treats an AI writing system as having two parts: the prompt and the knowledge base. That framing misses a layer that fails constantly and gets blamed on the other two.</p><p>There are really three layers, each with its own failure modes.</p><p><strong>The prompt layer</strong> governs behavior. It holds the instructions, constraints, definitions, and facts the model needs for every interaction. This is content and information too critical to depend on retrieval.</p><p><strong>The knowledge base</strong> holds content that&#8217;s only relevant to specific queries, such as detailed procedures, tool descriptions, location-specific data, anything too voluminous to keep in the prompt without degrading performance.</p><p><strong>The retrieval layer</strong> connects them. A RAG system pulls knowledge chunks based on query relevance, which means a piece of information only surfaces if the query is similar enough to retrieve it. An MCP server gives AI tools for creating or accessing knowledge.</p><p>The practical decision rule: if a missed retrieval would cause a serious failure, the information belongs in the prompt. If it&#8217;s only needed for specific queries, it belongs in the knowledge base.</p><p>When something goes wrong, the first question isn&#8217;t &#8220;how do I fix the prompt?&#8221; It&#8217;s &#8220;which layer is this coming from?&#8221;</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Support explorations into rhetoric and structure in AI and get access to my beta course on Writing with AI by becoming a paid subscriber. 
</p></div></div></div><h2>Why information types help</h2><p>Many practitioners who structure their prompts at all are working from intuitive categories, such as role, context, output format, rules. Or it could be whatever structure an AI suggested when they asked for help. </p><p>Those categories aren&#8217;t wrong, but they&#8217;re ad hoc. They don&#8217;t derive from how knowledge actually functions, so they don&#8217;t give you consistent criteria for evaluating whether the prompt is doing its job.</p><p>Information types provide a more principled heuristic: Task, Concept, Reference, Principle, Process. Each type reflects a genuinely distinct mode of knowing.</p><p>Definitions work differently from procedures. Procedures work differently from conditional rules. Conditional rules work differently from facts. Mixing them in the same block makes the prompt harder to evaluate because you can&#8217;t tell which kind of content failed when something goes wrong.</p><p>I&#8217;ve spent the past semester applying information types to RAG knowledge bases, structuring content so each chunk does one job cleanly and retrieval has a better chance of surfacing the right thing. </p><p>I&#8217;ve been wondering, though &#8230; if typed structure improves retrieval, does it also improve the reliability of the instructions themselves?</p><p>The short answer appears to be yes. 
The longer answer will be coming soon and involves Aristotle&#8217;s five intellectual virtues from <em>Nicomachean Ethics.</em></p><p>For now, here is the practical framework I&#8217;m playing around with for evaluating prompt design.</p><h2>Adapting Prompts for Disaster Communication</h2><p>The materials below work through the prompt layer using a student-built disaster communication chatbot as the test case. For a recent grant, my students and I are designing chatbots to help UNCW students with disaster awareness. This specific use case is for preparing family communication plans before, during, and after hurricanes. </p><p>Real use case, real stakes.</p><p>The original student prompt was built around a [ROLE] structure that is probably the most common way of thinking about system instructions. </p><p>The research on role prompting is fairly clear. Persona instructions adjust style, not accuracy. Telling a model it is a &#8220;calm, empathetic hurricane communication expert&#8221; doesn&#8217;t make it more accurate about evacuation zones. It might make answers sound more reassuring, which in a disaster communication context is arguably worse than neutral if the information isn&#8217;t good.</p><p>The revised prompt replaces [ROLE] with a structure built on information types. Each block has a specific job. None of them bleeds into another.</p><p>One addition worth naming: a [METADATA] block at the top. Purpose and audience aren&#8217;t a Concept, which explains what something <em>is</em> so a reader can understand it. </p><p><strong>Purpose and audience are configuration</strong>. They declare what the assistant is, who it serves, and on what authority. Naming them that way is more honest than forcing them into a role block that encourages the AI to &#8220;imagine&#8221; some human role it can&#8217;t actually fulfill.</p><p>The [REFERENCE] block holds only always-on facts, like the signup code, the Safe and Well URL, the broadcast stations. 
These need to be present for almost every interaction, and a retrieval miss on any of them would be a serious failure. Detailed app descriptions moved to the knowledge base, where they can be retrieved when a user asks about something specific.</p><h2>The revised prompt</h2><div><hr></div><p><strong>[METADATA]</strong></p><pre><code><code>Assistant: Hurricane Communication Planner
Audience: UNCW students and New Hanover County residents
Scope: Family communication preparedness before, during, and after
hurricanes and evacuations
Sources: New Hanover County Emergency Management, UNCW emergency
systems, FEMA, American Red Cross</code></code></pre><p><em>Configuration, not instruction. Declares what the assistant is, who it serves, what it covers, and where its information comes from. Unlike a role block, it makes no behavioral claims. Those come later, in [PRINCIPLE] and [PROCESS].</em></p><div><hr></div><p><strong>[REFERENCE]</strong></p><pre><code><code>These facts are critical to nearly every interaction and must be
treated as authoritative regardless of what the user asks.

New Hanover County Emergency Management &#8212; A Wilmington-based agency
that coordinates local, state, and federal resources, manages
evacuation shelters, and maintains the county's emergency operations
plan.

Emergency Alert System (EAS) &#8212; Broadcasts imminent threat
notifications to the public via radio and television.

Wireless Emergency Alerts (WEA) &#8212; Short emergency messages broadcast
from cell towers to WEA-enabled devices by authorized government
partners.

New Hanover County alert signup &#8212; Text READYNHC to 24639.

Red Cross Safe and Well registry &#8212; safeandwell.communityos.org

NOAA Weather Radio &#8212; weather.gov/nwr

Local broadcast: WECT

In Case of Emergency (ICE) &#8212; A contact saved in your phone and
designated for emergencies. Emergency personnel routinely check ICE
listings first.

Note: Detailed descriptions of the FEMA app, Red Cross Emergency
App, UNCW Alert App, and UNCW Mobile App are maintained in the
knowledge base. Retrieve them when a user asks specifically about
those tools.</code></code></pre><p><em>Reference holds only always-on, high-stakes facts. These are things that must be present for every interaction regardless of what the user asks. The signup code and Safe and Well URL are here because a retrieval miss on either sends a user away without the most critical information. Detailed app descriptions are in the knowledge base because they&#8217;re only needed when someone asks about something specific. The note at the bottom makes that boundary explicit.</em></p><div><hr></div><p><strong>[CONCEPT]</strong></p><pre><code><code>A family communication plan is a pre-established set of agreements
about how family members will reach each other, confirm safety, and
make decisions when normal communication channels are unavailable or
unreliable. It typically includes designated contacts, check-in
schedules, backup communication methods, and pre-arranged meeting
points.

An out-of-area contact is a person located outside the affected
region who serves as a central point of contact for family members
to check in with. Local lines often overload during a storm; calls
to and from outside the area are more likely to connect.

A safe word is a pre-agreed word or phrase family members use to
confirm identity during chaotic or high-stress situations where they
may be communicating through unfamiliar channels.

Communication failure during a hurricane typically results from
power outages disabling cell towers, network overload from high call
volume, or physical infrastructure damage. Plans should assume at
least one of these will occur and include methods that don't depend
on the cellular network.</code></code></pre><p><em>Concept content defines terms the model needs to &#8220;understand&#8221; before it can respond accurately. Without this block, the model falls back on its training-shaped understanding of terms like &#8220;family communication plan.&#8221; These definitions bring its working understanding into alignment with the specific context. They belong in the prompt, not the knowledge base, because the model needs them to reason correctly about almost any question, not just the ones that trigger the right retrieval hit.</em></p><div><hr></div><p><strong>[PRINCIPLE]</strong></p><pre><code><code>Match response length to urgency. A user asking during an active
storm needs shorter, more direct answers than one planning ahead
in June.

Do not speculate about storm timelines, weather patterns, evacuation
orders, or road closures. Refer users to New Hanover County Emergency
Management or local news for these.

Do not provide guidance outside the communication scope &#8212; no medical,
legal, mental health, or physical safety advice. Acknowledge the
concern briefly and redirect to the appropriate resource.

Every recommendation must be traceable to a source listed in
[REFERENCE] or retrieved from the knowledge base. Do not fill gaps
with reasonable-sounding information drawn from general knowledge.

When information is uncertain or unavailable, say so and point to
the closest official resource.

Do not recommend paid apps, devices, or third-party services without
noting they are not official endorsements.</code></code></pre><p><em>&#8220;Calm, empathetic, and concise&#8221; has become &#8220;match response length to urgency.&#8221; The former is a style claim, which is vague, unverifiable, and doing the job a role block would do. The latter is a conditional behavioral norm: testable, specific, and written as a Principle should be. Every entry here follows the same pattern: under a specific condition, do a specific thing.</em></p><div><hr></div><p><strong>[TASK]</strong></p><pre><code><code>Help users create or update a family communication plan, including
designating emergency contacts, establishing check-in frequencies,
and identifying backup communication methods.

Help users sign up for and understand emergency alert systems &#8212;
New Hanover County alerts, UNCW Seahawk Alerts, the FEMA app, and
the Red Cross Emergency App.

Guide users in establishing protocols for communication failures &#8212;
out-of-area contacts, safe words for identity verification,
pre-arranged meeting points, and registration with Red Cross Safe
and Well.

Explain what to do when digital communication is unavailable &#8212;
battery-powered radios, NOAA Weather Radio, walkie-talkies, and
local broadcast stations such as WECT.</code></code></pre><p><em>Each entry is specific enough to derive a test question from directly. &#8220;Help users sign up for New Hanover County alerts&#8221; should produce a response that references READYNHC to 24639, cites the source, and nothing else. If it doesn&#8217;t, you know exactly which task failed, and which layer to examine first.</em></p><div><hr></div><p><strong>[PROCESS]</strong></p><pre><code><code>Greet the user and ask an open-ended question to assess their
situation and needs.

Offer a clear starting point: "Do you have a quick question, or
would you like to build a communication plan together?"

If the user has a quick in-scope question: answer it, cite the
source, and offer a follow-up before closing.

If the user's question falls outside scope: acknowledge it briefly,
explain you can't help with that specific issue, and point to the
appropriate resource.

If the user wants to build a plan: ask 1&#8211;2 questions to personalize
the guidance (UNCW student or county resident? Planning ahead or
active storm?), then work through contacts, alert signups, backup
methods, and meeting points in that order.

Close each interaction with a summary of what was planned or
answered, the sources used, and: "If you have more questions, I'm
here. Stay safe."</code></code></pre><p><em>Process sequences how a conversation should unfold. Each branch point is named explicitly, and each branch has a resolution. When the model deviates from this sequence in testing, the process block gives you a specific place to look, either the step is underspecified, or a Principle is overriding it. That&#8217;s a diagnosable problem. </em></p><div><hr></div><h2>The evaluation rubric</h2><p>Apply this rubric to the prompt before generating a single response. The goal is to catch type collapse, type contamination, and critical gaps at the design stage rather than discovering them through inconsistent outputs.</p><p>Score each criterion as Met / Partially Met / Not Met, and note specific evidence from the prompt text. Any &#8220;Not Met&#8221; on a Reference fact, a Principle consistency check, or a Process branch resolution should be treated as an issue to be fixed before deployment.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!KOz1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33f2b8dd-e153-4206-ad0f-ec4e49c7cbca_1925x1054.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!KOz1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33f2b8dd-e153-4206-ad0f-ec4e49c7cbca_1925x1054.jpeg 424w, https://substackcdn.com/image/fetch/$s_!KOz1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33f2b8dd-e153-4206-ad0f-ec4e49c7cbca_1925x1054.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!KOz1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33f2b8dd-e153-4206-ad0f-ec4e49c7cbca_1925x1054.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!KOz1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33f2b8dd-e153-4206-ad0f-ec4e49c7cbca_1925x1054.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!KOz1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33f2b8dd-e153-4206-ad0f-ec4e49c7cbca_1925x1054.jpeg" width="1456" height="797" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/33f2b8dd-e153-4206-ad0f-ec4e49c7cbca_1925x1054.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:797,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:205410,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/194790583?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33f2b8dd-e153-4206-ad0f-ec4e49c7cbca_1925x1054.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!KOz1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33f2b8dd-e153-4206-ad0f-ec4e49c7cbca_1925x1054.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!KOz1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33f2b8dd-e153-4206-ad0f-ec4e49c7cbca_1925x1054.jpeg 848w, https://substackcdn.com/image/fetch/$s_!KOz1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33f2b8dd-e153-4206-ad0f-ec4e49c7cbca_1925x1054.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!KOz1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33f2b8dd-e153-4206-ad0f-ec4e49c7cbca_1925x1054.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7q9a!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F322ba3d7-c07a-4f30-a787-2d6132ee2798_1925x1187.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7q9a!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F322ba3d7-c07a-4f30-a787-2d6132ee2798_1925x1187.jpeg 424w, https://substackcdn.com/image/fetch/$s_!7q9a!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F322ba3d7-c07a-4f30-a787-2d6132ee2798_1925x1187.jpeg 848w, https://substackcdn.com/image/fetch/$s_!7q9a!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F322ba3d7-c07a-4f30-a787-2d6132ee2798_1925x1187.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!7q9a!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F322ba3d7-c07a-4f30-a787-2d6132ee2798_1925x1187.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7q9a!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F322ba3d7-c07a-4f30-a787-2d6132ee2798_1925x1187.jpeg" width="1456" height="898" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/322ba3d7-c07a-4f30-a787-2d6132ee2798_1925x1187.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:898,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:251129,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/194790583?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F322ba3d7-c07a-4f30-a787-2d6132ee2798_1925x1187.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7q9a!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F322ba3d7-c07a-4f30-a787-2d6132ee2798_1925x1187.jpeg 424w, https://substackcdn.com/image/fetch/$s_!7q9a!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F322ba3d7-c07a-4f30-a787-2d6132ee2798_1925x1187.jpeg 848w, https://substackcdn.com/image/fetch/$s_!7q9a!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F322ba3d7-c07a-4f30-a787-2d6132ee2798_1925x1187.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!7q9a!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F322ba3d7-c07a-4f30-a787-2d6132ee2798_1925x1187.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OXrP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9ae334b-5c5e-4cea-aa44-820a11d7e53d_1925x1187.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OXrP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9ae334b-5c5e-4cea-aa44-820a11d7e53d_1925x1187.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!OXrP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9ae334b-5c5e-4cea-aa44-820a11d7e53d_1925x1187.jpeg 848w, https://substackcdn.com/image/fetch/$s_!OXrP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9ae334b-5c5e-4cea-aa44-820a11d7e53d_1925x1187.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!OXrP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9ae334b-5c5e-4cea-aa44-820a11d7e53d_1925x1187.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OXrP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9ae334b-5c5e-4cea-aa44-820a11d7e53d_1925x1187.jpeg" width="1456" height="898" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c9ae334b-5c5e-4cea-aa44-820a11d7e53d_1925x1187.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:898,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:267293,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/194790583?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9ae334b-5c5e-4cea-aa44-820a11d7e53d_1925x1187.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!OXrP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9ae334b-5c5e-4cea-aa44-820a11d7e53d_1925x1187.jpeg 424w, https://substackcdn.com/image/fetch/$s_!OXrP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9ae334b-5c5e-4cea-aa44-820a11d7e53d_1925x1187.jpeg 848w, https://substackcdn.com/image/fetch/$s_!OXrP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9ae334b-5c5e-4cea-aa44-820a11d7e53d_1925x1187.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!OXrP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9ae334b-5c5e-4cea-aa44-820a11d7e53d_1925x1187.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!DdwJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00265e39-7e51-453e-80ef-486523561cfd_1925x1187.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!DdwJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00265e39-7e51-453e-80ef-486523561cfd_1925x1187.jpeg 424w, https://substackcdn.com/image/fetch/$s_!DdwJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00265e39-7e51-453e-80ef-486523561cfd_1925x1187.jpeg 848w, https://substackcdn.com/image/fetch/$s_!DdwJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00265e39-7e51-453e-80ef-486523561cfd_1925x1187.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!DdwJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00265e39-7e51-453e-80ef-486523561cfd_1925x1187.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!DdwJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00265e39-7e51-453e-80ef-486523561cfd_1925x1187.jpeg" width="1456" height="898" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/00265e39-7e51-453e-80ef-486523561cfd_1925x1187.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:898,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:258724,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/194790583?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00265e39-7e51-453e-80ef-486523561cfd_1925x1187.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!DdwJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00265e39-7e51-453e-80ef-486523561cfd_1925x1187.jpeg 424w, https://substackcdn.com/image/fetch/$s_!DdwJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00265e39-7e51-453e-80ef-486523561cfd_1925x1187.jpeg 848w, https://substackcdn.com/image/fetch/$s_!DdwJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00265e39-7e51-453e-80ef-486523561cfd_1925x1187.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!DdwJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00265e39-7e51-453e-80ef-486523561cfd_1925x1187.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!PRbS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00791fd8-c2ab-4b07-a568-ac708db935a4_1925x1187.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!PRbS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00791fd8-c2ab-4b07-a568-ac708db935a4_1925x1187.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!PRbS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00791fd8-c2ab-4b07-a568-ac708db935a4_1925x1187.jpeg 848w, https://substackcdn.com/image/fetch/$s_!PRbS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00791fd8-c2ab-4b07-a568-ac708db935a4_1925x1187.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!PRbS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00791fd8-c2ab-4b07-a568-ac708db935a4_1925x1187.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!PRbS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00791fd8-c2ab-4b07-a568-ac708db935a4_1925x1187.jpeg" width="1456" height="898" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/00791fd8-c2ab-4b07-a568-ac708db935a4_1925x1187.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:898,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:258583,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/194790583?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00791fd8-c2ab-4b07-a568-ac708db935a4_1925x1187.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!PRbS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00791fd8-c2ab-4b07-a568-ac708db935a4_1925x1187.jpeg 424w, https://substackcdn.com/image/fetch/$s_!PRbS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00791fd8-c2ab-4b07-a568-ac708db935a4_1925x1187.jpeg 848w, https://substackcdn.com/image/fetch/$s_!PRbS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00791fd8-c2ab-4b07-a568-ac708db935a4_1925x1187.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!PRbS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00791fd8-c2ab-4b07-a568-ac708db935a4_1925x1187.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Coming soon, I&#8217;ll be exploring why these five types exist, what they correspond to in Aristotle&#8217;s five intellectual virtues, and what that tells us about where AI assistance ends and human judgment must begin. The Greeks had sharper vocabulary for this problem than we do.</p><p>If you have questions or want to share a prompt for the group to look at, bring it to the Content Lab discussion thread on this post!</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/p/using-information-types-to-build?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Cyborgs Writing! 
This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/p/using-information-types-to-build?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.isophist.com/p/using-information-types-to-build?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><p></p>]]></content:encoded></item><item><title><![CDATA[Context Lab #12: Weekly Plan Skill]]></title><description><![CDATA[Applying information types to agentic skills]]></description><link>https://www.isophist.com/p/context-lab-11-weekly-plan-skill</link><guid isPermaLink="false">https://www.isophist.com/p/context-lab-11-weekly-plan-skill</guid><dc:creator><![CDATA[Lance Cummings]]></dc:creator><pubDate>Tue, 31 Mar 2026 12:31:16 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!bpic!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7ec29a6-eb81-4587-8327-7ef0078d0239_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bpic!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7ec29a6-eb81-4587-8327-7ef0078d0239_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bpic!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7ec29a6-eb81-4587-8327-7ef0078d0239_1920x1080.png 424w, 
https://substackcdn.com/image/fetch/$s_!bpic!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7ec29a6-eb81-4587-8327-7ef0078d0239_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!bpic!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7ec29a6-eb81-4587-8327-7ef0078d0239_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!bpic!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7ec29a6-eb81-4587-8327-7ef0078d0239_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bpic!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7ec29a6-eb81-4587-8327-7ef0078d0239_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c7ec29a6-eb81-4587-8327-7ef0078d0239_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:663237,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/191877514?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7ec29a6-eb81-4587-8327-7ef0078d0239_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!bpic!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7ec29a6-eb81-4587-8327-7ef0078d0239_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!bpic!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7ec29a6-eb81-4587-8327-7ef0078d0239_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!bpic!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7ec29a6-eb81-4587-8327-7ef0078d0239_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!bpic!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7ec29a6-eb81-4587-8327-7ef0078d0239_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><a href="https://www.isophist.com/p/the-techne-behind-agent-skills">Last week&#8217;s post on agent skills</a> made the case that most AI skills underperform because they&#8217;re written as a single type of content. Task information alone. A well-performing skill actually contains several types of information, each signaling a different purpose to the model.</p><p>Below you&#8217;ll find the revision of my weekly plan skill, which I use to create the weekly plans that help students (and me) know what we are doing for the week.</p><p>A few things to watch for as you read through:</p><p>The <strong>Concept</strong> block is the section most likely to be missing from skills you&#8217;ve already built. It&#8217;s also the one that does the most work before a single step gets executed.</p><p>This is where you define exactly what it is you want the AI to produce &#8230; and any other ideas that need to be defined and customized to your context.</p><p>The <strong>Principle</strong> section consolidates constraints that were scattered in the original. Grouping behavioral rules in one place helps the model identify them as actual rules. When the same rules are embedded in a procedure, it&#8217;s more likely that they will be conflated with tasks.</p><p>The description field in the front matter is <strong>Reference </strong>information, and it&#8217;s one of the most consequential sections in the entire file. 
If the skill isn&#8217;t triggering when you expect it to, that&#8217;s where to look first.</p><p>The skill itself is adapted from my <a href="https://www.isophist.com/s/prompt-ops">Writing with Machines </a>course and designed to be modified. If you build something from it, I&#8217;d genuinely like to know what you changed and why. That&#8217;s what makes this a lab!</p><p>Note: I used markdown for this &#8220;skill prompt&#8221; because that is what Claude typically uses. I&#8217;m thinking about testing this against an XML version in the near future.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.isophist.com/subscribe?"><span>Subscribe now</span></a></p>
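<p>For reference, here&#8217;s a minimal sketch of what a skill file organized by information types can look like. The names and wording below are illustrative placeholders, not my actual skill:</p><pre><code>---
name: weekly-class-plan
description: Creates the weekly class plan for a course week. Use when
  asked to draft, revise, or format a weekly plan for students.
---

## Concept
A weekly class plan is a student-facing document, not a schedule. It
bridges course design and classroom practice in the voice of a
knowledgeable colleague.

## Principle
- Always name the deliverable the week points toward.
- Never copy syllabus language verbatim.

## Process
Each week introduces an idea, applies it in class, and moves students
forward on a deliverable.

## Task
1. Review the course arc and this week's materials.
2. Draft the plan using the structure above.
3. Check the draft against every Principle rule.
</code></pre><p>Notice that only the Task section contains steps; everything above it shapes how those steps get interpreted.</p>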
      <p>
          <a href="https://www.isophist.com/p/context-lab-11-weekly-plan-skill">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[The Techne Behind Agent Skills]]></title><description><![CDATA[It&#8217;s not just about tasks]]></description><link>https://www.isophist.com/p/the-techne-behind-agent-skills</link><guid isPermaLink="false">https://www.isophist.com/p/the-techne-behind-agent-skills</guid><dc:creator><![CDATA[Lance Cummings]]></dc:creator><pubDate>Tue, 24 Mar 2026 11:08:40 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!39db!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a8577cb-e2e0-42f8-b2ff-8e8df1007050_1376x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!39db!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a8577cb-e2e0-42f8-b2ff-8e8df1007050_1376x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!39db!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a8577cb-e2e0-42f8-b2ff-8e8df1007050_1376x768.png 424w, https://substackcdn.com/image/fetch/$s_!39db!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a8577cb-e2e0-42f8-b2ff-8e8df1007050_1376x768.png 848w, https://substackcdn.com/image/fetch/$s_!39db!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a8577cb-e2e0-42f8-b2ff-8e8df1007050_1376x768.png 1272w, 
https://substackcdn.com/image/fetch/$s_!39db!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a8577cb-e2e0-42f8-b2ff-8e8df1007050_1376x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!39db!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a8577cb-e2e0-42f8-b2ff-8e8df1007050_1376x768.png" width="1376" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0a8577cb-e2e0-42f8-b2ff-8e8df1007050_1376x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1376,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1749050,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/191790297?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a8577cb-e2e0-42f8-b2ff-8e8df1007050_1376x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!39db!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a8577cb-e2e0-42f8-b2ff-8e8df1007050_1376x768.png 424w, https://substackcdn.com/image/fetch/$s_!39db!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a8577cb-e2e0-42f8-b2ff-8e8df1007050_1376x768.png 848w, https://substackcdn.com/image/fetch/$s_!39db!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a8577cb-e2e0-42f8-b2ff-8e8df1007050_1376x768.png 
1272w, https://substackcdn.com/image/fetch/$s_!39db!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a8577cb-e2e0-42f8-b2ff-8e8df1007050_1376x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image generated by <a href="https://try.gamma.app/ka5vvp4ov8sj">gamma.ai</a></figcaption></figure></div><p>There&#8217;s an old philosophical grudge against craft.</p><p>It goes back to Plato, who compared rhetoric to cooking in the <em>Gorgias</em> dialogue. 
Both rhetoric (or speech-making) and cooking produce pleasing results, but neither understands the true principles behind what it makes. </p><p>He called this mere <em>empeiria</em>, or a knack: a habit, an unreflective routine. You learn what works without knowing why. For Plato, it&#8217;s the lowest form of knowledge, and it&#8217;s why many philosophers (and now academics) see practice as less worthy of attention.</p><p>Aristotle pushed back. </p><p>In the <em>Nicomachean Ethics</em>, he defines <em>techne</em> as &#8220;a productive state that is truly reasoned&#8221; &#8230; not just the ability to make something, but making with genuine understanding of the principles behind it. </p><p>The practitioner with empeiria knows that something works. The practitioner with techne knows <em>why</em> it works.</p><p>I&#8217;ve been thinking about this distinction a lot lately while building custom AI agent skills. These are structured instruction files that tell a model how to approach a specific task. </p><p>Most people build them the way Plato described rhetoric: run it, tweak it, run it again until the output looks right. Learn what works without ever understanding why.</p><p>That&#8217;s empeiria. And it only gets you so far.</p><p>The principled understanding (or techne) comes from a discipline that has spent decades thinking carefully about how humans organize and communicate information. </p><p>For example, technical communicators, particularly those working with structured authoring systems like DITA, have long organized content into five functional categories called information types. </p><p>These aren&#8217;t arbitrary divisions. They reflect consistent patterns in how information works across human communication. 
And when you understand those patterns, you understand something about why a skill performs the way it does, not just what to put in it.</p><p><a href="https://instructionmanuel.com/writing-skills-agents-can-execute">Manny Silva</a> recently made the case that agent skills are a form of documentation, held to a higher standard of precision than anything written for human readers. </p><p>He&#8217;s right. But I&#8217;d push the argument further. </p><p>The reason most skills underperform isn&#8217;t just that the steps are vague. It&#8217;s that the whole file is often written as a single type of content. </p><p>But a well-performing skill actually contains several kinds of content (not just tasks), each signaling a different purpose to the model through patterns it has been shaped to recognize.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.isophist.com/subscribe?"><span>Subscribe now</span></a></p><h2><strong>Information types and what they signal</strong></h2><p>If you&#8217;ve been following this newsletter, you&#8217;ve seen <a href="https://www.isophist.com/p/structured-content-not-ai-will-determine?utm_source=publication-search">information types </a>come up before in how I structure AI-ready knowledge and even some prompts. 
For those newer to the idea, here&#8217;s the short version.</p><p>Information types are five distinct patterns of content, each with a recognizable purpose:</p><ul><li><p><strong>Reference</strong> states something the reader needs to know.</p></li><li><p><strong>Concept</strong> explains something the reader needs to understand.</p></li><li><p><strong>Principle</strong> advises what to do or not do, and when.</p></li><li><p><strong>Process</strong> illustrates how something works at the system level.</p></li><li><p><strong>Task</strong> instructs the reader on the specific steps to take.</p></li></ul><p>These categories come out of the practice of content strategy and show how philosophy or academia can inform our practice. These information types aren&#8217;t just best practices, but actual patterns humans have developed and used consistently across centuries of written communication: in manuals, textbooks, legal codes, scientific papers, policy documents.</p><p>Any sufficiently large corpus of human-produced text is saturated with them, which is why they work with AI.</p><div class="pullquote"><p>When you write in those patterns deliberately, you&#8217;re not teaching the model something new. You&#8217;re giving it a clearer signal about what kind of content this is and what it&#8217;s for.</p></div><p>A model shaped on that corpus has encountered these patterns countless times. When you write in them deliberately, you&#8217;re not teaching the model something new. You&#8217;re giving it a clearer signal about what kind of content this is and what it&#8217;s for. </p><p>In classical rhetoric, Aristotle described <em>topoi</em> as standard categories of thought that speakers could draw upon to construct a response. </p><p>Information types work similarly &#8230; not as cognitive locations, but as recognizable patterns that carry purpose. Organize your content by type, and you&#8217;re giving the AI a strong signal. 
Leave it untyped, and the model infers purpose from whatever context it can find.</p><p>Sometimes that inference is fine. Often it&#8217;s close but wrong in ways that are hard to diagnose. It becomes a skill that technically works but doesn&#8217;t quite perform.</p><h2><strong>The Problem with Task-Only Skills</strong></h2><p>If you don&#8217;t yet know what a skill is, it is simply a set of instructions an AI model refers to for a specific task.</p><p>Sound familiar? Well, it should. It&#8217;s basically a prompt.</p><p>When you ask Claude to create a document and watch it work, you&#8217;ll notice it references a skill. That skill is a markdown file (.md), and for many people building their own, the whole thing reads like a procedure: do this, then this, then this.</p><p>Task information is exactly right for execution steps. But a skill is also a description of what the output is supposed to be, a set of behavioral constraints, a map of how this task connects to the larger workflow, and the metadata that triggers the skill in the first place. </p><p>When all of that gets written as task steps (or skipped entirely), the model fills the gaps from general training. And that training may not match your context.</p><p>This is why a skill can produce technically correct output that still feels off. The steps were followed. But the patterns that signal what kind of output this is, what constraints apply, and how this task fits a larger system are absent. The model filled that gap from wherever it could.</p><p>Or, as I&#8217;ve noticed, the model calls up the skill at the wrong time (or fails to call it up at the right time).</p><p>So I thought, why not information-type my weekly class plan skill? 
This is the skill I use across projects to make sure the weekly plans I send students look the same and function the same way.</p><p>This saves me considerable work, while adding value to the student experience.</p><p>My rewritten class plan skill produced noticeably better output on the first run. The surface result looked similar, but the output was more precise and consistent.</p><p>Here is how I organized the skill with information types.</p><p>A <strong>Concept</strong> block comes first. What a weekly class plan <em>is</em>. Not a schedule, but a student-facing document that bridges course design and classroom practice, written like a knowledgeable colleague talking to a student, not a syllabus. Without this, the model supplies its own understanding of &#8220;weekly class plan.&#8221; Sometimes that&#8217;s fine. Often it&#8217;s close but wrong in ways that are hard to diagnose.</p><p><strong>Principle</strong> blocks group behavioral constraints separately. What the model must always do, what it must never do, under what conditions it should stop and verify. Writing them in their own section, in direct second-person language, makes them clearer.</p><p>A <strong>Process</strong> block gives system awareness, or how the skill fits into a bigger context. For example, most of my weekly plans point towards a specific deliverable. I prefer to introduce an idea or skill, then spend time in class applying that in ways that move students forward on a deliverable. A model that only has the task steps produces a plan. A model that understands the course arc produces a plan that fits my pedagogy.</p><p><strong>Reference</strong> is the front matter, which is usually the name and description that trigger the skill. Claude scans that description against your request to decide whether to load the skill at all. Vague descriptions mean missed triggers. That&#8217;s not a configuration problem. 
It&#8217;s a writing problem.</p><p>The <strong>Task</strong> steps come last, as the final instruction before execution.</p><div class="pullquote"><p>Agent skills are just another form of prompt design. Prompt design is just another form of information design. And information design, done well, is how you actually build the context around your AI workflows, systematically, with the same discipline that technical writers have been applying to human readers for decades.</p></div><h2><strong>The Techne of Designing Context</strong></h2><p>Restructuring this skill didn&#8217;t just improve one output; it now provides context for every moment where I might need a weekly plan for my students.</p><p>This is why structured prompting is still relevant. It&#8217;s the starting point for context engineering and system design.</p><ol><li><p>A structured prompt is how you give the model clear instructions for a single interaction. </p></li><li><p>Structured knowledge is how you build the environment the model operates in.</p></li><li><p>Context engineering is the full practice of designing that environment intentionally. Not just what you ask for, but everything the model carries into the task.</p></li></ol><p>Agent skills are just another form of prompt design. Prompt design is just another form of information design. And information design, done well, is how you actually build the context around your AI workflows, systematically, with the same discipline that technical writers have been applying to human readers for decades.</p><p>This is what Aristotle meant by techne. Not the knack of someone who has run the same skill fifty times and learned what tends to work. 
</p><p>The reasoned understanding of someone who knows <em>why</em> it works.</p><p>This is the kind of knowledge that content professionals bring to our conversations about AI, and what makes writers more valuable than ever in the age of AI.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>In the Context Lab, I&#8217;ll be sharing the full weekly plan skill for paid subscribers. Consider supporting this work and taking a peek!</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Do Prompts Really Need Markup?]]></title><description><![CDATA[Deep Reading, Episode 8]]></description><link>https://www.isophist.com/p/do-prompts-really-need-markup</link><guid isPermaLink="false">https://www.isophist.com/p/do-prompts-really-need-markup</guid><dc:creator><![CDATA[Lance Cummings]]></dc:creator><pubDate>Tue, 10 Mar 2026 11:02:45 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/190293467/581f603fac314e62f46e7e1cd8faea91.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>If you&#8217;ve taken any course on prompt design, <a href="https://www.isophist.com/p/writing-with-machines">including mine</a>, you&#8217;ve probably been told to use markup in some way. 
</p><p>This might be markdown, XML, or, in my case, semantic tags.</p><p>&#10145;&#65039;<a href="https://www.isophist.com/p/the-anatomy-of-a-prompt-3a1"> See this free lesson from my Writing with Machines course to learn more.</a></p><p>These function as labels for both machines and humans that help organize your prompt into sections, for example [ROLE], [CONTEXT], [TASK]. </p><p>I&#8217;ve taught this. I still use tags in my own work when creating reusable prompts. </p><p>&#8230; And I get asked constantly whether they&#8217;re actually necessary anymore, especially now that models keep getting more capable.</p><p>I&#8217;m Lance Cummings. And welcome to my intermittent (or aspirationally biweekly) podcast that explores deep research on AI and writing.</p><p>That question got me digging into recent research on prompt structure and performance, and what I found reframes the conversation a bit. </p><p><strong>The tags aren&#8217;t really the point. Specificity is the point.</strong> The tags just help us get there. </p><p>But as we move from prompt design into what Anthropic now calls <em>context engineering</em>, tags may matter more than you think. Just not for the reasons you&#8217;d expect.</p><h2>The Real Question</h2><p>Here&#8217;s what most people mean when they ask about semantic tags. </p><p><strong>Does the AI actually perform better when I label parts of my prompts? Does [GOAL] do something that &#8220;I want to &#8230;&#8221; doesn&#8217;t?</strong></p><p>We have good research on this now. <a href="https://arxiv.org/abs/2310.11324">Sclar and colleagues at ICLR 2024</a> tested meaning-preserving formatting variations across 53 tasks and found that formatting alone could swing accuracy dramatically, but the best format for one model wasn&#8217;t the best for another. </p><p>Different models often prefer different structures. 
This is why you should test your prompts as a team &#8230; and not just go by gut.</p><p>But if you&#8217;re looking for a formatting rule that works everywhere, there isn&#8217;t one. That&#8217;s a dead end. </p><p>But the research <em>did</em> find something that works everywhere, and it&#8217;s not about format at all.</p><h2>It&#8217;s All About Specificity</h2><p><a href="https://arxiv.org/abs/2602.04297">Pecher and colleagues</a> published a study in February 2026 investigating why small changes to prompts produce wildly different outputs. They traced most of it back to a single cause: <strong>prompt underspecification.</strong> </p><p>Not format. Not tags. </p><p>The prompts that produced erratic results were prompts that didn&#8217;t clearly describe the task, the constraints, or the expected output. Well-specified prompts suffered dramatically less from sensitivity, regardless of formatting choices.</p><p>Think of it like giving directions. </p><p>&#8220;Go to the store&#8221; is underspecified. You might end up at a grocery store, a hardware store, or a convenience store three blocks away. </p><p>But with &#8220;drive to the Harris Teeter on College Road, pick up two pounds of ground beef from the butcher counter, and use the self-checkout,&#8221; the format barely matters. The task is embedded in the sentence structure itself and will constrain the output whether you text it, email it, or scribble it on a sticky note.</p><p>This maps directly onto the three-component model I teach: Task, Context, Content. </p><p>Those three categories were never about the brackets. They were about forcing you to answer three separate questions: </p><ul><li><p><em>What do I want the AI to do? </em></p></li><li><p><em>What does it need to know about the situation? </em></p></li><li><p><em>And what source material should it work with?</em> </p></li></ul><p>The tags were one way to organize those answers. A useful way. 
But the answers themselves are what drive performance.</p><h2>What About New Reasoning Models?</h2><p>Now, I should complicate this, because the landscape has shifted.</p><p><a href="https://arxiv.org/abs/2408.02442">Tam and colleagues</a> showed at EMNLP 2024 that forcing structured output formats significantly degraded reasoning. </p><p>Imagine you ask a colleague to analyze a customer support problem and give you their recommendation. </p><p>Normally, they&#8217;d read through the tickets, notice some patterns, and reason their way to a conclusion. </p><p>Now imagine instead you hand them a form &#8212; <em>fill in the &#8220;Recommendation&#8221; field first, then the &#8220;Reasoning&#8221; field</em>. </p><p>That&#8217;s essentially what happened when models were forced to produce structured output like JSON or XML. The model placed the answer before the reasoning, skipping the step where it works through the problem. </p><p>Their solution was a two-step approach: reason in natural language first, then convert to structured format.</p><p>Here&#8217;s what&#8217;s changed since then, though.</p><div class="pullquote"><p>Structure your <em>context</em>, not your commands. </p></div><p>Reasoning models now reason <em>internally</em> before generating output. Claude 4 models use what Anthropic calls &#8220;extended thinking.&#8221;  They work through the problem behind the scenes, then produce the response. The model handles that &#8220;reason first, format second&#8221; step on its own.</p><p>Does that make the finding obsolete? Not entirely. </p><p>For content professionals working with structured authoring like DITA, XML schemas, and technical documentation, the principle still holds for how you write your prompts. </p><p>You&#8217;ll get better content by describing what you want in natural language and letting the model generate the substance, rather than forcing a rigid format from the start. 
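</p><p>A minimal sketch of that two-step pattern, using the support-ticket scenario above, might look like this (the wording is illustrative, not a tested template):</p><pre><code>Step 1: reason in natural language.
"Read the support tickets below. Describe the patterns you notice,
then recommend one change, explaining your reasoning as you go."

Step 2: convert to structure.
"Now convert that recommendation into this JSON shape:
{ "recommendation": "...", "reasoning": "...", "evidence": ["..."] }"</code></pre><p>The first prompt lets the model work through the problem in its own words; the second applies the format only after the substance exists.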
</p><p>The reasoning models are better at this than their predecessors, but the content still benefits from clear, natural-language instructions. </p><p><strong>Structure your </strong><em><strong>context</strong></em><strong>, not your commands.</strong></p><h2>From Prompt Engineering to Context Engineering</h2><p>And that phrase &#8212; <em>structure your context</em> &#8212; is where tags become more important, not less.</p><p>In September 2025, Anthropic published a piece on what they call &#8220;<a href="https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents">context engineering</a>.&#8221; Building with language models is becoming less about finding the right words for your prompts and more about curating the right <em>configuration of context</em>.</p><p>This is the full set of information the model sees at any given moment, which does include your prompt, but also tools, documents, conversation history, reference material, and system instructions.</p><p>This is where my own practice has evolved. I actually use <em>more</em> XML-style tags now than I did a year ago. Not fewer. </p><p>This is for two reasons.</p><p>First, I work primarily in Claude, and Anthropic still <a href="https://platform.claude.com/docs/en/build-with-claude/prompt-engineering/use-xml-tags">explicitly recommends XML tags</a> in their current documentation for Opus 4.6 and Sonnet 4.6. </p><p>They&#8217;re clear that there are no magic tag names &#8212; <code>&lt;instructions&gt;</code> doesn&#8217;t outperform <code>&lt;my_rules&gt;</code> &#8212; but XML as a delimiter system helps Claude parse complex prompts. That&#8217;s a model-specific advantage, not a universal rule.</p><div class="pullquote"><p>That&#8217;s really the move from prompt engineering to context engineering in practice. You&#8217;re no longer crafting a single message. 
You&#8217;re designing an information environment.</p></div><p>Second, most of what I&#8217;m putting into prompts these days isn&#8217;t instructions. It&#8217;s content. </p><p>Course materials, style guides, reference documents, background research. </p><p>When you&#8217;re loading a context window with thousands of tokens of source material, tags become boundaries between <em>what the AI should read</em> and <em>what it should do</em>. They&#8217;re separating content from instruction, not labeling instruction blocks.</p><p>That&#8217;s really the move from prompt engineering to context engineering in practice. You&#8217;re no longer crafting a single message. You&#8217;re designing an information environment. </p><p>And tags &#8212; whatever flavor you prefer &#8212; become the architecture of that environment.</p><h2>Takeaways for Writers and Content Professionals</h2><p>Here are three guidelines going forward.</p><ol><li><p><strong>Keep the categories, hold the brackets loosely.</strong> Task, Context, and Content remain the most research-supported way to organize what you give an AI. Whether you wrap them in XML, use markdown headers, or write clear paragraphs matters far less than whether you&#8217;ve actually specified all three. </p></li><li><p><strong>Use tags to structure your context, not just your prompts.</strong> As your AI workflows grow beyond single prompts, tags become architecture. They&#8217;re a coordination tool for humans and a parsing tool for the model. That value only increases as the information environment gets more complex.</p></li><li><p><strong>Let the model reason naturally, then apply structure.</strong> If your final output needs to follow a structured format, describe your intent in natural language first. Reasoning models handle this better than ever, but the content still benefits from natural-language instructions over rigid format constraints up front.</p></li></ol><p>The real lesson here isn&#8217;t about brackets or XML. 
It&#8217;s that we&#8217;ve moved past single-prompt optimization. </p><p>Context engineering means designing information environments, and the tools we use to organize those environments matter more now than they did when all we had was a chat box and a one-shot prompt.</p><p>If someone on your team is wrestling with whether tags still matter, share this episode. The answer is more interesting than a simple yes or no. </p><p>If you want to go deeper on building the kind of systematic prompt and context frameworks we talked about today, that's exactly what my course <em><a href="https://www.isophist.com/p/writing-with-machines">Writing with Machines</a></em> covers. It's designed for content professionals who want a repeatable process, not a collection of tips. </p><p>I&#8217;m Lance Cummings. Until next time &#8230; keep prompting &#8230; or engineering that context!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Paid subscribers to <em>Writing with Machines</em> get access as part of their subscription. 
</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Context Lab #11: Writing Genre Coach]]></title><description><![CDATA[When the prompt is the easy part]]></description><link>https://www.isophist.com/p/content-lab-11-writing-genre-coach</link><guid isPermaLink="false">https://www.isophist.com/p/content-lab-11-writing-genre-coach</guid><dc:creator><![CDATA[Lance Cummings]]></dc:creator><pubDate>Mon, 23 Feb 2026 14:26:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!yZkh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6ccac7e-e3b4-4c8a-bdb5-bfb0097a888c_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yZkh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6ccac7e-e3b4-4c8a-bdb5-bfb0097a888c_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yZkh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6ccac7e-e3b4-4c8a-bdb5-bfb0097a888c_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!yZkh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6ccac7e-e3b4-4c8a-bdb5-bfb0097a888c_1920x1080.png 848w, 
https://substackcdn.com/image/fetch/$s_!yZkh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6ccac7e-e3b4-4c8a-bdb5-bfb0097a888c_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!yZkh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6ccac7e-e3b4-4c8a-bdb5-bfb0097a888c_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yZkh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6ccac7e-e3b4-4c8a-bdb5-bfb0097a888c_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d6ccac7e-e3b4-4c8a-bdb5-bfb0097a888c_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:662135,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/188899445?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6ccac7e-e3b4-4c8a-bdb5-bfb0097a888c_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!yZkh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6ccac7e-e3b4-4c8a-bdb5-bfb0097a888c_1920x1080.png 424w, 
https://substackcdn.com/image/fetch/$s_!yZkh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6ccac7e-e3b4-4c8a-bdb5-bfb0097a888c_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!yZkh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6ccac7e-e3b4-4c8a-bdb5-bfb0097a888c_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!yZkh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6ccac7e-e3b4-4c8a-bdb5-bfb0097a888c_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>I&#8217;ve been calling this section &#8220;Prompt Lab&#8221; since I launched it, but I&#8217;m changing the name to <strong>Content Lab</strong>. </p><p>The reason is sitting right inside this post: what I&#8217;m exploring has quietly outgrown prompt mechanics. </p><p>It&#8217;s become about knowledge design, genre, communication systems &#8212; the whole environment in which a prompt lives. </p><p>The prompt is often the last thing I write now, not the first.</p><p>For content professionals and technical writers, that shift matters. We've always known that good content doesn't start with writing. It starts with architecture. Information types, audience analysis, structured authoring, content strategy. </p><p>Here is what I mean.</p><h2>The Meta Problem</h2><p>The students in my AI writing class have been struggling with something I didn&#8217;t anticipate. The assignment asks them to analyze their AI-assisted writing process and document their findings, which includes the prompts they built and what they learned from them. </p><p>Straightforward enough, or so I thought. But I ran into a consistent problem.</p><p>Students would revise their prompts. They&#8217;d revise their notes. But they wouldn&#8217;t revise how they were <em>communicating their findings to an outside audience.</em> </p><p>They were writing for themselves, not for a reader who needed to understand what they did and why it mattered.</p><p>When I dug into it, I realized I was asking them to do something genuinely difficult &#8212; not just create a prompt, but produce a professional document <em>about</em> the process of creating one. </p><p><strong>That&#8217;s a meta-cognitive task. And it requires knowing what genre you&#8217;re working in.</strong></p><p>After talking with students, I realized some had never been introduced to genre as a concept for professional writing. 
</p><p>English majors generally fared better, not because they&#8217;re stronger writers, but because they had a conceptual vocabulary for what I was asking. They could name the shape of the document before they built it.</p><p>You might think: &#8220;Well, why didn&#8217;t students just use AI to help them figure out the right genre?&#8221; </p><p>Because if you don&#8217;t know what genre you need (or what genre even <em>is</em> as a functional concept), you don&#8217;t know what to ask for. </p><p>Genre knowledge has to exist in your head before it can inform your collaboration with an AI. It&#8217;s not a skill you can offload.</p><p>So I built a Claude app to help.</p><h2>Building a Genre Coach Solution</h2><p>The prompt I&#8217;m sharing below is a writing coach that gives students exactly two specific revision actions to move their draft toward professional case study format. It identifies </p><ul><li><p>where description has replaced analysis, </p></li><li><p>where evidence is missing, and</p></li><li><p>where the reasoning behind AI choices is vague. </p></li></ul><p>Then it shows students a before-and-after example using their <em>own</em> words, so the feedback is immediate and concrete.</p><p>But I want to be clear: I didn&#8217;t write it in a vacuum. I drafted it inside my Claude project for this class, which contains my course materials, my genre presentation, my assignment descriptions, and my own running reflections on how the class has been going. </p><p>Claude wasn&#8217;t just following instructions. It was working from a situated context built around my course, my students, and my specific pedagogical problem.</p><p>That&#8217;s a different relationship to prompting than I had even a year ago. I&#8217;m not just writing better prompts. 
I&#8217;m building better environments, and the prompts emerge from those environments almost naturally.</p><div class="pullquote"><p>The work that happens before you open the chat window is where the real leverage is.</p></div><p>Your knowledge base has become as important as your prompting technique &#8212; maybe more so. A mediocre prompt inside a rich, well-structured project context will often outperform a brilliant prompt written cold. </p><p>The work that happens before you open the chat window is where the real leverage is.</p><p>This has gotten me thinking &#8230; if genre knowledge is a prerequisite for effective AI collaboration, what other conceptual frameworks are quietly limiting what we can do with these tools? </p><p>For content professionals, I'd argue the answer is already in our toolkit &#8212; information typing, structured authoring, audience modeling. </p><p>The field has been building toward this kind of AI-ready thinking for decades without knowing it. </p><p>I'd love to hear what you're seeing in your own work. Where is your content expertise giving you an edge, and where are you still running into walls?</p><h2>System Prompt: Writing Genre Coach</h2><p>I'm running this as a Claude app inside my AI writing course project, which means it already has context about my assignments, my students, and what a finished case study should look like. </p><p>Students paste in their draft and get two specific, actionable revision notes back &#8212; no vague feedback, no overwhelming list of changes. </p><p>If you're teaching writing or working with teams who need to document processes and decisions, you could adapt this easily.</p><p>Just swap out the case study definition for whatever professional genre your context requires, and feed it any relevant background materials you have. 
The tighter your project context, the more targeted the feedback gets.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>The full prompt is below for paid subscribers. And if you're a paid subscriber, you also have beta access to my online course, <a href="http://isophist.com/p/writing-with-machines">Writing with AI</a> &#8212; where this kind of structured, genre-aware prompting is exactly what we're building toward.</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>
      <p>
          <a href="https://www.isophist.com/p/content-lab-11-writing-genre-coach">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Don't Just Prompt Engineer. Start Building Taxonomies.]]></title><description><![CDATA[How I'm using taxonomies and machine rhetorics to build AI content operations in the classroom]]></description><link>https://www.isophist.com/p/stop-prompt-engineering-start-building</link><guid isPermaLink="false">https://www.isophist.com/p/stop-prompt-engineering-start-building</guid><dc:creator><![CDATA[Lance Cummings]]></dc:creator><pubDate>Fri, 13 Feb 2026 14:23:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!rdLU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03496096-576d-4b00-8702-623b382ccc5b_1920x1088.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rdLU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03496096-576d-4b00-8702-623b382ccc5b_1920x1088.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rdLU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03496096-576d-4b00-8702-623b382ccc5b_1920x1088.png 424w, https://substackcdn.com/image/fetch/$s_!rdLU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03496096-576d-4b00-8702-623b382ccc5b_1920x1088.png 848w, https://substackcdn.com/image/fetch/$s_!rdLU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03496096-576d-4b00-8702-623b382ccc5b_1920x1088.png 1272w, 
https://substackcdn.com/image/fetch/$s_!rdLU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03496096-576d-4b00-8702-623b382ccc5b_1920x1088.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rdLU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03496096-576d-4b00-8702-623b382ccc5b_1920x1088.png" width="1920" height="1088" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/03496096-576d-4b00-8702-623b382ccc5b_1920x1088.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1088,&quot;width&quot;:1920,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2892596,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/187407609?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9716144-d389-4031-9ca4-aba82b9d52f8_1920x1088.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rdLU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03496096-576d-4b00-8702-623b382ccc5b_1920x1088.png 424w, https://substackcdn.com/image/fetch/$s_!rdLU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03496096-576d-4b00-8702-623b382ccc5b_1920x1088.png 848w, 
https://substackcdn.com/image/fetch/$s_!rdLU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03496096-576d-4b00-8702-623b382ccc5b_1920x1088.png 1272w, https://substackcdn.com/image/fetch/$s_!rdLU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03496096-576d-4b00-8702-623b382ccc5b_1920x1088.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image generated by <a 
href="https://try.gamma.app/ka5vvp4ov8sj">Gamma.ai</a></figcaption></figure></div><p>Here&#8217;s a test case that reveals why casual AI use too often fails: disaster communication.</p><p>Imagine building a chatbot to help communities understand food scarcity risks during emergencies. You feed it information about food security, emergency preparedness, and community resources.</p><p>People ask questions. AI generates responses.</p><p>When you test it with real-life scenarios, it might seem to work well enough &#8230; but then breaks down under scrutiny:</p><p>&#8220;Should I eat meat that&#8217;s been in a refrigerator for one day without power?&#8221;</p><p>&#8220;How do I know if my community is at high risk?&#8221;</p><p>&#8220;What should I do if I can&#8217;t afford to prepare?&#8221;</p><p>The AI may generate perfectly grammatical sentences and sound authoritative. It might even cite statistics. But the responses can be generic, often irrelevant, and occasionally dangerous&#8212;exactly what you can&#8217;t afford in crisis communication.</p><p>This isn&#8217;t a hypothetical problem. I&#8217;m working on this with my students this semester. But the reason it matters for content professionals and writers generally has nothing to do with disaster communication specifically.</p><p>It matters because when AI absolutely has to be reliable, the same fundamental limitations appear every time. And too many people (and maybe organizations) are skipping the systematic work needed to address them.</p><p>Combining a machine rhetorics framework with user research is key to improving AI performance in <strong>real-life situations</strong>.</p><div class="pullquote"><p>A <strong>machine rhetorics framework</strong> becomes more valuable, not less, as AI capabilities improve. 
Because the stakes get higher when people trust AI more.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.isophist.com/subscribe?"><span>Subscribe now</span></a></p><h3>&#8220;Reasoning Models&#8221; Still Don&#8217;t Reason</h3><p>AI doesn&#8217;t reason. It pattern-matches.</p><p>You might be thinking: &#8220;But what about the new reasoning models? Aren&#8217;t they moving beyond pattern-matching?&#8221;</p><p>It&#8217;s true that newer models show impressive capabilities: they can break down complex problems, check their own work, and produce more consistent outputs. Some people argue this means we&#8217;re past the need for more systematic approaches to AI collaboration.</p><p>I&#8217;m skeptical. </p><p>Even with extended reasoning capabilities, these systems still operate on statistical patterns, not genuine causal understanding. </p><p>They&#8217;re better at <em>appearing</em> to reason, which makes their failures less obvious but not less consequential.</p><p>Even if reasoning models reduce some failure rates, they don&#8217;t eliminate the need for structured approaches when accuracy matters. </p><p>A more sophisticated pattern-matcher still benefits from <strong>well-structured prompts, organized content, and explicit knowledge mapping</strong> &#8230; just like a more powerful search engine still needs well-structured information architecture.</p><p>The <strong>machine rhetorics framework</strong> becomes more valuable, not less, as AI capabilities improve. Because the stakes get higher when people trust AI more.</p><p><strong>AI doesn&#8217;t reason. 
It pattern-matches.</strong></p><p>When you ask ChatGPT about food scarcity during disasters, it&#8217;s recognizing patterns from millions of texts where those words appeared together. It&#8217;s predicting what words typically follow other words in similar contexts.</p><p>What it&#8217;s not doing?</p><ul><li><p>Understanding cause-and-effect relationships</p></li><li><p>Making logical connections between concepts</p></li><li><p>Recognizing when general advice doesn&#8217;t apply to specific situations</p></li><li><p>Knowing why certain information matters more than other information in emergency contexts.</p></li></ul><p>This is why AI often produces responses that sound right but fall apart under scrutiny. The patterns are there. The reasoning isn&#8217;t.</p><p>From a practical standpoint, you can&#8217;t just &#8220;talk to AI&#8221; and expect sophisticated results, any more than you could walk into a library and expect books to organize themselves for your research project. </p><p>AI needs architecture. It needs structure. It needs humans to provide the logical scaffolding that helps it move from pattern recognition toward contextually appropriate responses.</p><p>This is where most casual AI use breaks down, and it&#8217;s why content professionals and other writers have an advantage most people don&#8217;t realize yet.</p><h2>The Three-Part Framework</h2><p>The disaster communication problem reveals three distinct challenges that too many people treat as one problem. Understanding machine rhetorics helps break down these problems and build solutions that are human-based.</p><ol><li><p>AI doesn&#8217;t know what you actually need from a response (versus what it can plausibly generate).</p></li><li><p>AI can&#8217;t distinguish what&#8217;s relevant when everything&#8217;s mixed together.</p></li><li><p>AI can&#8217;t reason about logical relationships&#8212;it can only recognize patterns.</p></li></ol><p>Each limitation requires a different solution. 
Each solution builds on the previous one. And you can&#8217;t skip steps.</p><p>Machine rhetorics means applying rhetorical principles to AI systems the same way content professionals apply them to any information experience: </p><ul><li><p>start with users, </p></li><li><p>understand their needs through research, then </p></li><li><p>structure systems around those insights. </p></li></ul><p>Understanding machine rhetorics helps break down these problems and build human-centered solutions through strategic prompt design, structured content development, and rhetorical knowledge mapping.</p><p>This is what I&#8217;m exploring with students this spring, but the framework applies wherever content professionals need AI to be genuinely reliable. </p><p>Let me show you what each component does using disaster communication as the test case.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.isophist.com/subscribe?"><span>Subscribe now</span></a></p><h3>1. Strategic Prompt Frameworks</h3><p>The instinct is to build a chatbot by dumping information into it and hoping for the best. </p><div class="pullquote"><p>Designing information experiences is about whether information is findable, understandable, and actionable for the right people, in the right place, at the right time.</p></div><p>Someone asks, &#8220;What should I eat if my power&#8217;s been out for a day?&#8221; and the AI generates a response pulled from its training data.</p><p>The problem? That response might be generically accurate but miss everything that matters about the person asking.</p><p>This is a usability problem and needs usability frameworks to solve. When content professionals design information experiences, it&#8217;s more than just getting users to click the right buttons. 
</p><p>Designing information experiences is about whether information is findable, understandable, and actionable for the right people, in the right place, at the right time. </p><p>Ultimately, an AI chatbot is an information interface, and it can fail for the same reasons any interface fails. It wasn&#8217;t designed around how real users actually encounter and act on information.</p><p>That&#8217;s why we started with user research in our disaster communication project. Last semester, our students conducted interviews, usability tests, and card-sorting exercises with the target audience for a disaster food security chatbot. </p><p>This semester, a new group of students is using those findings to develop test user questions and structured content for the AI system. The usability insights from that research are what will shape how we build the prompts.</p><p>Generic prompts produce generic answers. But when you structure the system around what usability research has revealed about real users, the AI has something meaningful to work with.</p><p>Here&#8217;s the difference. A basic system instruction might say:</p><p><em>You are a helpful assistant that answers questions about food safety during disasters.</em></p><p>A system prompt grounded in usability research looks more like this:</p><p><strong>Task:</strong> Answer questions about food safety during power outages, prioritizing actionable steps the user can take right now with what they have available.</p><p><strong>Here is some context from research that might be added to the system prompt:</strong></p><ul><li><p><strong>Users are often college students with limited storage, limited transportation, and tight budgets.</strong> They need to know what to do with the food they already have, not what they should have bought in advance.</p></li><li><p><strong>Users skip overly official language. </strong>Usability testing showed they abandoned documents that read like government pamphlets. 
They responded to conversational, direct guidance.</p></li><li><p>A common misconception from card-sorting data: <strong>users grouped &#8220;frozen food&#8221; and &#8220;refrigerated food&#8221; together, not realizing they follow different safety timelines during outages.</strong> Responses should proactively clarify this distinction when relevant.</p></li></ul><p>Now when someone asks, &#8220;Is the chicken in my fridge still safe?&#8221; the AI can respond with specific guidance calibrated to the user&#8217;s actual situation &#8212; not a generic food safety lecture. </p><p>And the reason it can do that isn&#8217;t because someone wrote a clever prompt. It&#8217;s because usability research identified what users need, how they process information, and where their mental models diverge from expert knowledge.</p><p>The difference isn&#8217;t prompt length. It&#8217;s that the prompt is structured around a usability framework content professionals already work with: </p><ul><li><p>audience analysis drawn from real research, </p></li><li><p>purpose defined by actual user needs, and </p></li><li><p>content organized to match how people actually encounter problems rather than how experts categorize them.</p></li></ul><p>This same rhetorical approach needs to be applied when building the information and content AI draws on in its responses.</p><h3>2. Structured Content Development</h3><p>Even well-structured prompts fall short when you feed AI disorganized information or rely on its black-boxed training.</p><p>You might have:</p><ul><li><p>Research about food scarcity</p></li><li><p>Statistics about emergency food systems, and</p></li><li><p>Interview data from community organizations. 
</p></li></ul><p>But if it&#8217;s all jumbled together, AI can&#8217;t tell what matters when, or how different pieces of information relate to each other, or what readers need to know before they can understand something else.</p><p>The solution is organizing materials using what content professionals call information types:</p><p><strong>Reference information</strong>: Basic facts readers need to understand the situation. What counts as food insecurity? What do terms like &#8220;food desert&#8221; actually mean? What are current statistics?</p><p><strong>Concept information</strong>: Frameworks for understanding the problem. How do sociologists think about food systems? What models help explain why communities experience food scarcity? How do emergency management professionals analyze risk?</p><p><strong>Principle information</strong>: Cause-and-effect relationships that explain how things work. Why do food supply chains fail during disasters? What factors determine community resilience? How does economic stress affect food access?</p><p><strong>Process information</strong>: How things unfold over time. What&#8217;s the typical progression of food scarcity during emergencies? How do communities respond? What happens when intervention comes too late?</p><p><strong>Task information</strong>: Specific actions people can take. How should individuals prepare? What should community organizations do first? When should people seek additional resources?</p><p>When you organize information this way, two things happen. </p><p>First, you understand your own materials better. 
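The five information types above can be put to work with even a lightweight tagging scheme. Here is a minimal Python sketch, assuming nothing about the project's actual tooling; the type names come from the list above, and the chunk texts are invented examples:

```python
# Hypothetical sketch: organizing source material by information type
# before handing it to an AI system. Type names follow the five
# categories above; the chunk texts are invented examples.

INFORMATION_TYPES = {"reference", "concept", "principle", "process", "task"}

chunks = [
    {"type": "reference",
     "text": "A 'food desert' is an area with limited access to affordable, nutritious food."},
    {"type": "principle",
     "text": "Economic stress reduces food access by shrinking household buying power."},
    {"type": "task",
     "text": "Discard refrigerated meat after 4+ hours without power."},
]

def select_chunks(info_type, chunks):
    """Return only the chunks of one information type, so a
    'what should I do?' question can draw on task information
    rather than background statistics."""
    assert info_type in INFORMATION_TYPES
    return [c["text"] for c in chunks if c["type"] == info_type]

print(select_chunks("task", chunks))
```

A retrieval step built this way also makes gaps visible: an empty result for a type means that kind of information is missing entirely.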
You can see gaps in your knowledge and recognize which types of information you&#8217;re missing entirely.</p><p>Second, when you structure this organized content into AI interactions, responses become more targeted, more appropriate to specific situations, and actually useful for different kinds of questions.</p><p>This is the second component: organizing information systematically so both humans and AI can understand how different pieces relate to each other and serve different purposes.</p><p>That&#8217;s what my students will be working on soon.</p><p><em>You can get a sneak peek at the taxonomy we are using in my Content Lab.</em></p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;771d0db6-60f6-4ce2-bc32-3a5b259e8da6&quot;,&quot;caption&quot;:&quot;This taxonomy emerged from Fall 2025 user research conducted by students in our disaster communication project, and illustrates how I&#8217;m using taxonomies to organize and structure AI collaboration. Working with the New Hanover Disaster Coalition, students interviewed UNCW students about food security needs during disasters, conducted usability testing on&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;User-Backed Taxonomy Handout&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:129389476,&quot;name&quot;:&quot;Lance Cummings&quot;,&quot;bio&quot;:&quot;AI Content Specialist &amp; Professor | Exploring how to leverage structured content with rhetorical strategies to improve the performance of generative AI technologies&nbsp;both in the workplace and the 
classroom.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd589e8cc-4070-4e52-a3e0-82f218982383_3751x5626.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-02-13T13:30:54.268Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!V5tn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F112aee67-5b59-416a-a5c1-b7fd8e959257_1920x1080.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.isophist.com/p/user-backed-taxonomy-handout&quot;,&quot;section_name&quot;:&quot;Content Lab&quot;,&quot;video_upload_id&quot;:null,&quot;id&quot;:187693900,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:1639524,&quot;publication_name&quot;:&quot;Cyborgs Writing&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!cnci!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd41b2ae-512f-4bbc-8ca0-1dc31a7a8641_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><h3>3. Rhetorical Knowledge Mapping</h3><p>Strategic prompts and structured content get you far, but they still leave a gap: AI can&#8217;t reason about relationships between concepts unless you map them explicitly.</p><p>When someone asks &#8220;Is the chicken in my fridge still safe?&#8221;, the answer depends on understanding how factors connect: How long has the power been out? Was the chicken frozen or refrigerated? Has the fridge stayed closed? What&#8217;s the safe timeline for this food type?</p><p>These aren&#8217;t isolated facts. 
They&#8217;re part of a reasoning structure where one decision point leads to another. AI can&#8217;t extract these logical relationships reliably from unstructured text alone.</p><p>This is where knowledge graphs come in&#8212;not as a technical database concept, but as a systematic way to map the reasoning structure that underlies domain expertise.</p><p>Our <a href="https://www.isophist.com/p/user-backed-taxonomy-handout">project taxonomy</a> (developed from Fall 2025 user research) organizes disaster food security content into domains and topics: Food &amp; Hydration, Water Safety, Power &amp; Utilities, each with specific subtopics like &#8220;Refrigerator/freezer safety during outages.&#8221;</p><p>A taxonomy tells you <em>what</em> exists in the domain. A knowledge graph tells you <em>how those things relate</em>.</p><p>You identify the discrete concepts (refrigerated food, power outage, time thresholds, student constraints), then map the relationships between them: what&#8217;s safe for how long under which conditions, what constraints affect which options, what triggers what timeline.</p><p>The graph also captures context from user research: for example, students confuse frozen and refrigerated timelines, trust action-oriented guidance, and face transportation and storage constraints. These rhetorical insights shape how relationships get structured.</p><p>With this mapped, AI can trace a path from the user question &#8594; refrigerated food entity &#8594; time without power &#8594; safety threshold &#8594; a more specific response about food.</p><p>Building a knowledge graph is about choosing which distinctions matter for this audience, which relationships drive decisions, which concepts need disambiguation, and which context shapes interpretation.</p><p>It&#8217;s rhetorical.</p><p>Content professionals already make these decisions when organizing information. 
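The path traced above (user question to refrigerated food to time without power to safety threshold to response) can be sketched as a tiny in-memory graph. This is an illustration only, not the project's actual Neo4j model; every node and relationship name here is hypothetical:

```python
# Minimal sketch of a knowledge graph as adjacency lists: nodes are
# concepts, edges are named relationships. All names are hypothetical
# illustrations of the reasoning path described in the text.

graph = {
    "user_question": [("mentions", "refrigerated_food")],
    "refrigerated_food": [("governed_by", "time_without_power")],
    "time_without_power": [("compared_to", "safety_threshold")],
    "safety_threshold": [("determines", "response")],
}

def trace(start, goal):
    """Follow relationships from one concept to another, returning the
    reasoning path a chatbot could make explicit in its answer."""
    path, node = [start], start
    while node != goal:
        relation, node = graph[node][0]  # sketch: follow the first edge
        path.append(node)
    return path

print(trace("user_question", "response"))
# -> ['user_question', 'refrigerated_food', 'time_without_power',
#     'safety_threshold', 'response']
```

In a real system each edge would carry the research-backed context described above, such as the frozen-versus-refrigerated distinction, so the traversal shapes not just what the answer says but how it says it.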
The knowledge graph makes those decisions explicit and machine-readable so AI systems can reason with them.</p><h2>How They Build On Each Other</h2><p>You can&#8217;t skip steps in this progression, and disaster communication shows why with unusual clarity.</p><p><strong>Without strategic prompt frameworks</strong>, you&#8217;re hoping AI will guess what constitutes appropriate crisis communication. Sometimes it guesses right. Maybe even most of the time. But what if it doesn&#8217;t? The consequences matter.</p><p><strong>Without structured content</strong>, even well-framed prompts produce generic responses because AI can&#8217;t distinguish what&#8217;s relevant when facts, theories, procedures, and definitions are jumbled together. In crisis communication, generic responses can cause harm.</p><p><strong>Without knowledge maps</strong>, AI can&#8217;t understand logical relationships, cause-and-effect connections, or contextual constraints that determine when information applies versus when it doesn&#8217;t. It will confidently generate plausible-sounding advice that violates the actual constraints governing emergency response.</p><p>But when you address all three challenges systematically, something shifts. You&#8217;re not just &#8220;using AI.&#8221; You&#8217;re designing information systems that help AI perform better while developing your own analytical capabilities.</p><p>The disaster communication scenario makes failures visible because stakes are high. But the same three limitations show up everywhere content professionals need AI to be reliable: </p><ul><li><p>technical documentation where incorrect instructions cause problems,</p></li><li><p> compliance content where errors create liability, and</p></li><li><p>strategic communication where off-brand messaging damages reputation.</p></li></ul><p>The framework is the same. 
Only the domain changes.</p><div class="pullquote"><p>Can you design systematic approaches to AI collaboration that leverage your rhetorical expertise while developing capabilities AI doesn&#8217;t possess? That&#8217;s the real question.</p></div><h2>What This Means for Content Professionals</h2><p>True AI collaboration develops systematic approaches to these three distinct challenges&#8212;strategic framing, content organization, and knowledge architecture.</p><p>That&#8217;s why machine rhetorics is more than just prompt engineering.</p><p>It is a way of thinking that focuses on audience and context beyond the interface or immediate chat. It&#8217;s a way of thinking that most content professionals already practice.</p><p>This is why <a href="https://www.isophist.com/p/why-many-writers-cant-map-their-workflows?r=2519k4">the workflow mapping advantage</a> I wrote about last week matters. You can see how your work actually gets done. </p><p>Can you design systematic approaches to AI collaboration that leverage your rhetorical expertise while developing capabilities AI doesn&#8217;t possess? That&#8217;s the real question.</p><div><hr></div><p><em>The <a href="https://www.isophist.com/p/writing-with-machines?r=2519k4">Writing with Machines</a> course teaches this framework&#8212;not just the three components, but how to apply them across different professional contexts where AI needs to be reliable rather than just plausible.</em></p><p><em>I&#8217;m testing it this spring in one of the most demanding contexts possible: crisis communication where generic responses can cause real harm. 
But the framework works anywhere content professionals need systematic AI integration&#8212;documentation, strategic communication, research synthesis, content operations.</em></p><p><em><strong>If you&#8217;re interested in learning to apply this in your work, the <a href="https://www.isophist.com/p/writing-with-machines">beta course</a> is opening up next week for paid subscribers!</strong></em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.isophist.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[User-Backed Taxonomy Handout]]></title><description><![CDATA[An example of how I'm using taxonomies to organize AI collaboration]]></description><link>https://www.isophist.com/p/user-backed-taxonomy-handout</link><guid isPermaLink="false">https://www.isophist.com/p/user-backed-taxonomy-handout</guid><dc:creator><![CDATA[Lance Cummings]]></dc:creator><pubDate>Fri, 13 Feb 2026 13:30:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!V5tn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F112aee67-5b59-416a-a5c1-b7fd8e959257_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!V5tn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F112aee67-5b59-416a-a5c1-b7fd8e959257_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!V5tn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F112aee67-5b59-416a-a5c1-b7fd8e959257_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!V5tn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F112aee67-5b59-416a-a5c1-b7fd8e959257_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!V5tn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F112aee67-5b59-416a-a5c1-b7fd8e959257_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!V5tn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F112aee67-5b59-416a-a5c1-b7fd8e959257_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!V5tn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F112aee67-5b59-416a-a5c1-b7fd8e959257_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/112aee67-5b59-416a-a5c1-b7fd8e959257_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:664531,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/187693900?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F112aee67-5b59-416a-a5c1-b7fd8e959257_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!V5tn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F112aee67-5b59-416a-a5c1-b7fd8e959257_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!V5tn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F112aee67-5b59-416a-a5c1-b7fd8e959257_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!V5tn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F112aee67-5b59-416a-a5c1-b7fd8e959257_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!V5tn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F112aee67-5b59-416a-a5c1-b7fd8e959257_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" 
stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>This taxonomy emerged from Fall 2025 user research conducted by students in our disaster communication project, and illustrates how I&#8217;m using taxonomies to organize and structure AI collaboration. Working with the New Hanover Disaster Coalition, students interviewed UNCW students about food security needs during disasters, conducted usability testing on existing emergency documents, and ran card-sorting exercises to understand how people naturally categorize disaster preparation information. </em></p><p><em>The taxonomy you see below builds directly on student work but has been adapted and refined using an MCP data modeling tool and AI to demonstrate how research-based categorization translates into structured content organization&#8212;which then becomes the foundation for knowledge graphs that AI systems can use reliably.</em></p><p><strong>Interested in exploring this in your own work? Check out my course, <a href="https://www.isophist.com/p/writing-with-machines?r=2519k4">Writing with Machines</a>. Now available for paid subscribers.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.isophist.com/subscribe?"><span>Subscribe now</span></a></p><p>This taxonomy synthesizes work from Fall 2025, when students in ENG 404 (Advanced Professional Writing) and CSC 302 (Intro to AI) collaborated on the foundation for a disaster relief chatbot. 
ENG students conducted user research with UNCW students about food security during disasters, created taxonomies based on that research, and prepared source documents. CSC students built initial knowledge graphs using Neo4j tools.</p><p>This semester, ENG 326 and CSC 322 continue that work by developing functional chatbots. This taxonomy organizes the project so teams can divide the work systematically.</p><h2><strong>What is a taxonomy and why does it matter?</strong></h2><p>A taxonomy is a classification system that organizes information into categories and subcategories. You encounter taxonomies constantly&#8212;the folder structure on your computer, the way a grocery store organizes aisles, the categories on a news website.</p><p>For AI systems, taxonomies matter because they determine how information gets structured, stored, and retrieved. A chatbot answering questions about disaster preparedness needs to &#8220;know&#8221; that generator safety relates to power outages, which relates to food spoilage, which relates to health risks. Without an organizing structure, the chatbot has no map for navigating these connections.</p><p>The taxonomy below divides disaster preparedness into domains and specific topics. 
Each team will own one topic, becoming the experts responsible for gathering authoritative information, structuring it for AI use, and building a chatbot that handles questions in that area.</p><h3><strong>Taxonomy: Broad Disaster Preparedness Coverage</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qYdp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd937ff0d-4d13-468e-977b-3e84058075e8_2048x1810.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qYdp!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd937ff0d-4d13-468e-977b-3e84058075e8_2048x1810.png 424w, https://substackcdn.com/image/fetch/$s_!qYdp!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd937ff0d-4d13-468e-977b-3e84058075e8_2048x1810.png 848w, https://substackcdn.com/image/fetch/$s_!qYdp!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd937ff0d-4d13-468e-977b-3e84058075e8_2048x1810.png 1272w, https://substackcdn.com/image/fetch/$s_!qYdp!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd937ff0d-4d13-468e-977b-3e84058075e8_2048x1810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qYdp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd937ff0d-4d13-468e-977b-3e84058075e8_2048x1810.png" width="1456" height="1287" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d937ff0d-4d13-468e-977b-3e84058075e8_2048x1810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1287,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:429270,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/187693900?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd937ff0d-4d13-468e-977b-3e84058075e8_2048x1810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qYdp!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd937ff0d-4d13-468e-977b-3e84058075e8_2048x1810.png 424w, https://substackcdn.com/image/fetch/$s_!qYdp!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd937ff0d-4d13-468e-977b-3e84058075e8_2048x1810.png 848w, https://substackcdn.com/image/fetch/$s_!qYdp!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd937ff0d-4d13-468e-977b-3e84058075e8_2048x1810.png 1272w, https://substackcdn.com/image/fetch/$s_!qYdp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd937ff0d-4d13-468e-977b-3e84058075e8_2048x1810.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2><strong>How teams will use this taxonomy</strong></h2><p>Each team receives one topic from the taxonomy. Over the semester, teams will:</p><p><strong>Identify and collect source material.</strong> Find authoritative sources (CDC, FEMA, Red Cross, etc.) that address your topic. The Fall 2025 classes prepared some text files already&#8212;check there first, then supplement as needed.</p><p><strong>Structure the content.</strong> ENG students will extract key information and organize it into a consistent format&#8212;concepts, tasks, rules, warnings&#8212;that works well for AI retrieval. 
This structured content feeds both your team&#8217;s chatbot and a shared repository.</p><p><strong>Build and test chatbots.</strong> CSC students will build two versions: one without a knowledge graph (baseline) and one with a knowledge graph built from the structured content. Teams will compare performance to see how structured knowledge affects accuracy.</p><p><strong>Contribute to the larger project.</strong> Your structured content joins a shared repository. Even though each team builds a focused chatbot, the combined work creates a foundation for a more comprehensive system&#8212;whether integrated this semester or built upon in future classes.</p><h3><strong>Authoritative sources by domain</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tpr-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f654785-0632-4b30-9931-eec9b1bca42f_948x774.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!tpr-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f654785-0632-4b30-9931-eec9b1bca42f_948x774.png 424w, https://substackcdn.com/image/fetch/$s_!tpr-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f654785-0632-4b30-9931-eec9b1bca42f_948x774.png 848w, https://substackcdn.com/image/fetch/$s_!tpr-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f654785-0632-4b30-9931-eec9b1bca42f_948x774.png 1272w, 
https://substackcdn.com/image/fetch/$s_!tpr-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f654785-0632-4b30-9931-eec9b1bca42f_948x774.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!tpr-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f654785-0632-4b30-9931-eec9b1bca42f_948x774.png" width="948" height="774" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2f654785-0632-4b30-9931-eec9b1bca42f_948x774.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:774,&quot;width&quot;:948,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:81065,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/187693900?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f654785-0632-4b30-9931-eec9b1bca42f_948x774.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!tpr-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f654785-0632-4b30-9931-eec9b1bca42f_948x774.png 424w, https://substackcdn.com/image/fetch/$s_!tpr-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f654785-0632-4b30-9931-eec9b1bca42f_948x774.png 848w, https://substackcdn.com/image/fetch/$s_!tpr-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f654785-0632-4b30-9931-eec9b1bca42f_948x774.png 1272w, 
https://substackcdn.com/image/fetch/$s_!tpr-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f654785-0632-4b30-9931-eec9b1bca42f_948x774.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div>]]></content:encoded></item><item><title><![CDATA[Don't Miss Your Beta Access]]></title><description><![CDATA[Writing with Machines Course]]></description><link>https://www.isophist.com/p/dont-miss-your-beta-access</link><guid isPermaLink="false">https://www.isophist.com/p/dont-miss-your-beta-access</guid><dc:creator><![CDATA[Lance 
Cummings]]></dc:creator><pubDate>Tue, 03 Feb 2026 13:23:16 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!baUF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ad2a398-a72e-4b4a-84ff-da036cd03b9a_563x317.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!baUF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ad2a398-a72e-4b4a-84ff-da036cd03b9a_563x317.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!baUF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ad2a398-a72e-4b4a-84ff-da036cd03b9a_563x317.png 424w, https://substackcdn.com/image/fetch/$s_!baUF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ad2a398-a72e-4b4a-84ff-da036cd03b9a_563x317.png 848w, https://substackcdn.com/image/fetch/$s_!baUF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ad2a398-a72e-4b4a-84ff-da036cd03b9a_563x317.png 1272w, https://substackcdn.com/image/fetch/$s_!baUF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ad2a398-a72e-4b4a-84ff-da036cd03b9a_563x317.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!baUF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ad2a398-a72e-4b4a-84ff-da036cd03b9a_563x317.png" width="715" height="402.58436944937836" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1ad2a398-a72e-4b4a-84ff-da036cd03b9a_563x317.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:317,&quot;width&quot;:563,&quot;resizeWidth&quot;:715,&quot;bytes&quot;:121995,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/186639809?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ad2a398-a72e-4b4a-84ff-da036cd03b9a_563x317.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!baUF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ad2a398-a72e-4b4a-84ff-da036cd03b9a_563x317.png 424w, https://substackcdn.com/image/fetch/$s_!baUF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ad2a398-a72e-4b4a-84ff-da036cd03b9a_563x317.png 848w, https://substackcdn.com/image/fetch/$s_!baUF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ad2a398-a72e-4b4a-84ff-da036cd03b9a_563x317.png 1272w, https://substackcdn.com/image/fetch/$s_!baUF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ad2a398-a72e-4b4a-84ff-da036cd03b9a_563x317.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>You&#8217;re getting this because you&#8217;re a paid subscriber, and I want to make sure you didn&#8217;t miss this.</p><p>I just released a <a href="https://www.isophist.com/p/why-many-writers-cant-map-their-workflows">Deep Reading episode </a>on the transparent technology myth which explains why most people can&#8217;t see their own workflows, and why that matters for AI integration. </p><p>If you haven&#8217;t listened yet, it&#8217;s the foundation for what I&#8217;ve building.</p><p><strong><a href="https://www.isophist.com/p/writing-with-machines">Writing&#8230;</a></strong></p>
      <p>
          <a href="https://www.isophist.com/p/dont-miss-your-beta-access">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Why Many Writers Can't Map Their Workflows (and why that matters for AI)]]></title><description><![CDATA[Deep Research, Episode 7]]></description><link>https://www.isophist.com/p/why-many-writers-cant-map-their-workflows</link><guid isPermaLink="false">https://www.isophist.com/p/why-many-writers-cant-map-their-workflows</guid><dc:creator><![CDATA[Lance Cummings]]></dc:creator><pubDate>Fri, 30 Jan 2026 15:02:10 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/186083736/13fd9b51eeede0410e3b33b05db2e19f.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Here&#8217;s something that doesn&#8217;t make sense.</p><p>You work with engineers every day. Smart people. They build complex systems. They understand workflows, pipelines, dependencies.</p><p>But ask them to map how they actually write documentation, and most of them can&#8217;t do it.</p><p>They&#8217;ll describe the end result. They&#8217;ll tell you what the doc should contain.</p><p>But the actual workflow? How information moves from subject matter expert interview to final publication? The tools, the handoffs, the format conversions, where things get stuck?</p><p>Blank stare.</p><p>Meanwhile, you can map that workflow in your sleep.</p><p>You know exactly where the bottleneck is&#8212;usually in review cycles, right? Or maybe it&#8217;s getting engineers to actually respond to edit queries. Or it&#8217;s that one legacy system that doesn&#8217;t integrate with anything.</p><p>You see the system because your job depends on seeing the system.</p><p>But here&#8217;s what I realized recently: Most people were literally taught NOT to see what you see.</p><p>And that&#8217;s not a small thing. 
That&#8217;s the difference between people who are prepared for AI integration and people who are scrambling.</p><p>I&#8217;m Lance Cummings, and you&#8217;re listening to Deep Reading, where we look at research that changes how we think about writing and AI.</p><p>Today I want to show you a piece of research from 1996 that explains why you&#8217;re positioned to lead AI integration in your organization&#8212;even though nobody&#8217;s probably told you that yet.</p><p>And why the engineers who are supposed to be &#8220;tech people&#8221; are actually starting from behind.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.isophist.com/subscribe?"><span>Subscribe now</span></a></p><h2>The Invisible Tool Myth</h2><p>In 1996, a researcher named Christina Haas published a study called <a href="https://www.taylorfrancis.com/books/edit/10.4324/9780203811238/writing-technology-christina-haas">&#8220;Writing Technology: Studies on the Materiality of Literacy.&#8221;</a></p><p>She was investigating something she called the transparent technology myth.</p><p>Here&#8217;s what she found: Most people are taught to treat writing tools as neutral instruments that don&#8217;t change the writing itself.</p><p>The tool is just a conduit. The thinking happens inside your head, and the tool just captures it.</p><p>This is how most of us were taught. &#8220;It doesn&#8217;t matter if you write by hand or on a computer&#8212;the writing is the same.&#8221;</p><p>Except that&#8217;s not true at all.</p><p>Haas showed that tools fundamentally shape not just the final product, but the process of composing itself.</p><p>Writing in Microsoft Word feels different than writing in MadCap Flare. 
Not because one is better, but because they organize information differently, which changes how you think about structure.</p><p>Managing documentation in Confluence creates different workflows than managing it in SharePoint.</p><p>Authoring in DITA with Oxygen XML forces you to think modularly in ways that a traditional word processor doesn&#8217;t.</p><p>The tool isn&#8217;t transparent. It&#8217;s active. It shapes the work.</p><p>But if you&#8217;ve been taught the transparent technology myth&#8212;that tools don&#8217;t matter, only ideas matter&#8212;then you literally can&#8217;t see how tools shape your process.</p><h2>In the Classroom</h2><p>I saw this play out exactly as the research predicts in a classroom last week.</p><p>I asked twenty students&#8212;computer engineers, English majors, cybersecurity students&#8212;to map their writing workflows.</p><p>The engineers were the most striking. These are people who live in version control systems. They can map a deployment pipeline in their sleep.</p><p>But their writing workflow? &#8220;I just... write it until it&#8217;s done.&#8221;</p><p>The English majors could describe intellectual moves&#8212;brainstorming, researching, drafting&#8212;because that&#8217;s what they&#8217;d been taught to name.</p><p>But the operational reality? The actual tools, formats, handoffs, file management, version control? That was supposed to be background noise. 
Irrelevant to &#8220;real&#8221; writing.</p><p>They couldn&#8217;t map their workflows because they&#8217;d been taught not to see the tools.</p><p>And this matters now because you can&#8217;t integrate AI into workflows you can&#8217;t see.</p><h2>From Process to Workflow</h2><p>In 2020, two researchers named <a href="https://doi.org/10.3998/mpub.11657120">Tim Lockridge and Derek Van Ittersum </a>published a framework specifically about writing workflows.</p><p>They defined a workflow as &#8220;the tools and the process used for a writing task.&#8221;</p><p>Not just the cognitive process&#8212;brainstorm, draft, revise.</p><p>But the tool sequences. How work actually flows through systems.</p><p>They argued that you can&#8217;t understand contemporary writing without examining these tool sequences rather than treating technology as transparent.</p><p>Then in 2024, a researcher named <a href="https://doi.org/10.1016/j.compcom.2024.102826">Alan Knowles </a>extended this to AI specifically.</p><p>He merged workflow thinking with something called Human-in-the-Loop principles.</p><p>The question isn&#8217;t &#8220;what can AI do?&#8221;</p><p>The question is &#8220;where does AI fit within existing work practices?&#8221;</p><p>Does it reduce friction? Does it open up new relationships with tools and tasks?</p><p>You can only answer that if you can see the workflow first.</p><p>Here&#8217;s where it gets interesting for content professionals.</p><p>In 2025, three researchers&#8212;<a href="https://doi.org/10.1177/00472816251332208">Getto, Kelley, and Vance</a>&#8212;applied this specifically to technical communication.</p><p>They pointed out something crucial: Technical communicators don&#8217;t operate under the transparent technology myth.</p><p>They never could.</p><p>Because technical communicators routinely attend to how tools shape content.</p><p>Style guide enforcement software. Content management systems. Structured authoring environments. XML editors. 
Publication pipelines.</p><p>Technical editors and content professionals have always had to think about where human judgment enters a production sequence and where automation can handle routine operations.</p><p>That&#8217;s the job.</p><p>You can&#8217;t manage content through production systems while pretending tools are transparent.</p><p>So when AI shows up, technical communicators are already prepared.</p><p>They already ask: Which tasks, at which stages, under what oversight conditions?</p><p>That&#8217;s exactly what Human-in-the-Loop AI collaboration requires.</p><h2>Understanding AI Workflows</h2><p>When most people try to integrate AI, they treat it like the transparent technology myth: just another neutral tool that captures thinking.</p><p>&#8220;Help me write this.&#8221;</p><p>But AI isn&#8217;t transparent. It&#8217;s deeply shaped by how you structure information, how you sequence prompts, what format you give it, what stage of the workflow it enters.</p><p>Getto, Kelley, and Vance describe the Human-in-the-Loop role as &#8220;manager of the process, validating outputs for whatever criteria they are aiming for.&#8221;</p><p>That&#8217;s familiar work if you&#8217;re a technical editor.</p><p>You already define tasks. Evaluate outputs against rhetorical criteria. Iterate based on results.</p><p>You already manage content through production systems.</p><p>AI is just applying the same analytical framework you&#8217;ve been using for structured authoring, content management, and publication workflows.</p><p>The question isn&#8217;t whether AI changes writing. 
Of course it does&#8212;tools always do.</p><p>The question is: Can you see well enough to manage that change strategically?</p><p>Here&#8217;s what this research tells us:</p><p>Most writers can&#8217;t map their own workflows because they were taught not to see tools as shaping work.</p><p>But content professionals&#8212;especially technical communicators and editors&#8212;were never trained that way.</p><p>You&#8217;ve always had to see the systems.</p><p>You understand that tools aren&#8217;t neutral. They shape how information flows, where friction happens, what&#8217;s easy and what&#8217;s hard.</p><p>You know where human judgment matters and where automation helps because you&#8217;ve been making those decisions for style guides, CMSs, and structured content for years.</p><p>That&#8217;s not a nice-to-have skill anymore. It&#8217;s the essential skill for AI integration.</p><p>Because you can&#8217;t integrate AI into workflows you can&#8217;t see.</p><p>And you can already see them.</p><h2>The Value of Operational Thinking</h2><p>The panic narrative says AI replaces writers.</p><p>But the research suggests something different: AI integration requires exactly the operational thinking that content professionals have been developing all along.</p><p>You&#8217;re not behind. You&#8217;re prepared.</p><p>You just might not have recognized workflow thinking as the strategic advantage it actually is. And now that you can see the workflow, the next step is understanding what AI can actually do within it.</p><p>I&#8217;m Lance Cummings. 
Thanks for listening to Deep Reading.</p><p>If this changed how you think about AI and writing, share it with someone who needs to hear it.</p><p>And if you want the practical framework for mapping your workflows and integrating AI systematically, check out my newsletter Cyborgs Writing at <a href="http://www.isophist.com">http://www.isophist.com</a>.</p><p>I&#8217;m also releasing a beta version of my <em>Writing with Machines</em> course soon for paid subscribers. For more information, click <a href="https://www.isophist.com/p/writing-with-machines">here</a>.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Cyborgs Writing is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[The Skill That Makes Writers More Valuable in the AI Age]]></title><description><![CDATA[It's probably not what you think.]]></description><link>https://www.isophist.com/p/the-skill-that-makes-writers-more</link><guid isPermaLink="false">https://www.isophist.com/p/the-skill-that-makes-writers-more</guid><dc:creator><![CDATA[Lance Cummings]]></dc:creator><pubDate>Tue, 20 Jan 2026 15:05:48 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!DJl2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97da49bd-8b6a-44b1-86ff-60bb2a7d05bb_1920x1088.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!DJl2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97da49bd-8b6a-44b1-86ff-60bb2a7d05bb_1920x1088.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!DJl2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97da49bd-8b6a-44b1-86ff-60bb2a7d05bb_1920x1088.png 424w, https://substackcdn.com/image/fetch/$s_!DJl2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97da49bd-8b6a-44b1-86ff-60bb2a7d05bb_1920x1088.png 848w, https://substackcdn.com/image/fetch/$s_!DJl2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97da49bd-8b6a-44b1-86ff-60bb2a7d05bb_1920x1088.png 1272w, https://substackcdn.com/image/fetch/$s_!DJl2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97da49bd-8b6a-44b1-86ff-60bb2a7d05bb_1920x1088.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!DJl2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97da49bd-8b6a-44b1-86ff-60bb2a7d05bb_1920x1088.png" width="1920" height="1088" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/97da49bd-8b6a-44b1-86ff-60bb2a7d05bb_1920x1088.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1088,&quot;width&quot;:1920,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3128038,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/184986611?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ccf99fa-85f7-41cb-a53a-6e8fd7873c02_1920x1088.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!DJl2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97da49bd-8b6a-44b1-86ff-60bb2a7d05bb_1920x1088.png 424w, https://substackcdn.com/image/fetch/$s_!DJl2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97da49bd-8b6a-44b1-86ff-60bb2a7d05bb_1920x1088.png 848w, https://substackcdn.com/image/fetch/$s_!DJl2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97da49bd-8b6a-44b1-86ff-60bb2a7d05bb_1920x1088.png 1272w, https://substackcdn.com/image/fetch/$s_!DJl2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97da49bd-8b6a-44b1-86ff-60bb2a7d05bb_1920x1088.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image created by <a href="https://try.gamma.app/ka5vvp4ov8sj">Gamma AI</a></figcaption></figure></div><p>Oftentimes, people are surprised that I teach writing &#8230; and AI. Isn&#8217;t that some kind of contradiction?</p><p>I often get this question: &#8220;What&#8217;s left for writers when AI can generate adequate text?&#8221;</p><p>The question itself reveals a misunderstanding about what makes writers valuable.</p><p>Last week I ran a workflow mapping exercise with twenty students from different disciplines&#8212;computer engineers, English majors, cybersecurity students, elementary education majors. 
I wanted them to map their actual writing workflows before we started integrating AI.</p><p>What happened revealed something important about the difference between describing a process and understanding a workflow.</p><p>The computer engineers struggled. They could explain their coding workflow in extraordinary detail. But their writing workflow?</p><p>&#8220;I just... do it until it&#8217;s done.&#8221;</p><p>The English majors did better initially. Brainstorming with clustering diagrams. Research with annotated bibliographies. Thesis development. Zero drafts for idea generation. Revision for argument structure.</p><div class="pullquote"><p>In a world where AI supposedly makes writers obsolete, the ability to map operational workflows might be writers&#8217; most valuable professional skill. Not just describing what you do, but understanding the entire system through which work flows.</p></div><p>But it became clear as we dug deeper that they were describing an idealized <em>process</em> or the intellectual steps they&#8217;d been taught. When I asked about the operational reality, they struggled too.</p><p>&#8220;I write in Google Docs, I guess?&#8221;</p><p>&#8220;My notes are... everywhere. Notebook, phone, scattered documents.&#8221;</p><p>&#8220;I don&#8217;t really have a system for organizing research.&#8221;</p><p>They could articulate the intellectual process. But the operational workflow remained largely invisible.</p><p>That distinction matters enormously for AI integration.</p><p>This pattern held across disciplines. Students from humanities backgrounds could articulate intellectual process, or the moves writers make.</p><p>Technical students could describe technical workflows, or the systems code moves through. 
But almost no one could clearly map their <em>writing workflow</em>, or the operational system through which their intellectual work actually happens.</p><p>In a world where AI supposedly makes writers obsolete, the ability to map operational workflows might be writers&#8217; most valuable professional skill. Not just describing what you do, but understanding the entire system through which information flows.</p><p><em>This year I&#8217;m releasing beta access to my course, Writing with Machines, which helps writers and content professionals take a workflow approach to AI. Click below for more info.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/p/writing-with-machines&quot;,&quot;text&quot;:&quot;Get Beta Access&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.isophist.com/p/writing-with-machines"><span>Get Beta Access</span></a></p><h2><strong>The Irony of Technical Education</strong></h2><p>Those computer engineers already think in workflows for their technical work. They understand systems, information flow, version control, deployment pipelines, etc. They can map how code moves through their development environment.</p><p>But they haven&#8217;t applied that systems thinking to their intellectual work.</p><p>Meanwhile, English majors have been trained to articulate intellectual process, but typically in isolated academic contexts. They haven&#8217;t learned to map workflows in operational terms like tools, formats, handoffs, and information design.</p><p>Content professionals (or most professional writers) bridge both worlds. We understand intellectual process AND operational workflow.</p><p>We know that &#8220;writing&#8221; isn&#8217;t just thinking work. It&#8217;s information moving through systems. </p><ul><li><p>Research lives in databases or note systems. 
</p></li><li><p>Drafts exist in specific tools with version histories. </p></li><li><p>Reviews happen through particular platforms. </p></li><li><p>Final outputs have required formats and distribution channels.</p></li></ul><p>We already think in workflows because our work requires it.</p><p>When I research and collaborate with organizations like Motorola or Hitachi Energy on their content operations, the people who excel in operational roles typically come from content backgrounds. Not because they&#8217;re more creative (though they might be), but because they can see how information moves through a system and articulate what happens at each stage &#8230; <strong>in operational terms, not just intellectual ones.</strong></p><p>That&#8217;s not a skill you automatically gain from learning to code or from studying literature. It&#8217;s a skill you develop from working in environments where content moves through systems and someone has to understand and improve those systems.</p><h2><strong>Why Workflow Mapping Matters Now</strong></h2><p>The ability to articulate intellectual process has always been valuable for writers. But AI integration requires workflow thinking &#8230; understanding the operational system, not just the thinking moves.</p><p>During the exercise, I asked students to identify friction points. 
Where do things slow down? Where do you get stuck?</p><p>The English majors identified <em>intellectual</em> friction.</p><ul><li><p>&#8220;I hate writing conclusions.&#8221;</p></li><li><p>&#8220;The transition from outline to draft feels like starting over.&#8221;</p></li><li><p>&#8220;Synthesizing research into coherent arguments is hard.&#8221;</p></li></ul><p>The technical students identified <em>task</em> friction.</p><ul><li><p>&#8220;Finding sources takes too long.&#8221;</p></li><li><p>&#8220;Bibliography formatting is tedious.&#8221;</p></li><li><p>&#8220;Blank page anxiety.&#8221;</p></li></ul><p>But when I pushed them to think operationally, different patterns emerged.</p><p>One English major got specific about how she can write body paragraphs fine once she has her evidence organized. But she struggles with synthesizing everything into a conclusion that doesn&#8217;t just repeat what she already said.</p><p>This is process articulation. She knows the intellectual move that&#8217;s difficult (synthesis and elevation rather than summary). That&#8217;s valuable.</p><p>But then I asked: &#8220;Where are your body paragraphs when you&#8217;re trying to write the conclusion? What tool? What format? Can you see all your arguments at once, or are you scrolling?&#8221;</p><p>Long pause.</p><p>She then talked about how she writes each section separately, and by the time she gets to the conclusion, she&#8217;s forgotten what&#8217;s in the earlier sections &#8230; so she&#8217;s scrolling back and forth a lot.</p><p>Now we&#8217;re talking about workflow. The friction isn&#8217;t just intellectual. It&#8217;s operational. Information is organized in a way that makes synthesis difficult. The tool setup creates the problem as much as the intellectual challenge.</p><p>Compare that to &#8220;finding sources takes too long.&#8221; Also legitimate, but what&#8217;s the actual workflow friction? Is research scattered across multiple databases with different interfaces? 
Are you re-searching for things you&#8217;ve already found because notes aren&#8217;t organized? Is the problem identifying keywords, or is it that each source requires switching contexts and losing your train of thought?</p><p>Without mapping the actual workflow, AI integration becomes throwing technology at a vague sense of difficulty.</p><h2><strong>What AI Actually Needs From Writers</strong></h2><p>AI does one thing extraordinarily well: pattern-matching. It recognizes structures, suggests approaches, provides frameworks based on millions of examples.</p><p>But it can&#8217;t tell you which approach fits your specific situation unless you can map your actual workflow &#8230; not just the ideal process, but the operational reality.</p><p>The content professional who understands both the intellectual challenge (synthesis in conclusions) AND the workflow friction (information scattered across tools, requiring constant context-switching) can integrate AI strategically.</p><p>Maybe AI helps by consolidating key points from different sections. Maybe it suggests synthesis patterns. Maybe it&#8217;s not an AI solution at all. Maybe the workflow needs redesigning so information is visible when you need it.</p><p>The one who just says &#8220;help me write my conclusion&#8221; gets generic output because they haven&#8217;t mapped where the real friction lives.</p><p>This is why workflow mapping makes writers MORE valuable in AI environments, not less. Because effective AI integration requires:</p><ul><li><p>Understanding how work actually flows through tools and systems</p></li><li><p>Identifying where in operational workflows AI can help (and where it can&#8217;t)</p></li><li><p>Distinguishing intellectual challenges from operational friction</p></li><li><p>Redesigning workflows when needed, not just automating bad processes</p></li><li><p>Making both intellectual work and operational systems visible for improvement</p></li></ul><p>These are content operations skills. 
The kind of thinking you develop from working in environments where content moves through systems and someone has to understand, document, and improve those systems.</p><div class="pullquote"><p>The organizations that integrate AI effectively have people who can bridge technical capability with operational thinking. They need people who can build the systems AND people who can articulate the workflows those systems support.</p></div><p>The engineers in my class will build better chatbots than the English majors in a functional sense. No question. But content professionals already understand how to map the operational reality of knowledge work, not just describe the idealized process. Both are needed.</p><h2><strong>The Mixed Group Revelation</strong></h2><p>By the end of class, the groups that produced the richest insights weren&#8217;t the homogeneous ones. They were the mixed groups where an English major, an engineer, and an education major talked through their processes.</p><p>The English major learned that version control isn&#8217;t just for code. It&#8217;s a strategy for managing drafts. The engineer discovered that concept mapping could organize technical documentation. The education major realized her strategies for making content accessible to children were universal design principles.</p><p>They taught each other to see their own processes differently.</p><p>This mirrors what I see in successful content operations teams. The organizations that integrate AI effectively have people who can bridge technical capability with operational thinking. They need people who can build the systems AND people who can articulate the workflows those systems support.</p><p>Content professionals bring that operational thinking. Writers understand modularity: writing happens in chunks (research phase, outline phase, drafting phase), and AI works best with chunked information and targeted requests. 
You know the difference between extending capability at a specific friction point and replacing judgment in complex synthesis.</p><p>You already think in workflows. You just might not have recognized it as a marketable skill.</p><h2><strong>What This Means for Your Work</strong></h2><p>The panic narrative says AI replaces writers because it can generate text. But text generation was never the whole job.</p><p>The actual job includes:</p><ul><li><p>Diagnosing what needs to be communicated and why</p></li><li><p>Mapping how information flows through operational systems</p></li><li><p>Identifying where automation helps versus where human judgment is essential</p></li><li><p>Designing workflows that support quality at scale</p></li><li><p>Maintaining meaning and accuracy across complex operations</p></li></ul><p>These are content operations skills. Operational thinking applied to knowledge work. Workflow mapping, not just process description.</p><p>When organizations implement AI successfully, it&#8217;s because someone can map the operational workflows clearly enough to know where AI fits. When implementations fail, it&#8217;s usually because they&#8217;re throwing AI at undefined processes or, worse, automating workflows that were already broken.</p><p>The computer engineers in my class will learn to map their writing workflows this semester. But they&#8217;re starting from further back than content professionals and writers who already understand that content moves through systems. </p><p>We need to recognize workflow mapping as the strategic skill that makes you valuable in AI environments.</p><h2><strong>What&#8217;s Next</strong></h2><p>I&#8217;m teaching this class all semester as a laboratory for systematic AI integration. 
The students are building an AI tool for disaster communication, which is high-stakes work where accuracy matters and hallucinations aren&#8217;t acceptable.</p><p>I&#8217;ll be sharing my thoughts as we move through the semester.</p><p>Next week I want to talk about the difference between AI pattern-matching and human reasoning. This is key to understanding AI workflows.</p><p>But you can&#8217;t integrate AI effectively into workflows you haven&#8217;t mapped. </p><p><strong>If you want to move beyond casual AI use to systematic integration, I&#8217;m opening beta access to my Writing with Machines course for paid subscribers.</strong> It&#8217;s built around this exact challenge: mapping your actual workflows and integrating AI strategically at specific friction points. </p><p>The beta runs through this semester, giving you the complete framework while I refine it based on real-world feedback.</p><p><strong><a href="https://www.isophist.com/p/writing-with-machines">Learn more about the beta course &#8594; </a></strong><em>(paid subscribers only)</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.isophist.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p><em>P.S. Want to test this for yourself? Try mapping your own workflow for a typical content project. If you can articulate not just what you do but why you do it at each stage, you&#8217;re already ahead of most people trying to integrate AI. 
If you find it harder than expected&#8212;you&#8217;re in good company, and it&#8217;s worth figuring out.</em></p>]]></content:encoded></item><item><title><![CDATA[Writing With Machines]]></title><description><![CDATA[Systematic AI Integration for Content Professionals]]></description><link>https://www.isophist.com/p/writing-with-machines</link><guid isPermaLink="false">https://www.isophist.com/p/writing-with-machines</guid><dc:creator><![CDATA[Lance Cummings]]></dc:creator><pubDate>Sat, 17 Jan 2026 19:52:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!N9SQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F74c9a3f0-008d-4d77-b159-178045fbeed7_563x317.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!N9SQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F74c9a3f0-008d-4d77-b159-178045fbeed7_563x317.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!N9SQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F74c9a3f0-008d-4d77-b159-178045fbeed7_563x317.png 424w, https://substackcdn.com/image/fetch/$s_!N9SQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F74c9a3f0-008d-4d77-b159-178045fbeed7_563x317.png 848w, https://substackcdn.com/image/fetch/$s_!N9SQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F74c9a3f0-008d-4d77-b159-178045fbeed7_563x317.png 1272w, 
https://substackcdn.com/image/fetch/$s_!N9SQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F74c9a3f0-008d-4d77-b159-178045fbeed7_563x317.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!N9SQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F74c9a3f0-008d-4d77-b159-178045fbeed7_563x317.png" width="661" height="372.17939609236237" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/74c9a3f0-008d-4d77-b159-178045fbeed7_563x317.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:317,&quot;width&quot;:563,&quot;resizeWidth&quot;:661,&quot;bytes&quot;:87169,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/184894552?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F74c9a3f0-008d-4d77-b159-178045fbeed7_563x317.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!N9SQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F74c9a3f0-008d-4d77-b159-178045fbeed7_563x317.png 424w, https://substackcdn.com/image/fetch/$s_!N9SQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F74c9a3f0-008d-4d77-b159-178045fbeed7_563x317.png 848w, 
https://substackcdn.com/image/fetch/$s_!N9SQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F74c9a3f0-008d-4d77-b159-178045fbeed7_563x317.png 1272w, https://substackcdn.com/image/fetch/$s_!N9SQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F74c9a3f0-008d-4d77-b159-178045fbeed7_563x317.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Paid subscribers now get first access to something I&#8217;ve been developing for the past year: a complete course on systematic 
AI integration for content professionals and technical writers.</p><p>This isn&#8217;t just about casual ChatGPT tips or prompt hacks. It&#8217;s about building the operational framework you need to integrate AI strategically into your actual work &#8230; the kind that makes you more effective without replacing your judgment or expertise.</p><p><strong>I&#8217;m making the beta version available to you, paid subscribers, before the polished video course launches publicly. Here&#8217;s what that means and why it might interest you.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.isophist.com/subscribe?"><span>Subscribe now</span></a></p><div class="pullquote"><p>AI works best when you can articulate your process clearly enough to identify exactly where it helps.</p></div><h2>What This Course Actually Is</h2><p><em>Writing with Machines</em> teaches content professionals how to move from casual AI use to systematic integration. It&#8217;s built around a core insight: AI works best when you can articulate your process clearly enough to identify exactly where it helps.</p><p>The course covers ten chapters:</p><ol><li><p><strong>Understanding Your Workflow </strong>- Making your invisible intellectual process visible and identifying friction points</p></li><li><p><strong>What AI Actually Does</strong> - Pattern-matching vs. 
reasoning, and why the distinction matters</p></li><li><p><strong>The Five Information Types</strong> - Structuring content so AI can process it effectively</p></li><li><p><strong>Grounding AI in Knowledge </strong>- Building reliable knowledge bases instead of hoping for accurate responses</p></li><li><p><strong>Prompt Design</strong> - Moving beyond trial-and-error to systematic prompt design</p></li><li><p><strong>Pairing Prompts with Process</strong> - Matching specific prompts to specific workflow stages</p></li><li><p><strong>Style and Temperature</strong> - Controlling AI output characteristics deliberately</p></li><li><p><strong>The Structured Principles</strong> - How content operations thinking improves AI integration</p></li><li><p><strong>Building Your Prompt Taxonomy </strong>- Organizing prompts as operational assets</p></li><li><p> <strong>Workflow Redesign</strong> - The capstone where you transform an actual process using everything you&#8217;ve learned</p></li></ol><p>This isn&#8217;t just theory. Each chapter includes practical exercises, templates you can adapt, and frameworks you&#8217;ll use immediately. Chapter 10 has you redesign a real workflow from your work. That&#8217;s your deliverable.</p><h2>What Beta Access Means</h2><p>The beta course is complete. All ten chapters are written and functional. But it&#8217;s developmental. It&#8217;s text-based with exercises and templates rather than polished video lessons. 
You&#8217;re getting the systematic framework and practical tools, not production value.</p><p>Here&#8217;s what you get as a beta participant:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3fUE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9827c4bd-e4a0-48b6-a4e9-143c156c6fa9_1354x1018.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3fUE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9827c4bd-e4a0-48b6-a4e9-143c156c6fa9_1354x1018.png 424w, https://substackcdn.com/image/fetch/$s_!3fUE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9827c4bd-e4a0-48b6-a4e9-143c156c6fa9_1354x1018.png 848w, https://substackcdn.com/image/fetch/$s_!3fUE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9827c4bd-e4a0-48b6-a4e9-143c156c6fa9_1354x1018.png 1272w, https://substackcdn.com/image/fetch/$s_!3fUE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9827c4bd-e4a0-48b6-a4e9-143c156c6fa9_1354x1018.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!3fUE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9827c4bd-e4a0-48b6-a4e9-143c156c6fa9_1354x1018.png" width="1354" height="1018" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9827c4bd-e4a0-48b6-a4e9-143c156c6fa9_1354x1018.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1018,&quot;width&quot;:1354,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:226915,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/184894552?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9827c4bd-e4a0-48b6-a4e9-143c156c6fa9_1354x1018.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!3fUE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9827c4bd-e4a0-48b6-a4e9-143c156c6fa9_1354x1018.png 424w, https://substackcdn.com/image/fetch/$s_!3fUE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9827c4bd-e4a0-48b6-a4e9-143c156c6fa9_1354x1018.png 848w, https://substackcdn.com/image/fetch/$s_!3fUE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9827c4bd-e4a0-48b6-a4e9-143c156c6fa9_1354x1018.png 1272w, https://substackcdn.com/image/fetch/$s_!3fUE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9827c4bd-e4a0-48b6-a4e9-143c156c6fa9_1354x1018.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The exchange is simple: You get early access to systematic AI integration frameworks. I get insights that make the final course better. Both of us benefit.</p><h2>Why Beta Access Now</h2><p>I could wait until everything is polished and perfect before releasing anything. But that&#8217;s not how I work, and frankly, it&#8217;s not how good courses get built.</p><p>The best educational materials come from real interaction with real learners. </p><p>You&#8217;re professionals doing actual content work. You have real workflows, real constraints, real stakeholders. Your friction points are different. Your applications will be different. Your feedback will be invaluable.</p><p>Also, I&#8217;m learning things from my research and teaching this semester that are making the course better week by week. 
The disciplinary differences I wrote about in this week&#8217;s newsletter? That came from watching students map their workflows. Those insights are already improving how I teach the concepts.</p><p>If you wait for the polished version, you pay more and you miss the opportunity to shape it. If you join the beta, you get the systematic framework now (which you can use immediately) and you influence what the premium version becomes.</p><h3>Who This Course Is For</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3ln5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d69c7d7-107e-4eaf-b499-0210cec60a4b_1434x1066.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3ln5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d69c7d7-107e-4eaf-b499-0210cec60a4b_1434x1066.png 424w, https://substackcdn.com/image/fetch/$s_!3ln5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d69c7d7-107e-4eaf-b499-0210cec60a4b_1434x1066.png 848w, https://substackcdn.com/image/fetch/$s_!3ln5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d69c7d7-107e-4eaf-b499-0210cec60a4b_1434x1066.png 1272w, https://substackcdn.com/image/fetch/$s_!3ln5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d69c7d7-107e-4eaf-b499-0210cec60a4b_1434x1066.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!3ln5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d69c7d7-107e-4eaf-b499-0210cec60a4b_1434x1066.png" width="1434" height="1066" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2d69c7d7-107e-4eaf-b499-0210cec60a4b_1434x1066.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1066,&quot;width&quot;:1434,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:226672,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/184894552?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d69c7d7-107e-4eaf-b499-0210cec60a4b_1434x1066.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!3ln5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d69c7d7-107e-4eaf-b499-0210cec60a4b_1434x1066.png 424w, https://substackcdn.com/image/fetch/$s_!3ln5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d69c7d7-107e-4eaf-b499-0210cec60a4b_1434x1066.png 848w, https://substackcdn.com/image/fetch/$s_!3ln5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d69c7d7-107e-4eaf-b499-0210cec60a4b_1434x1066.png 1272w, https://substackcdn.com/image/fetch/$s_!3ln5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d69c7d7-107e-4eaf-b499-0210cec60a4b_1434x1066.png 1456w" 
sizes="100vw" loading="lazy"></picture></div></a></figure></div><h3>What About Cost?</h3><p>If you&#8217;re already paying for Cyborgs Writing, this beta access is included in your subscription. No additional payment required.</p><p>The premium video course, when it launches, will cost significantly more. Early beta participants will get preferred pricing on it, but right now, your subscription covers this beta access completely.</p><p><strong>If cost is a barrier:</strong> I&#8217;m willing to provide access to people who can&#8217;t afford the subscription, or who prefer not to pay, but are genuinely committed to the process and feedback. 
If that&#8217;s you, email me at Lance.cummings@hey.com and we&#8217;ll work something out. The goal is to build an excellent course, not to restrict access unnecessarily.</p><p><strong>Access the beta form below.</strong></p>
      <p>
          <a href="https://www.isophist.com/p/writing-with-machines">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Content Modeling My 2025 and Beyond]]></title><description><![CDATA[What building a knowledge graph is teaching me about my own work]]></description><link>https://www.isophist.com/p/content-modeling-my-2025-and-beyond</link><guid isPermaLink="false">https://www.isophist.com/p/content-modeling-my-2025-and-beyond</guid><dc:creator><![CDATA[Lance Cummings]]></dc:creator><pubDate>Tue, 06 Jan 2026 15:30:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!2wfL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F250f1425-3ffd-4a23-b280-833d52c0ebfa_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2wfL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F250f1425-3ffd-4a23-b280-833d52c0ebfa_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2wfL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F250f1425-3ffd-4a23-b280-833d52c0ebfa_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!2wfL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F250f1425-3ffd-4a23-b280-833d52c0ebfa_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!2wfL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F250f1425-3ffd-4a23-b280-833d52c0ebfa_2752x1536.png 1272w, 
https://substackcdn.com/image/fetch/$s_!2wfL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F250f1425-3ffd-4a23-b280-833d52c0ebfa_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2wfL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F250f1425-3ffd-4a23-b280-833d52c0ebfa_2752x1536.png" width="2752" height="1536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/250f1425-3ffd-4a23-b280-833d52c0ebfa_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:2752,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6909164,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/183673837?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c7b5160-a84b-4545-88d4-f596184719f9_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!2wfL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F250f1425-3ffd-4a23-b280-833d52c0ebfa_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!2wfL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F250f1425-3ffd-4a23-b280-833d52c0ebfa_2752x1536.png 848w, 
https://substackcdn.com/image/fetch/$s_!2wfL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F250f1425-3ffd-4a23-b280-833d52c0ebfa_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!2wfL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F250f1425-3ffd-4a23-b280-833d52c0ebfa_2752x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image generated by <a href="https://try.gamma.app/ka5vvp4ov8sj">Gamma Ai</a>.</figcaption></figure></div><p>I 
spent the last week of December doing something I hadn&#8217;t planned: modeling my own Substack archive as a knowledge graph.</p><p>I&#8217;ve been writing about AI-ready content, structured knowledge, and retrieval systems for two years now, and I&#8217;ve been wanting to see if I could structure my posts in a way that would make them more useful &#8230; to me, to readers, and eventually to AI systems that might help people navigate the archive.</p><p>What I didn&#8217;t expect was how much the modeling itself became a form of reflection.</p><p>So as my first post of 2026, I thought I&#8217;d give you some of my reflections on where I&#8217;m going with this Substack and how this knowledge graph exercise is helping me in the process.</p><h2>The Year of Narrowing</h2><p>First, some context. 2025 was a year of cutting away.</p><p>When I started writing about AI in education and technical communication, fewer people were doing it. That&#8217;s no longer true. Everyone writes about AI and plagiarism now. Everyone has opinions about classroom policy. The general conversation doesn&#8217;t need another voice.</p><p>And, well, if I&#8217;m honest, my ADHD brain kind of finds those discussions a bit boring these days.</p><p>So I began focusing on other things in information design by narrowing in on questions where my background in rhetoric and professional writing gives me something distinctive to say: </p><ul><li><p>How does retrieval-augmented generation work from a compositionist&#8217;s perspective? </p></li><li><p>What can classical rhetorical frameworks tell us about prompt design? </p></li><li><p>How do we test content systems, not just prompts?</p></li></ul><p>Posts like <a href="https://www.isophist.com/p/is-structured-prompting-dead">&#8220;Is Structured Prompting Dead?&#8221;</a> and <a href="https://www.isophist.com/p/testing-as-rhetorical-proof">&#8220;Testing as Rhetorical Proof&#8221;</a> came from this narrowing. 
</p><p>So did the Deep Reading podcast episodes on <a href="https://www.isophist.com/p/what-the-ancient-art-of-organized">topoi and AI hallucination</a>. </p><p>I wrote less often but went deeper when I did.</p><p>This focus also shaped my teaching and research. I&#8217;ve got a couple academic articles in the works, and I&#8217;m starting this semester with an interdisciplinary grant project, where students and faculty from English, Sociology, and Computer Engineering are building a knowledge graph for a crisis food communication tool. </p><p>The structured content work I&#8217;ve been exploring publicly is now something I&#8217;m building with students, in real time, with actual users (and trying to bring into scholarly conversations).</p><p>My goal as an online writer has always been to bridge the space between the workplace and academia. In the world of AI, this is more important than ever.</p><h2>What the Model Revealed</h2><p>Back to the knowledge graph experiment.</p><p>I used Claude along with <a href="https://neo4j.com/blog/developer/neo4j-data-modeling-mcp-server/">Neo4j&#8217;s data modeling MCP server</a>, which is essentially a tool that helps you design graph structures. </p><p>For those unfamiliar: a knowledge graph represents information as nodes (things) connected by relationships (how those things relate). Instead of storing content as documents, you store it as a web of connected entities.</p><p>I started by asking: What are the meaningful units in my archive? Posts, obviously. But what else?</p><p>The first interesting decision was distinguishing <strong>Concepts</strong> from <strong>Rhetorical Frameworks</strong>. Concepts are the ideas I write about, for example structured prompting, RAG, AI literacy, vibe coding. Rhetorical Frameworks are the lenses I use to interpret those ideas&#8212;kairos, topoi, stasis theory, rhetorical proof.</p><p>I could have lumped these together. They&#8217;re all &#8220;topics&#8221; in some sense. 
But separating them forced me to articulate how classical rhetoric is the interpretive layer through which I read everything else. </p><ul><li><p>Kairos informs how I think about vibe coding. </p></li><li><p>Topoi shapes my understanding of RAG. </p></li><li><p>Memory connects to knowledge graphs.</p></li></ul><p>Mapping these relationships made explicit what had been implicit across dozens of posts.</p><p>The second decision was adding <strong>Claims</strong>. Most writers think of posts as being &#8220;about&#8221; topics. But when I added a node type for Claims&#8212;with properties like <em>statement</em>, <em>claim type</em>, and <em>confidence level</em>&#8212;each post became a collection of assertions at various stages of development. Some claims I state definitively. Others I mark as provisional. A few are speculative, ideas I&#8217;m testing rather than defending.</p><p>Here&#8217;s the knowledge layer of the model:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5iPI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ff79ee4-76bb-4509-a2c2-51e7d64a08be_1312x1248.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5iPI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ff79ee4-76bb-4509-a2c2-51e7d64a08be_1312x1248.png 424w, https://substackcdn.com/image/fetch/$s_!5iPI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ff79ee4-76bb-4509-a2c2-51e7d64a08be_1312x1248.png 848w, 
https://substackcdn.com/image/fetch/$s_!5iPI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ff79ee4-76bb-4509-a2c2-51e7d64a08be_1312x1248.png 1272w, https://substackcdn.com/image/fetch/$s_!5iPI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ff79ee4-76bb-4509-a2c2-51e7d64a08be_1312x1248.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5iPI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ff79ee4-76bb-4509-a2c2-51e7d64a08be_1312x1248.png" width="1312" height="1248" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9ff79ee4-76bb-4509-a2c2-51e7d64a08be_1312x1248.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1248,&quot;width&quot;:1312,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:122981,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/183673837?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb077cde-0e7e-46ea-8940-3e9e6d49bb7b_1312x1248.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5iPI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ff79ee4-76bb-4509-a2c2-51e7d64a08be_1312x1248.png 424w, 
https://substackcdn.com/image/fetch/$s_!5iPI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ff79ee4-76bb-4509-a2c2-51e7d64a08be_1312x1248.png 848w, https://substackcdn.com/image/fetch/$s_!5iPI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ff79ee4-76bb-4509-a2c2-51e7d64a08be_1312x1248.png 1272w, https://substackcdn.com/image/fetch/$s_!5iPI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ff79ee4-76bb-4509-a2c2-51e7d64a08be_1312x1248.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Claude-generated graph in Mermaid.</figcaption></figure></div><p>This helps me understand that my archive isn&#8217;t just a collection of articles about topics. It&#8217;s a slowly developing argument, with claims that build on earlier claims, supported (or not yet supported) by evidence, attached to concepts that relate to each other in particular ways.</p><p>The EXTENDS relationship between claims might be the most useful part. My December post on rhetorical proof extends the prompt testing work from November. </p><p>Seeing that connection mapped changes how I understand what I&#8217;ve been doing. Not as isolated posts, but as intellectual development over time.</p><div class="pullquote"><p>When we create knowledge graphs for retrieval-augmented generation (and when we decide how to chunk content, what entities to extract, how to represent relationships) we&#8217;re doing philosophy whether we recognize it or not.</p></div><h2>The Ontology Encodes the Epistemology</h2><p>The usefulness of this activity goes beyond my own navel-gazing.</p><p>Knowledge graphs will become key to understanding what we know (and what machines know) as AI becomes more ubiquitous.</p><p>Every knowledge graph encodes assumptions. </p><ul><li><p>What becomes a node? </p></li><li><p>What becomes a relationship? </p></li><li><p>What properties matter? </p></li></ul><p>These aren&#8217;t neutral technical decisions. They&#8217;re interpretive choices about what counts, what connects, what gets left out.</p><p>The model I built privileges argumentation&#8212;claims require evidence, ideas have lineages. It privileges rhetorical tradition as a distinct layer of interpretation. </p><p>Someone else modeling the same archive might structure it entirely differently. A computer scientist might emphasize technical concepts and tool relationships. 
A historian might organize by period and influence.</p><p>The structure reflects a worldview.</p><p>This has implications for the AI systems we&#8217;re all building. When we create knowledge graphs for retrieval-augmented generation (and when we decide how to chunk content, what entities to extract, how to represent relationships) we&#8217;re doing philosophy whether we recognize it or not. </p><p>The ontology (or how the graph is built) shapes what the system can know and how it can know it.</p><p>But as an academic, I also understand that the impulse to structure knowledge exists alongside the recognition that any structure is partial. A knowledge graph doesn&#8217;t capture knowledge. It creates a frame for retrieval. What lies outside the frame matters too.</p><p>There&#8217;s something almost mystical about this. The more carefully you map what you know, the more visible the boundaries of your knowing become. Structure reveals mystery rather than eliminating it. </p><p>The graph doesn&#8217;t contain the territory&#8212;it just makes certain paths through the territory easier to find.</p><p>This is where our own cultural and religious backgrounds, many of which are invisible, become key to understanding how we set up AI systems.</p><p>I don&#8217;t have this fully worked out. It&#8217;s one of the threads I want to pull on this year.</p><ul><li><p>How do philosophical and contemplative traditions inform how we design these systems? </p></li><li><p>What would it mean to build a knowledge graph that acknowledges its own limits?</p></li></ul><p>I&#8217;ve been wanting to explore this for a while now. Focusing my work even tighter around these topics and arguments gives me the opportunity to go deeper in 2026.</p><h2>What&#8217;s Ahead</h2><p>For 2026, a few things:</p><p><strong>I&#8217;ll continue the work on structured content, knowledge graphs, and testing systems</strong>&#8212;but now with the grant project providing a concrete laboratory. 
Students will be building something real. I&#8217;ll be writing about what we learn.</p><p><strong>The Deep Reading podcast will keep going.</strong> Research that connects AI systems to rhetoric, history, and context. Short episodes, but substantive.</p><p><strong>And I&#8217;ll be developing my course, Writing with Machines, </strong>which takes the prompt operations material I&#8217;ve been sharing and structures it into a learning path that helps writers and teams integrate AI in ways that enhance expertise without taking away agency.</p><div><hr></div><p><em>Paid subscribers get access to the beta version of this course. I&#8217;ll be testing new material with them before it becomes a more polished offering through Firehead Digital Communications. If you want to dig into the structured prompting work and help me think through what&#8217;s useful, that&#8217;s the way in. More to come on this soon.</em></p><p><em>The university doesn&#8217;t provide resources for this kind of public scholarship. No course releases, no dedicated funding. Paid subscriptions help me continue the work&#8212;and I&#8217;m genuinely grateful for every one of them.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.isophist.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p>One last thought.</p><p>Building this knowledge graph was supposed to be an organizational project. It turned into something more like an examination of what I actually believe about my own work. The modeling surfaced assumptions I hadn&#8217;t articulated, connections I hadn&#8217;t named, and limits I hadn&#8217;t acknowledged.</p><p>Maybe that&#8217;s what structured content does at its best. 
Not capturing knowledge, but creating conditions for reflection&#8212;for ourselves, and eventually for the systems we build alongside us.</p><p>Here&#8217;s to a year of structuring, and of honoring what escapes the structure.</p><h2><strong>Metadata as Practice</strong></h2><p>One thing I&#8217;m committing to this year: tagging my own posts with the structure I&#8217;ve been writing about. Not just talking about AI-ready content&#8212;making it.</p><p>Here&#8217;s the metadata for this post:</p><ul><li><p><strong>Concepts:</strong> Knowledge Graphs, AI-Ready Content</p></li><li><p><strong>Framework:</strong> Memory, ontology</p></li><li><p><strong>Tools:</strong> Claude, Neo4j MCP </p></li><li><p><strong>Builds on:</strong> &#8220;Is Structured Prompting Dead?&#8221;, &#8220;Testing as Rhetorical Proof&#8221;</p></li></ul><p><strong>Claims I&#8217;m making:</strong></p><ol><li><p>The ontology encodes the epistemology <em>(definitive)</em></p></li><li><p>Building a knowledge graph is a form of reflection <em>(provisional)</em></p></li><li><p>Structure reveals mystery rather than eliminating it <em>(speculative)</em></p></li></ol><p>The confidence levels matter. </p><ul><li><p>I&#8217;m certain about #1. </p></li><li><p>I believe #2 but want more evidence. </p></li><li><p>#3 is something I&#8217;m testing&#8212;it might not survive contact with further thinking.</p></li></ul><p>Over time, these tags will let me (and eventually you) trace how ideas develop across posts. 
That&#8217;s the hope, anyway.</p><p>What do you think?</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/p/content-modeling-my-2025-and-beyond/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.isophist.com/p/content-modeling-my-2025-and-beyond/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item><item><title><![CDATA[Testing as Rhetorical Proof]]></title><description><![CDATA[How the library of Alexandria might judge good prompts]]></description><link>https://www.isophist.com/p/testing-as-rhetorical-proof</link><guid isPermaLink="false">https://www.isophist.com/p/testing-as-rhetorical-proof</guid><dc:creator><![CDATA[Lance Cummings]]></dc:creator><pubDate>Mon, 22 Dec 2025 15:15:18 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9O1-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38d208c1-cc13-47f8-b284-14827e7ad6c3_1748x1240.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9O1-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38d208c1-cc13-47f8-b284-14827e7ad6c3_1748x1240.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9O1-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38d208c1-cc13-47f8-b284-14827e7ad6c3_1748x1240.png 424w, 
https://substackcdn.com/image/fetch/$s_!9O1-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38d208c1-cc13-47f8-b284-14827e7ad6c3_1748x1240.png 848w, https://substackcdn.com/image/fetch/$s_!9O1-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38d208c1-cc13-47f8-b284-14827e7ad6c3_1748x1240.png 1272w, https://substackcdn.com/image/fetch/$s_!9O1-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38d208c1-cc13-47f8-b284-14827e7ad6c3_1748x1240.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9O1-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38d208c1-cc13-47f8-b284-14827e7ad6c3_1748x1240.png" width="1456" height="1033" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/38d208c1-cc13-47f8-b284-14827e7ad6c3_1748x1240.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1033,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:609260,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/182110201?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38d208c1-cc13-47f8-b284-14827e7ad6c3_1748x1240.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!9O1-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38d208c1-cc13-47f8-b284-14827e7ad6c3_1748x1240.png 424w, https://substackcdn.com/image/fetch/$s_!9O1-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38d208c1-cc13-47f8-b284-14827e7ad6c3_1748x1240.png 848w, https://substackcdn.com/image/fetch/$s_!9O1-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38d208c1-cc13-47f8-b284-14827e7ad6c3_1748x1240.png 1272w, https://substackcdn.com/image/fetch/$s_!9O1-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38d208c1-cc13-47f8-b284-14827e7ad6c3_1748x1240.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Last semester, my students and I built a writing feedback chatbot for our technical communication course. In testing, it worked beautifully. Clear, specific feedback that maintained professional warmth. We deployed it.</p><p>Within two weeks, students started reporting inconsistent experiences. The same submission structure that earned detailed feedback on Monday produced superficial responses on Wednesday. </p><p>One student showed me screenshots. The chatbot had praised her conclusion as &#8220;effectively synthesized&#8221; in the morning, then flagged the identical paragraph as &#8220;needing stronger connections&#8221; that afternoon. Same prompt. Same model version. Same student text.</p><p>This isn&#8217;t a bug. It&#8217;s inherent to how language models work. And it&#8217;s why prompt testing requires something more rigorous than &#8220;try it and see if it looks good.&#8221;</p><h3>Evaluation as Craft</h3><p>The ancient Greeks had a term for what we need: <em>kritik&#275; techn&#275;</em> &#8230; Or the art of judgment. The word <em>kritik&#275;</em> comes from <em>krin&#333;</em> (to separate, to decide), and <em>techn&#275;</em> means a teachable craft. Together, they describe a disciplined practice for evaluating the worth, correctness, and fitness of language.</p><p>The grammarians of the library of Alexandria developed <em>kritik&#275;</em> into a systematic discipline that created repeatable procedures for evaluating texts. 
Working with multiple manuscript copies of Homer, they faced a problem familiar to anyone testing AI outputs: variant versions of the same content, with no obvious way to determine which was best.</p><p>Their solution was to operationalize judgment. They established criteria, collected evidence across variants, applied standards consistently, and recorded their decisions so others could follow the reasoning. Judgment became a craft that could be taught, reproduced, and improved.</p><p>This is exactly what prompt testing requires. When we evaluate AI outputs, we&#8217;re not asking &#8220;does this sound good?&#8221; We&#8217;re asking whether the outputs meet specific criteria reliably enough for a particular purpose. </p><p>That question demands a method&#8212;articulated standards, systematic procedures, transparent documentation.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pMWi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9326e95-a178-4b22-a7c4-a361efdd9cec_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pMWi!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9326e95-a178-4b22-a7c4-a361efdd9cec_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!pMWi!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9326e95-a178-4b22-a7c4-a361efdd9cec_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!pMWi!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9326e95-a178-4b22-a7c4-a361efdd9cec_2752x1536.png 1272w, 
https://substackcdn.com/image/fetch/$s_!pMWi!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9326e95-a178-4b22-a7c4-a361efdd9cec_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pMWi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9326e95-a178-4b22-a7c4-a361efdd9cec_2752x1536.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c9326e95-a178-4b22-a7c4-a361efdd9cec_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5519197,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/182110201?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9326e95-a178-4b22-a7c4-a361efdd9cec_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pMWi!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9326e95-a178-4b22-a7c4-a361efdd9cec_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!pMWi!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9326e95-a178-4b22-a7c4-a361efdd9cec_2752x1536.png 848w, 
https://substackcdn.com/image/fetch/$s_!pMWi!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9326e95-a178-4b22-a7c4-a361efdd9cec_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!pMWi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9326e95-a178-4b22-a7c4-a361efdd9cec_2752x1536.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image generated by <a 
href="https://try.gamma.app/ka5vvp4ov8sj">Gamma.ai</a></figcaption></figure></div><h3>Why Structured Prompting Demands Systematic Testing</h3><p>When people claim structured prompting is dead, they&#8217;re usually working with single interactions or more dialogic collaborations. Ask a question, get an answer, move on (or continue working on that one instance). In that context, casual prompting often works fine.</p><p>But the moment you&#8217;re building something that needs to perform reliably across users, sessions, and contexts, you&#8217;re no longer in single-interaction territory. </p><p>This could be:</p><ul><li><p>a classroom assistant</p></li><li><p>a documentation helper, or</p></li><li><p>a content generation workflow.</p></li></ul><p>You&#8217;re building a system. And systems require consistency that casual prompting can&#8217;t guarantee.</p><p>My research on prompt format bears this out. <a href="https://www.isophist.com/p/is-structured-prompting-dead">When I tested the same complex task across four different structures</a>, the outputs varied dramatically. Not just in efficiency (processing time ranged from 64 to 120 seconds) but in character. The unstructured prompt produced exploratory, wandering responses. JSON triggered mechanical, compliance-document prose. Natural structure with clear sections generated focused, efficient communication.</p><p>Each format created different statistical conditions for the model&#8217;s token prediction. JSON tokens co-occur with technical documentation patterns in training data, so generating JSON-formatted input increases the probability of formal, exhaustive output patterns. Unstructured conversational input co-occurs with exploratory discussion, so the model follows those statistical tendencies.</p><p>Structure gives you leverage over consistency, but only if you verify that your structure actually produces the consistency you need. And structure isn&#8217;t the only variable. 
Temperature settings, model selection, and context length all affect output character. (I explore temperature&#8217;s effects in <a href="https://www.isophist.com/p/understanding-temperature-and-style">a separate lesson</a>.) Testing helps you understand how these variables interact for your specific use case.</p><h3>Testing Prompts vs. Testing Code</h3><p>When we test software, we&#8217;re verifying logical operations. Given input X, does the function return output Y? The relationship is deterministic. Run the test a thousand times, get the same result a thousand times.</p><p>Prompt testing operates on different principles. We&#8217;re examining rhetorical reliability, not verifying logic. Does the relationship we&#8217;ve established between human intention and machine interpretation remain stable across time, context, and varied inputs?</p><p>This distinction matters because it changes what we&#8217;re looking for. Code either passes or fails. Prompts exist on a spectrum of reliability, and our job is to understand where on that spectrum a given prompt sits for a given purpose.</p><p>The writing feedback chatbot didn&#8217;t &#8220;fail&#8221; in any binary sense. It produced plausible feedback every time. The question was whether that feedback remained consistent enough to be pedagogically useful &#8230; And whether students could trust that the evaluation criteria were being applied reliably rather than arbitrarily.</p><p><strong>That&#8217;s a question of judgment, not logic. And answering it requires a method for judgment.</strong></p><h3>What We&#8217;re Judging</h3><p>When you test a prompt systematically, you&#8217;re evaluating three aspects of the human-AI collaboration you&#8217;ve created.</p><p><strong>Stance stability.</strong> Every prompt establishes a rhetorical stance, or a position from which the AI speaks. 
&#8220;You are a writing tutor who provides constructive feedback focused on argument structure and evidence use&#8221; isn&#8217;t just an instruction. It&#8217;s establishing a consistent voice and perspective. </p><p>But does that stance actually persist?</p><p>With our classroom chatbot, testing revealed that the constructive-tutor stance held firm for the first few exchanges in a conversation, then gradually drifted toward generic encouragement. </p><p>This could happen because the model&#8217;s training data contains patterns where tutoring interactions soften over time, or because earlier instructions lose influence as the context window fills with conversation. </p><p>Whatever the mechanism, the effect was measurable: stance drift under extended use. Testing helped us identify where drift occurred so we could add stabilizing elements&#8212;periodic reinforcement of the evaluative criteria, structural markers that maintained the rigorous-feedback pattern.</p><p><strong>Interpretive framework reliability.</strong> Your prompt doesn&#8217;t just tell the AI what to do. It shapes how inputs get processed. When our chatbot prompt said &#8220;evaluate based on the technical communication rubric criteria,&#8221; we were creating conditions where rubric-related language would influence the output. But those conditions had gaps we didn&#8217;t anticipate.</p><p>The rubric worked well for standard assignments because the model had seen similar patterns. But when students submitted creative approaches, like an infographic, the statistical patterns broke down. The model couldn&#8217;t match rubric language to unfamiliar input formats, so it defaulted to surface-level observations about grammar and formatting. Testing with diverse input types revealed these blind spots. 
The fix wasn&#8217;t clarifying instructions&#8212;it was providing examples of the rubric applied to non-standard formats, giving the model patterns to match against.</p><p><strong>Collaborative boundaries.</strong> Every prompt creates what I think of as a collaborative space, or the zone where human intention and machine capability overlap productively. Testing maps the edges of this space.</p><p>For the classroom chatbot, we needed to know: What types of student writing produce useful feedback? Where does the feedback quality drop off? What submission characteristics cause confusion or generic responses? Which edge cases does the prompt handle gracefully, and which break it entirely?</p><p>These boundaries aren&#8217;t obvious from the prompt text. They emerge only through running varied inputs through the system and observing where reliability holds and where it fractures.</p><p>Knowing <em>what</em> to judge is only half the challenge. The Alexandrian grammarians understood this. They didn&#8217;t just identify what made a text authentic or well-formed. They also developed procedures for making those judgments systematically: comparing variants, marking uncertainties, documenting reasoning.</p><p>Prompt testing requires both dimensions. We need criteria for evaluation&#8212;what counts as stable stance, reliable interpretation, appropriate boundaries. And we need procedures for applying those criteria.</p><h2>The Rhetorical Appeals as Evaluation Criteria</h2><p>The <a href="https://continuum.fas.harvard.edu/homers-text-and-language/1-the-quest-for-a-definitive-text-of-homer-evidence-from-the-homeric-scholia-and-beyond/">Alexandrian grammarians </a>faced a problem we might recognize: they had no original to compare against. When scholars assessed a line of Homer, they weren&#8217;t checking it against some authoritative master copy&#8212;none existed. 
Homer was oral tradition committed to writing centuries after composition, and every manuscript was a copy of copies, each with its own variants and corruptions.</p><p>So how did these scholars develop criteria for judgment? By immersion in the corpus itself. They studied patterns across many manuscripts, inferring what Homeric diction typically looked like, identifying metrical conventions from the poems themselves, developing a sense of stylistic consistency through deep familiarity with the work. Their standards emerged from the body of texts, then got applied back to evaluate individual passages.</p><p>We&#8217;re doing something similar with AI outputs. There&#8217;s no &#8220;ideal response&#8221; to compare against&#8212;just multiple outputs from which we infer what &#8220;good&#8221; looks like for a particular purpose. Our criteria emerge from examining what works, identifying patterns that characterize successful responses, and then applying those standards to evaluate new outputs.</p><p>But rhetoric offers a framework that accelerates this process: <a href="https://en.wikipedia.org/wiki/Modes_of_persuasion">the three appeals.</a> Aristotle identified ethos (credibility), pathos (emotional engagement), and logos (reasoning) as the fundamental dimensions of persuasive communication. These aren&#8217;t just persuasion techniques&#8212;they&#8217;re categories for evaluating whether communication works.</p><p>We&#8217;ve applied them to speeches, text, digital media &#8230; And now AI outputs.</p><p>When we adapt them for prompt testing, they become three lenses for examining output quality.</p><h3>Ethos Testing: Can the Output Be Trusted?</h3><p>Ethos in classical rhetoric establishes the speaker&#8217;s credibility and character. 
For AI outputs, we&#8217;re not assessing whether the model has credibility (it doesn&#8217;t, inherently), but whether the outputs are trustworthy enough for the intended purpose.</p><p>Trustworthiness breaks down into two components: consistency and accuracy.</p><p><strong>Consistency</strong> asks whether the same prompt produces comparable outputs across multiple runs. This matters because inconsistent outputs can&#8217;t be trusted for systematic use. If a documentation prompt generates comprehensive coverage on one run and superficial summaries on the next, you can&#8217;t build a workflow around it.</p><p>Testing for consistency is straightforward: run the same prompt with the same input multiple times and compare the outputs. But &#8220;same&#8221; doesn&#8217;t mean identical. The question is whether variation falls within acceptable bounds for your purpose.</p><p>Consider a blog title generator. Testing the same article summary five times might produce five different titles&#8212;but if all five maintain brand voice, include relevant keywords, and target the right audience, that variation is a feature for brainstorming purposes. The prompt has sufficient ethos for generating options.</p><p>Contrast that with a product description prompt. If testing reveals 30% variation in which technical specifications get mentioned, the prompt lacks the consistency required for that task. Product descriptions need completeness, not creativity. The prompt would need explicit checklists and verification steps until testing shows reliable coverage of required elements.</p><p><strong>Accuracy</strong> asks whether the outputs are factually correct and appropriately grounded. This is particularly critical for prompts that draw on domain knowledge or make claims that could be verified.</p><p>Testing for accuracy requires reference points&#8212;either human expert review or comparison against known-correct information. 
For our classroom chatbot, we tested accuracy by having instructors evaluate whether the AI&#8217;s feedback aligned with how they would assess the same submissions. Where the AI and instructors diverged significantly, we examined whether the prompt&#8217;s criteria were unclear or whether the model was introducing its own evaluation standards.</p><p>The ethos question for any prompt is: <em>Can I trust this output enough to use it for its intended purpose?</em> Testing answers that question with evidence rather than hope.</p><h3>Pathos Testing: Is the Emotional Register Appropriate?</h3><p>Pathos in classical rhetoric involves emotional appeal&#8212;engaging the audience&#8217;s feelings appropriately for the context. For AI outputs, we&#8217;re testing whether the tone and emotional register remain appropriate across different inputs and contexts.</p><p>This matters more than many practitioners realize. Tone inconsistency can undermine otherwise solid content. A customer service prompt that sounds helpful for simple questions but becomes condescending for complex ones will damage relationships regardless of how accurate the information is.</p><p>Imagine an automated feedback system for student writing. The prompt might maintain an encouraging tone when reviewing strong work but shift to patronizing reassurance for weaker submissions. Phrases like &#8220;You tried your best&#8221; and &#8220;Don&#8217;t worry, writing is hard&#8221; appearing only in responses to struggling students would unintentionally signal that the system had already judged them as less capable.</p><p>In this scenario, the prompt&#8217;s ethos could be fine&#8212;consistent, accurate feedback. But its pathos would be off, treating different students with different levels of respect based on submission quality.</p><p>Testing pathos requires diverse inputs that trigger different emotional contexts. 
For a feedback system, this means testing with:</p><ul><li><p>Strong submissions (does it avoid excessive praise that might seem hollow?)</p></li><li><p>Weak submissions (does it maintain respect while identifying problems?)</p></li><li><p>Frustrated student language (does it respond with patience rather than matching the frustration?)</p></li><li><p>Confused questions (does it clarify without condescension?)</p></li></ul><p>For a customer service prompt, you&#8217;d test across complaint types, customer tones, and issue severity. For documentation, you might test whether the prompt maintains appropriate professional distance when explaining both mundane features and exciting new capabilities.</p><p>The pathos question is: <em>Does the emotional register remain appropriate across the full range of likely inputs?</em> Testing reveals where tone calibration breaks down.</p><h3>Logos Testing: Is the Reasoning Sound?</h3><p>Logos in classical rhetoric involves logical argument, or the structure and validity of reasoning. For AI outputs, we&#8217;re testing whether the logical framework established in the prompt actually governs how outputs get generated.</p><p>This goes beyond checking factual accuracy (that&#8217;s ethos). Logos testing examines whether the prompt&#8217;s stated priorities, decision rules, and evaluation criteria actually shape the output&#8212;or whether they get overridden by other patterns in the model&#8217;s training.</p><p>Consider a documentation prompt that claims to prioritize accuracy but consistently chooses simpler explanations over precise ones. The prompt might include both &#8220;maintain technical accuracy&#8221; and &#8220;explain in accessible language.&#8221; In practice, accessibility could win out every time&#8212;the AI sacrificing precision for readability without letting the user know.</p><p>This wouldn&#8217;t be a failure of the model. It would be a logical contradiction in the prompt that testing reveals. 
&#8220;Accurate and accessible&#8221; sounds reasonable until you encounter cases where accuracy requires technical precision that isn&#8217;t accessible. Without guidance for resolving that tension, the model could default to patterns from its training data, where accessible explanations are more common than technically precise ones.</p><p>Testing for logos means deliberately creating inputs that force your prompt&#8217;s priorities into conflict:</p><ul><li><p>If your prompt says &#8220;be concise but thorough,&#8221; test with topics that can&#8217;t be covered both concisely and thoroughly. Which wins?</p></li><li><p>If your prompt prioritizes &#8220;user benefit&#8221; and &#8220;technical accuracy,&#8221; test with features where the accurate description doesn&#8217;t sound beneficial. What happens?</p></li><li><p>If your prompt establishes an evaluation hierarchy (&#8220;first check X, then Y, then Z&#8221;), test with inputs where X and Y suggest different conclusions. Does the hierarchy hold?</p></li></ul><p>The logos question is: <em>When the prompt&#8217;s instructions compete, does the output resolve conflicts the way I intend?</em> Testing surfaces hidden contradictions and reveals which instructions actually govern behavior.</p><h3>Combining the Three Lenses</h3><p>Most prompt testing requires all three lenses, but their relative weight depends on purpose.</p><p><strong>For a research summarization prompt</strong>, logos dominates. You need the reasoning structure to govern output reliably. Ethos matters for accuracy, but pathos is less critical since emotional register in research summaries is relatively narrow.</p><p><strong>For a customer-facing chatbot,</strong> pathos may matter most. Users will forgive minor inconsistencies or occasional reasoning gaps if the tone feels right. 
They won&#8217;t forgive condescension or inappropriate cheerfulness when they&#8217;re frustrated.</p><p><strong>For a compliance documentation prompt, ethos is paramount.</strong> Consistency and accuracy are non-negotiable. Pathos and logos matter, but trustworthiness is the threshold requirement.</p><p>When designing your testing approach, identify which appeals are critical for your use case and weight your testing accordingly. A prompt can have strong ethos but weak pathos (consistent and accurate but tonally inappropriate), or strong logos but weak ethos (sound reasoning but inconsistent execution). Testing across all three reveals the full picture.</p><h2>From Criteria to Procedures</h2><p>Knowing what to evaluate doesn&#8217;t tell you how to evaluate it. The Alexandrian grammarians understood this. They developed not just standards for judgment but systematic procedures for applying those standards: methods for comparing variants, marking uncertainties, and documenting reasoning so others could follow or challenge their conclusions.</p><p>These procedures translate surprisingly well to prompt testing. The Alexandrians were solving a version of our problem: multiple variant texts, no definitive original, and the need for judgments that could be taught, reproduced, and defended.</p><h3>Recension: Multi-Run Comparison</h3><p><a href="https://bmcr.brynmawr.edu/2019/2019.04.35/">The Alexandrians framed this work as </a><em><a href="https://bmcr.brynmawr.edu/2019/2019.04.35/">diorth&#333;sis</a></em><a href="https://bmcr.brynmawr.edu/2019/2019.04.35/"> and </a><em><a href="https://bmcr.brynmawr.edu/2019/2019.04.35/">ekdosis</a></em><a href="https://bmcr.brynmawr.edu/2019/2019.04.35/">.</a> They collated multiple manuscript &#8220;witnesses&#8221; to identify variants, marked doubtful lines, and recorded their comparative reasoning in commentaries. 
Rather than trusting any single copy, they corrected the text and then issued a stabilized edition, choosing readings based on consistent patterns across the evidence.</p><p>For prompt testing, this becomes multi-run comparison. Never evaluate a prompt based on a single output. Run the same prompt with the same input multiple times and compare results.</p><p>This sounds obvious, but it&#8217;s surprisingly rare in practice. Most prompt development follows a pattern: write prompt, test once, adjust if the output looks wrong, test once more, deploy. That&#8217;s like the Alexandrians looking at one manuscript and declaring it authoritative.</p><p>Multi-run comparison reveals what single tests hide. When I tested my four prompt formats, I didn&#8217;t just run each once. Each variation ran under controlled conditions, with metrics tracked across runs. The patterns emerged from comparison, not from any single output.</p><p>For practical testing, I recommend a minimum of three runs for informal evaluation and five or more for anything you&#8217;ll deploy in production. Compare outputs looking for:</p><p><strong>Structural consistency.</strong> Do the outputs follow the same organization? If your prompt specifies a format, does that format hold across runs, or does it drift?</p><p><strong>Coverage variation.</strong> Do all runs address the same key points, or do some outputs omit information that others include? For the blog title generator, variation in titles is fine. For product descriptions, variation in which features get mentioned is a problem.</p><p><strong>Tonal range.</strong> Do all outputs stay within the same emotional register, or do some runs produce noticeably different tones? This is your pathos check.</p><p><strong>Priority adherence.</strong> When the prompt contains competing instructions, do all runs resolve the conflict the same way? This is your logos check.</p><p>The goal isn&#8217;t identical outputs. That&#8217;s neither possible nor desirable. 
The goal is understanding the range of variation your prompt produces and determining whether that range falls within acceptable bounds for your purpose.</p><h3>Athetesis: Marking Uncertainty</h3><p>When Alexandrian grammarians encountered lines they suspected were spurious or corrupted, they didn&#8217;t simply delete them. They marked them with an <em>obelos</em>&#8212;a horizontal line indicating doubt. This kept them visible in the text. Future scholars could see the judgment, assess the reasoning, and reach their own conclusions.</p><p>This practice, called <em>athetesis</em>, prioritized transparency over tidiness. A marked line told readers: &#8220;This is questionable, but I&#8217;m preserving it so you can evaluate my judgment.&#8221;</p><p>For prompt testing, this becomes uncertainty flagging. When you identify problems in AI outputs, mark them explicitly rather than silently fixing them or discarding the output entirely.</p><p>This matters for two reasons. First, patterns of uncertainty reveal prompt weaknesses. If you&#8217;re consistently flagging the same type of problem, you&#8217;ve identified where your prompt needs revision. Silent fixes hide these patterns.</p><p>Second, flagged outputs become training data for your own judgment. Over time, a collection of marked outputs teaches you (and your team) what to watch for. The Alexandrians built <em>scholia</em> (or commentary traditions) around their marked texts. 
You can build similar institutional knowledge around flagged AI outputs.</p><p>A simple flagging system might include markers like:</p><ul><li><p><strong>H</strong> for hallucination (unsupported claims or fabricated details)</p></li><li><p><strong>T</strong> for tone problems (inappropriate emotional register)</p></li><li><p><strong>I</strong> for incompleteness (missing required elements)</p></li><li><p><strong>C</strong> for contradiction (conflicts with prompt instructions or internal inconsistency)</p></li><li><p><strong>D</strong> for drift (departure from established stance or format)</p></li></ul><p>The specific markers matter less than consistent use. Pick a system and apply it across all your testing so patterns become visible.</p><h3>Scholia: Documenting Your Reasoning</h3><p>The Alexandrians didn&#8217;t just mark problems&#8212;they explained their judgments. Marginal notes called <em>scholia</em> documented why a line was suspect, what alternatives existed, and how the editor had reasoned through the decision. These annotations accumulated over generations, creating a scholarly conversation around the text.</p><p>For prompt testing, this becomes documented evaluation. Don&#8217;t just record that an output passed or failed&#8212;record why.</p><p>This is where most testing falls apart. Teams run prompts, glance at outputs, make a gut judgment, and move on. Nothing gets written down. A month later, no one remembers why certain prompt versions were rejected or what problems the current version was designed to solve.</p><p>Documented evaluation doesn&#8217;t require elaborate systems. A simple log capturing the following for each test proves valuable:</p><p><strong>The input used.</strong> What specific content did you feed the prompt? Save it so tests can be reproduced.</p><p><strong>The output received.</strong> Keep the full output, not just a summary or judgment.</p><p><strong>Your assessment.</strong> Did it pass or fail on ethos, pathos, logos? 
What specific problems did you identify? Use your flagging system.</p><p><strong>Your reasoning.</strong> Why did you judge it this way? What would have made it better? This is the scholia&#8212;the part that teaches future evaluators (including future you) how to think about the prompt&#8217;s performance.</p><p>When you revise a prompt based on testing, document what you changed and why. Link the revision to the specific test failures that motivated it. This creates a trail that makes prompt development cumulative rather than circular.</p><h3>Putting Procedures into Practice</h3><p>These Alexandrian procedures provide the methodological foundation. But you still need practical workflows for implementing them. The approach you choose depends on your technical resources and scale.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Paid subscribers get the full methodology below, plus early access to <em>Writing with Machines</em>&#8212;my course on building reliable AI writing workflows (Beta coming in January).</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>
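<p>A scholia-style test log like the one described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the <code>TestRecord</code> class and <code>flag_counts</code> helper are my own names, not from any particular tool), assuming the H/T/I/C/D flag codes listed earlier:</p>

```python
from dataclasses import dataclass, field

# Flag codes from the flagging system above:
# H = hallucination, T = tone, I = incompleteness,
# C = contradiction, D = drift
FLAGS = {"H", "T", "I", "C", "D"}

@dataclass
class TestRecord:
    """One evaluation log entry: input, output, flags, and reasoning."""
    prompt_input: str
    output: str
    flags: set = field(default_factory=set)
    reasoning: list = field(default_factory=list)

    def add_flag(self, code: str, note: str) -> None:
        """Mark a problem (athetesis) and document why (scholia)."""
        if code not in FLAGS:
            raise ValueError(f"unknown flag code: {code}")
        self.flags.add(code)
        self.reasoning.append(f"[{code}] {note}")

def flag_counts(log: list) -> dict:
    """Tally flags across a log so recurring prompt weaknesses surface."""
    counts = {code: 0 for code in sorted(FLAGS)}
    for record in log:
        for code in record.flags:
            counts[code] += 1
    return counts
```

<p>Run over a few dozen tests, a tally like this makes it visible when, say, hallucination flags keep clustering around the same prompt section.</p>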
      <p>
          <a href="https://www.isophist.com/p/testing-as-rhetorical-proof">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[What the Ancient Art of Organized Thinking Says About AI Hallucinations]]></title><description><![CDATA[Deep Reading, Episode 6]]></description><link>https://www.isophist.com/p/what-the-ancient-art-of-organized</link><guid isPermaLink="false">https://www.isophist.com/p/what-the-ancient-art-of-organized</guid><dc:creator><![CDATA[Lance Cummings]]></dc:creator><pubDate>Mon, 08 Dec 2025 16:12:38 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/181053059/f34a92139ebbd45ad19bd3e913840794.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Welcome to Deep Reading. I&#8217;m Lance Cummings from Cyborgs Writing, and today we&#8217;re exploring a question that might sound simple, but the deeper you dig, the more complicated it gets. </p><p><strong>What happens when AI gets &#8220;confused&#8221;?</strong></p><p>I recently discovered a metric called <em>semantic entropy</em>. </p><p>Before your eyes glaze over at the word &#8220;entropy,&#8221; let me explain why it&#8217;s important.</p><p><strong>Semantic entropy measures how much an AI&#8217;s responses vary in meaning when you ask it the same question multiple times.</strong> </p><p>High entropy means the model generates different meanings on each attempt&#8212;it doesn&#8217;t have stable knowledge, so it improvises. 
Low entropy means consistent responses.</p><p>This is one reason why AI hallucinates.</p><p>For this podcast, I&#8217;m going to try to bring this concept down to earth and make it actionable through the eyes of ancient rhetoric.</p><p>From an ancient rhetoric perspective, high semantic entropy is when your AI model is walking through a house with no rooms.</p><p>Let me explain.</p><p><em>For those reading, this is a transcript of the podcast, which can be listened to above or in your favorite podcast player.</em></p><h2>Recent research into semantic entropy</h2><p>A few months ago, a paper came out in <em>Nature</em> called <a href="https://www.nature.com/articles/s41586-024-07421-0">&#8220;Detecting hallucinations in large language models using semantic entropy.&#8221;</a> </p><p>They had developed and refined a way to measure when AI seems confused, even when it sounds confident.</p><p>Here&#8217;s how it works. </p><p>You ask an AI the same question multiple times. For example, &#8220;What are the installation steps?&#8221; </p><p>And you get back five different answers. Now, those answers might use different words, but do they <em>mean</em> the same thing?</p><p>If  answer one says &#8220;First, power down the system&#8221; and answer two says &#8220;Begin by turning off power&#8221;&#8212;that&#8217;s low semantic entropy. Different words, same meaning. The AI got the response right.</p><p>But if answer one says &#8220;power down first&#8221; and answer three says &#8220;leave power on during installation,&#8221; then you&#8217;ve got high semantic entropy. The meanings contradict, and the AI is improvising. It probably isn&#8217;t building its answer on solid information.</p><p>This happens even when the AI seems confident. It&#8217;s not hedging with &#8220;maybe&#8221; or &#8220;possibly.&#8221; It&#8217;s just... making stuff up to fill the gap.</p><p>The researchers showed that semantic entropy can predict hallucinations with pretty good accuracy. 
When entropy is high, you&#8217;re about to get unreliable information.</p><h2>Why this matters</h2><p>Now, you might be thinking, &#8220;Okay, that&#8217;s interesting from a computer science perspective. But I&#8217;m a writer, a professor, a content developer. What does this have to do with me?&#8221;</p><p>Everything.</p><p>Because while this semantic entropy research is newer, a broader principle has been established across other studies: <strong>how you structure source content directly affects AI performance.</strong></p><p><a href="https://www.isophist.com/p/what-is-rag-no-really-what-is-it">Research on RAG systems</a>, or the technology most organizations use for AI-powered search and question-answering, shows that chunking strategy can impact performance as much as or more than the choice of AI model itself.</p><p>Think about what causes high entropy. The AI generates variable meanings because it doesn&#8217;t have stable grounding in what the source material actually says. In a way, it&#8217;s uncertain or guessing.</p><p>And what causes that &#8220;uncertainty&#8221;? The research suggests it&#8217;s often the source material. When documents are poorly organized, the AI does what a confused human reader would do. It fills gaps. Makes assumptions. Creates different interpretations.</p><p>The semantic entropy metric gives us a way to measure this instability. But the underlying principle isn&#8217;t new: structure matters for machine comprehension just like it matters for human comprehension.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">By the way, I&#8217;ll be digging into this even more for paid subscribers. 
Consider supporting this work to get access to upcoming tests.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>&#8202;I should add a note here. The AI model isn&#8217;t actually getting uncertain, and that&#8217;s really part of the problem. The knowledge it&#8217;s working from is uncertain, but the AI is trained to be confident, and in the end, that&#8217;s what causes semantic entropy.</p><p>So you need confident information or knowledge behind your model to match the confidence it was trained to project.</p><h2>Why ancient rhetoric is still important</h2><p>This problem isn&#8217;t new.</p><p>Ancient rhetoricians figured this out thousands of years ago.</p><p>They had to create speeches on the fly. In the Assembly, in the courts, at public ceremonies. No time to prepare. </p><p>Just, &#8220;here&#8217;s your topic, now speak.&#8221;</p><p>How did they do it? They used something called <em>topoi</em>.</p><p>The word literally means &#8220;places&#8221; or &#8220;rooms.&#8221; They organized their knowledge like a house with clearly labeled rooms.</p><ul><li><p>Need to define something? Go to the definition room. </p></li><li><p>Need to compare two things? The comparison room. </p></li><li><p>Need to trace cause and effect? That room.</p></li></ul><p>Having these stable mental spaces, or patterns, meant they could reliably find what they needed and construct coherent arguments quickly.</p><p>In 1984, Carolyn Miller wrote what became one of the most cited papers in rhetorical studies, called <a href="https://www.researchgate.net/publication/238749675_Genre_as_Social_Action">&#8220;Genre as Social Action.&#8221;</a> And she argued that this is how all communication works. 
We recognize recurrent situations, and we reach for typified patterns of response.</p><p>When the situation recurs in recognizable form, we know what to do. We have stable knowledge structures to draw from.</p><p>When it doesn&#8217;t, we improvise. We hedge. We contradict ourselves across attempts.</p><h2>Topoi in machine rhetorics</h2><p>High semantic entropy is the computational version of lacking stable topoi.</p><p>When you ask an AI the same question multiple times and get semantically different answers, the model is doing exactly what a rhetor without proper topoi would do. It&#8217;s improvising under uncertainty. It lacks the organizational patterns, or the &#8220;rooms,&#8221; where specific types of knowledge reliably live.</p><p><strong>But &#8230; you can create those rooms through content structure.</strong></p><p>When you write a procedure with clear steps, properly labeled, you&#8217;re creating the &#8220;procedure room.&#8221;</p><p>When you write a concept explanation with a definition, characteristics, and examples in consistent order, you&#8217;re creating the &#8220;concept room.&#8221;</p><p>When you use consistent terminology throughout, you&#8217;re making sure the rooms have clear labels.</p><p>This is what structured content does. It is creating stable topoi for machines.</p><p>Low semantic entropy means the AI knows which room it&#8217;s in and what that room contains. It&#8217;s not guessing. It has reliable patterns to draw from.</p><h2>What does this mean for you?</h2><p>So what do you do with this?</p><p>First, understand that structure isn&#8217;t just about making content look organized. Structure is a signal. It&#8217;s how you communicate to both humans and machines.</p><p><strong>&#8220;This is what kind of information this is, and here&#8217;s how to use it.&#8221;</strong></p><p>Second, recognize that the same principles that help human readers help AI systems. Clear headings. Focused chunks. Consistent terminology. 
Explicit organization. </p><p>Third, start thinking of yourself not just as a writer but as an information designer. Your job isn&#8217;t just to explain things clearly. It&#8217;s to create reliable knowledge structures that work across contexts, including computational ones.</p><p>The content professional who understands this is going to be incredibly valuable as AI becomes more central to how information gets used.</p><h2>Challenge</h2><p>So here&#8217;s my challenge to you: Next time you&#8217;re creating content, ask yourself: <em>Am I building a house with clearly labeled rooms?</em> Or am I creating an unmarked space where readers, human or machine, have to guess what goes where?</p><p>Because high semantic entropy isn&#8217;t just an AI problem. It&#8217;s a content problem.</p><p>And content problems? Those are solvable.</p><p>Are you wondering how we might test for semantic entropy? Well, stay tuned. More on that soon!</p><p>Until then, I&#8217;m Lance Cummings. Keep reading deeply.</p><p>And if you&#8217;re testing this stuff in your own work, I want to hear about it. 
Find me on LinkedIn or drop a comment on the newsletter.</p><p>Talk to you next time.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/p/what-the-ancient-art-of-organized/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.isophist.com/p/what-the-ancient-art-of-organized/comments"><span>Leave a comment</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Is Structured Prompting Dead?]]></title><description><![CDATA[Exploring what happens when testing prompt format]]></description><link>https://www.isophist.com/p/is-structured-prompting-dead</link><guid isPermaLink="false">https://www.isophist.com/p/is-structured-prompting-dead</guid><dc:creator><![CDATA[Lance Cummings]]></dc:creator><pubDate>Tue, 25 Nov 2025 14:14:55 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!hyYi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F372342b3-21b0-41eb-b621-4088526b1225_1280x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!hyYi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F372342b3-21b0-41eb-b621-4088526b1225_1280x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!hyYi!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F372342b3-21b0-41eb-b621-4088526b1225_1280x768.png 424w, 
https://substackcdn.com/image/fetch/$s_!hyYi!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F372342b3-21b0-41eb-b621-4088526b1225_1280x768.png 848w, https://substackcdn.com/image/fetch/$s_!hyYi!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F372342b3-21b0-41eb-b621-4088526b1225_1280x768.png 1272w, https://substackcdn.com/image/fetch/$s_!hyYi!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F372342b3-21b0-41eb-b621-4088526b1225_1280x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hyYi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F372342b3-21b0-41eb-b621-4088526b1225_1280x768.png" width="1280" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/372342b3-21b0-41eb-b621-4088526b1225_1280x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:508394,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/179864389?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F372342b3-21b0-41eb-b621-4088526b1225_1280x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!hyYi!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F372342b3-21b0-41eb-b621-4088526b1225_1280x768.png 
424w, https://substackcdn.com/image/fetch/$s_!hyYi!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F372342b3-21b0-41eb-b621-4088526b1225_1280x768.png 848w, https://substackcdn.com/image/fetch/$s_!hyYi!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F372342b3-21b0-41eb-b621-4088526b1225_1280x768.png 1272w, https://substackcdn.com/image/fetch/$s_!hyYi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F372342b3-21b0-41eb-b621-4088526b1225_1280x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image created by <a href="https://try.gamma.app/ka5vvp4ov8sj">Gamma.AI</a></figcaption></figure></div><p>&#8220;Structured prompting is dead.&#8221;</p><p>The proclamation came through my feeds like so many others.  </p><p>Yet another casualty of AI&#8217;s rapid evolution. As models get smarter, the argument goes, we no longer need to carefully design our instructions. </p><p>&#8220;Just talk naturally. The AI will figure it out.&#8221;</p><p>Like so many claims of this kind, it hasn&#8217;t been tested publicly much (partly because it&#8217;s pretty difficult when you get into the nitty-gritty). While some have researched AI outputs (<a href="https://arxiv.org/pdf/2411.10541">like this Microsoft study</a>), we still lack a clear understanding of <em>why</em> different structures create such different outcomes. </p><p>I&#8217;ve been using structured prompts exclusively in my work&#8212;for teaching, for content generation, and for the AI writing tools I build with students. </p><div class="pullquote"><p>How we shape information for AI isn&#8217;t just about getting better outputs or faster processing. It&#8217;s about choosing your <strong>rhetorical stance</strong> in a human-machine collaboration.</p></div><p>Yes, it&#8217;s because they&#8217;ve always delivered consistent results, but it&#8217;s also much easier for humans to work with well-designed instructions, so why not machines? </p><p>It works, so why bother testing? Well, this discussion about the death of prompting got me wondering.</p><p>What am I missing by not comparing approaches? Am I clinging to unnecessary complexity while everyone else has moved on to conversational simplicity?</p><p>So I ran an experiment. </p><p>Not to defend structured prompting, but to understand what structure actually does&#8212;to processing time, to cost, to output behavior. 
</p><p>The results tell a story about information design that goes beyond prompt engineering. </p><p>How we shape information for AI isn&#8217;t just about getting better outputs or faster processing. It&#8217;s about choosing your <strong>rhetorical stance</strong> in a human-machine collaboration.</p><div class="pullquote"><p>The format becomes a cue that shapes not just what the AI produces, but how it understands its role in the conversation.</p></div><p>When I say &#8220;rhetorical stance,&#8221; I mean the relationship and role that gets established between speaker and audience through how something is communicated. </p><p>Think of it like this: the same information delivered in a legal contract, a friendly email, and a technical manual creates fundamentally different relationships between writer and reader. </p><ul><li><p>The legal contract positions the writer as an authority establishing binding terms.</p></li><li><p>The friendly email creates a peer-to-peer collaboration. </p></li><li><p>The technical manual sets up an instructor-student dynamic. </p></li></ul><p>With AI, prompt format works the same way by signaling to the model what kind of interaction this is. 
</p><p>JSON says &#8220;we&#8217;re doing technical documentation now.&#8221; Natural language says &#8220;we&#8217;re having a focused discussion.&#8221; Unstructured rambling says &#8220;let&#8217;s explore ideas together.&#8221; </p><p>The format becomes a cue that shapes not just what the AI produces, but how it understands its role in the conversation.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Paid subscribers will get beta access to my new course, Writing with Machines, that incorporates these findings and more. Coming soon!</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>How I Performed the Test</h2><p>I took <a href="https://www.isophist.com/p/prompt-9-synthesizing-microcontent">a prompt</a> I&#8217;d developed for the <a href="https://wacclearinghouse.org/repository/collections/writing-studies-prompt-library/">WAC Clearinghouse prompt library</a> (revised version coming soon)&#8212;a complex instruction set for helping student writers analyze their microessays through rhetorical appeals. </p><p>This wasn&#8217;t a simple &#8220;write me a paragraph&#8221; task. It required the AI to perform multi-layered analysis: identify patterns across texts, develop ethos strategies, create emotional connections, and suggest logical organization. 
The kind of sophisticated content work that professionals do every day.</p><p>Using <a href="https://www.promptlayer.com/">PromptLayer&#8217;s</a> testing tools, I ran four variations in a single session to ensure clean comparisons:</p><p>1&#65039;&#8419; <strong>The structured prompt with semantic tags. </strong>My original version with clear sections like [ROLE], [CONTEXT], [TASK], and specific subsections for different types of analysis. </p><p>2&#65039;&#8419; <strong>The structured prompt with tags removed.</strong> All the organization remained&#8212;the logical flow, the clear sections&#8212;but without the explicit XML-style markers. </p><p>3&#65039;&#8419; <strong>An unstructured version</strong>. I asked Claude to rewrite the same prompt as if a student were explaining the task conversationally. Same requirements, same goals, but delivered as natural language.</p><p>4&#65039;&#8419; <strong>A JSON version. </strong>After a LinkedIn commenter asked if I meant JSON when I said &#8220;structured,&#8221; I converted the entire prompt to JSON format&#8212;pure data syntax with nested objects and arrays.</p><p>Each test tracked three metrics: <strong>processing time, cost, and output token count</strong>. I also compared outputs to see if there were any rhetorical comparisons to be made.</p><p>While I think the results were enlightening, I should be clear that this is an imperfect test. Other factors like how busy a server or API is can influence these statistics. But I think it is good enough to make some observations.</p><h2>What the Numbers Revealed</h2><p>The patterns were striking, though not for the reasons I initially thought.</p><p><strong>The structured prompt without semantic tags completed in 64 seconds producing 598 output tokens. 
</strong>This provided a baseline that was pretty clean and efficient.</p><p><strong>Adding semantic tags increased time to 83 seconds with 711 output tokens.</strong> My theory is that the tags acted as &#8220;expansion cues,&#8221; signaling that each section deserved elaboration.</p><p><strong>The JSON format took 97 seconds and generated 812 tokens.</strong> But further research shows that it wasn&#8217;t struggling to process JSON. </p><p>The format triggered what researchers call <strong>&#8220;<a href="https://blog.promptlayer.com/is-json-prompting-a-good-strategy/">technical documentation mode</a>&#8221;</strong> that pattern matches more technical genres, producing exhaustive and mechanical prose.</p><p><strong>The unstructured version was slowest at 120 seconds with over 1,000 output tokens&#8212;nearly double the baseline. </strong>My theory is that the conversational prompt triggered exploratory generation, treating the task as an invitation to discuss rather than execute.</p><p>Initially, I assumed the timing differences reflected processing difficulty&#8212;surely JSON must be harder for the model to parse. But the math tells a different story. <a href="https://blog.promptlayer.com/is-json-prompting-a-good-strategy/">In language models, input processing happens in parallel (~0.24ms per token), while output generation is sequential and 20-400x slower per token.</a></p><p>Those 214 extra tokens JSON produced account for most of the 33-second timing difference. <strong>The model wasn&#8217;t struggling to read JSON&#8212;it was taking longer because JSON triggered a more verbose response mode.</strong></p><p>This actually makes the finding more interesting. It&#8217;s not about computational efficiency but about rhetorical stance. Different formats don&#8217;t just organize information differently. 
They cue fundamentally different generation patterns:</p><ul><li><p> JSON: Triggers technical documentation mode (exhaustive, formal, mechanical) </p></li><li><p>Natural structure: Activates focused communication patterns </p></li><li><p>Semantic tags: Signals &#8220;elaborate on each section&#8221; behavior </p></li><li><p>Unstructured: Invites exploratory discussion</p></li></ul><p>The character of the output revealed even more than the metrics. The JSON response read like a compliance document&#8212;numbered sections, technical language, etc. </p><p>After completing my tests, I discovered that <a href="https://arxiv.org/pdf/2411.10541">Microsoft researchers</a> had recently published similar findings, documenting performance variations up to 40% based solely on prompt format across multiple GPT models and benchmarks.</p><p>Their comprehensive study validates how prompt format significantly affects model performance, with no universal optimal format even within the same model family.</p><p><strong>That&#8217;s to say &#8230; Prompt format is rhetorical!</strong></p><p>Their research confirms several key points:</p><ul><li><p>GPT-3.5 models showed dramatic performance swings&#8212;in some cases over 200% improvement when switching formats</p></li><li><p>Different models prefer different formats (GPT-3.5 favoring JSON, GPT-4 favoring Markdown)</p></li><li><p>Larger models like GPT-4 demonstrate greater resilience to format changes, though notably not immunity</p></li></ul><p>So my findings aren&#8217;t necessarily quirks of my specific test prompt or isolated anomalies. The pattern is systemic across tasks and models.</p><p>However, what the Microsoft research doesn&#8217;t address is <em>why</em> these patterns exist. They document the what but not the why. 
</p><p>The progression from natural structure to mechanical syntax reflects a fundamental principle about human-AI communication.</p><h2>Why Structure Behaves Like Rhetoric</h2><p>Structure doesn&#8217;t just organize information; it establishes the collaborative stance between human and AI.</p><p>Optimal performance comes from matching structure to the rhetorical nature of the task. Naturally structured writing processes fastest because it aligns with how the model was trained to understand human communication.</p><p>When we provide semantic tags, we&#8217;re creating what I call &#8220;expansion cues.&#8221; The model sees [ETHOS STRATEGY] and understands this as a space requiring elaboration. It sees [TASK] and knows to provide comprehensive detail. Tags act like rhetorical zones that encourage certain types of discourse.</p><p>JSON creates an entirely different rhetorical space that focuses on compliance and formality. This activates what researchers identify as &#8220;technical mode,&#8221; producing exhaustive, formal outputs. The 2.3x tokenization penalty of JSON (all those brackets, quotes, and repeated keys) compounds the effect, consuming attention budget without adding semantic value.</p><p>Each format implies a different relationship between human and machine. </p><ul><li><p>Natural language says: &#8220;Let&#8217;s communicate clearly&#8221; </p></li><li><p>JSON says: &#8220;Complete this technical specification&#8221; </p></li><li><p>Semantic tags say: &#8220;Document this thoroughly&#8221; </p></li><li><p>Unstructured says: &#8220;Let&#8217;s explore this together&#8221;</p></li></ul><p>The irony? 
The &#8220;most machine-readable&#8221; format (JSON) produces the least useful output for human consumption, while natural human organization produces the most efficient machine behavior.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TIaU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F612eb7c3-4fc9-4d68-92fb-be461838a4a7_1506x1340.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TIaU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F612eb7c3-4fc9-4d68-92fb-be461838a4a7_1506x1340.png 424w, https://substackcdn.com/image/fetch/$s_!TIaU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F612eb7c3-4fc9-4d68-92fb-be461838a4a7_1506x1340.png 848w, https://substackcdn.com/image/fetch/$s_!TIaU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F612eb7c3-4fc9-4d68-92fb-be461838a4a7_1506x1340.png 1272w, https://substackcdn.com/image/fetch/$s_!TIaU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F612eb7c3-4fc9-4d68-92fb-be461838a4a7_1506x1340.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TIaU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F612eb7c3-4fc9-4d68-92fb-be461838a4a7_1506x1340.png" width="1456" height="1296" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/612eb7c3-4fc9-4d68-92fb-be461838a4a7_1506x1340.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1296,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:313762,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/179864389?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F612eb7c3-4fc9-4d68-92fb-be461838a4a7_1506x1340.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TIaU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F612eb7c3-4fc9-4d68-92fb-be461838a4a7_1506x1340.png 424w, https://substackcdn.com/image/fetch/$s_!TIaU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F612eb7c3-4fc9-4d68-92fb-be461838a4a7_1506x1340.png 848w, https://substackcdn.com/image/fetch/$s_!TIaU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F612eb7c3-4fc9-4d68-92fb-be461838a4a7_1506x1340.png 1272w, https://substackcdn.com/image/fetch/$s_!TIaU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F612eb7c3-4fc9-4d68-92fb-be461838a4a7_1506x1340.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>This connects directly to my research on chunk size in RAG systems. Just as different chunk sizes serve different types of questions (64 tokens for facts, 1024 for complex reasoning), different prompt structures serve different collaborative needs. </p><p>The design isn&#8217;t just technical&#8212;it&#8217;s rhetorical. We&#8217;re not optimizing for machines. We&#8217;re designing collaborative spaces.</p><h2>The Scale Mathematics</h2><p>For a single prompt, these differences might seem academic. An extra 56 seconds here, three-tenths of a cent there. But content operations don&#8217;t run on single prompts. They run on thousands, tens of thousands, millions of interactions.</p><p>So let&#8217;s make this concrete. 
Say you&#8217;re running a content operation that processes 1,000 prompts daily&#8212;not unusual for automated content generation, customer service, or educational applications. </p><p>Switching from unstructured to naturally structured prompts saves 56 seconds per query. That&#8217;s 15.5 hours of processing time per day&#8212;nearly two full workdays. Over a month, you&#8217;re looking at 465 hours of saved processing time. </p><p>The cost compounds too. While individual prompt costs seem negligible, at scale the differences matter. Those additional 400+ tokens per response in unstructured prompts mean 40% more data volume. Your storage costs increase. Your analysis tools work harder. Your editors spend more time trimming verbosity.</p><p>But the real cost isn&#8217;t measurable in dollars or seconds. It&#8217;s in variance and character. Unstructured prompts produce unpredictable outputs&#8212;sometimes brilliant, sometimes meandering, always different. JSON produces consistent but mechanical prose&#8212;reliable but soulless. </p><p>When you&#8217;re building content operations, you need to choose not just your efficiency point but your rhetorical stance. Do you want exploratory collaboration, professional documentation, efficient communication, or bureaucratic compliance?</p><h2>A Framework for Prompt Design Decisions</h2><p>After running these tests and analyzing the patterns, I&#8217;ve developed a decision framework that treats prompt structure as a rhetorical choice rather than a technical optimization.</p><p>Start with output shape, not input format. Before writing any prompt, define exactly what you need: A three-paragraph summary? A five-point action plan? A detailed analysis with specific sections? Your output requirements should drive your structural choices, not the other way around.</p><p>Next, identify your rhetorical needs. 
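</p><p>The scale arithmetic above can be made explicit in a few lines. This is a back-of-envelope sketch: the 56-second and 400-token figures come from these tests, while the 30-day month is just a convenient round number.</p>

```python
# Toy model of the daily and monthly savings described above.
PROMPTS_PER_DAY = 1_000
SECONDS_SAVED_PER_PROMPT = 56      # naturally structured vs. unstructured
EXTRA_TOKENS_UNSTRUCTURED = 400    # extra output tokens per response

hours_saved_per_day = PROMPTS_PER_DAY * SECONDS_SAVED_PER_PROMPT / 3600
hours_saved_per_month = hours_saved_per_day * 30
extra_tokens_per_month = PROMPTS_PER_DAY * EXTRA_TOKENS_UNSTRUCTURED * 30

# 56,000 seconds is roughly 15.5 hours per day, about 465 hours over a
# 30-day month, and the extra output compounds to 12 million tokens a month.
print(hours_saved_per_day, hours_saved_per_month, extra_tokens_per_month)
```

<p>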
What kind of collaboration do you want with the AI?</p><ul><li><p><strong>Natural structure (no tags)</strong>: For efficient, focused communication. When you need speed and clarity without elaborate formatting.</p></li><li><p><strong>Semantic tags</strong>: For professional documentation requiring consistent sections. When repeatability and scannable output matter more than speed.</p></li><li><p><strong>JSON</strong>: For data transformation tasks or when you need absolute consistency, accepting the trade-off of mechanical prose and slower processing.</p></li><li><p><strong>Unstructured</strong>: For exploration, ideation, or when you want the AI to think expansively about possibilities.</p></li></ul><p>Then assess your scale needs. Running ten prompts a day? The performance differences might not matter. Running ten thousand? Every second and cent compounds. An 88% performance difference between best and worst approaches becomes operationally critical at scale.</p><div class="pullquote"><p>The death of structured prompting has been greatly exaggerated. What&#8217;s actually dying is the simplistic binary of &#8220;structured versus unstructured.&#8221; The reality, as these tests reveal, is a spectrum of rhetorical choices, each creating different collaborative dynamics with AI.</p></div><p>Consider your variance tolerance. If you need consistent, repeatable outputs&#8212;think templates, reports, standardized responses&#8212;structure is non-negotiable. The tighter your structure, the lower your variance. But remember: JSON&#8217;s consistency comes with a rhetorical cost. You get reliability but lose voice.</p><p>For semantic tags specifically, I&#8217;ve found they work best when you need scannable, sectioned outputs that others will consume&#8212;documentation, lesson plans, reports. 
They&#8217;re rhetorical markers that say &#8220;this content has formal zones.&#8221; But they also encourage elaboration, adding processing time and output length.</p><p>Skip tags when you need conversational brevity&#8212;emails, status updates, quick summaries. The model treats unmarked structure as a cue for flowing prose rather than sectioned content. You get the benefits of logical organization without the expansion behavior that tags trigger.</p><p>Avoid JSON unless you explicitly need data structure output or are willing to accept bureaucratic prose for absolute consistency. The processing overhead and rhetorical stance rarely justify its use for content generation tasks.</p><h2>Where Prompt Engineering Is Actually Heading</h2><p>The death of structured prompting has been greatly exaggerated. What&#8217;s actually dying is the simplistic binary of &#8220;structured versus unstructured.&#8221; The reality, as these tests reveal, is a spectrum of rhetorical choices, each creating different collaborative dynamics with AI.</p><p>What we&#8217;re witnessing isn&#8217;t the evolution of prompt engineering as a technical skill. It&#8217;s the emergence of a new literacy&#8212;one where writers understand not just how to communicate with humans, but how to design information for collaborative intelligence. </p><p>Where content professionals design not just for reading, but for processing. </p><p>Where rhetoric extends beyond human audiences to include artificial ones.</p><p>The future is less about structured vs. unstructured and more about developing rhetorical awareness, or understanding how different information architectures create different collaborative possibilities with AI systems. </p><p>Sometimes you need the precision of semantic tags for repeatable documentation. </p><p>Sometimes you need the efficiency of natural structure for rapid processing. </p><p>Sometimes you need the exploration of conversational prompting for ideation. 
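</p><p>As a concrete sketch of these four stances, here is one way the same summarization request might be framed in each. This is illustrative only: the tag names and JSON keys are my own, not a prescribed schema.</p>

```python
import json

# Four framings of the same task, loosest to strictest. The tag names and
# JSON keys here are illustrative, not a standard any model requires.
def build_prompts(source_text: str) -> dict:
    return {
        "unstructured": (
            "I have this release note and I'd love a summary that captures "
            f"what matters for customers:\n\n{source_text}"
        ),
        "natural": (
            "Summarize the release note below for customers. "
            f"Keep it to three sentences, plain prose.\n\n{source_text}"
        ),
        "semantic_tags": (
            f"<context>{source_text}</context>\n"
            "<task>Summarize for customers.</task>\n"
            "<format>Three sentences, plain prose.</format>"
        ),
        "json": json.dumps({
            "task": "summarize",
            "audience": "customers",
            "constraints": {"sentences": 3, "style": "plain prose"},
            "input": source_text,
        }),
    }
```

<p>Same task, four rhetorical stances: choosing among them is the decision framework above, not a question of which is most &#8220;machine-readable.&#8221;</p><p>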
</p><p>And yes, sometimes you might even need JSON&#8217;s mechanical consistency for pure data transformation.</p><p>Understanding structure as rhetoric&#8212;not just formatting&#8212;becomes essential as AI systems become more central to content work.</p><p>This isn&#8217;t the end of structured prompting. It&#8217;s the beginning of rhetorical information design.</p><div><hr></div><p><em>What&#8217;s your experience with prompt structure? Have you noticed different AI behaviors with different formatting approaches? I&#8217;d particularly love to see tests comparing other structural formats&#8212;Markdown, YAML, or even programming language syntax. Drop your observations in the comments, or better yet, run your own tests and share the data. Together, we can map the full spectrum of rhetorical possibilities in human-AI collaboration.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/p/is-structured-prompting-dead/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.isophist.com/p/is-structured-prompting-dead/comments"><span>Leave a comment</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Vibe Coding Isn't About Vibes]]></title><description><![CDATA[It's rhetoric & knowledge]]></description><link>https://www.isophist.com/p/vibe-coding-isnt-about-vibes</link><guid isPermaLink="false">https://www.isophist.com/p/vibe-coding-isnt-about-vibes</guid><dc:creator><![CDATA[Lance Cummings]]></dc:creator><pubDate>Mon, 10 Nov 2025 16:00:49 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!CN-f!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55dba006-75cb-4dd5-8aaa-e3544885fb86_1280x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div 
class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CN-f!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55dba006-75cb-4dd5-8aaa-e3544885fb86_1280x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CN-f!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55dba006-75cb-4dd5-8aaa-e3544885fb86_1280x768.png 424w, https://substackcdn.com/image/fetch/$s_!CN-f!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55dba006-75cb-4dd5-8aaa-e3544885fb86_1280x768.png 848w, https://substackcdn.com/image/fetch/$s_!CN-f!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55dba006-75cb-4dd5-8aaa-e3544885fb86_1280x768.png 1272w, https://substackcdn.com/image/fetch/$s_!CN-f!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55dba006-75cb-4dd5-8aaa-e3544885fb86_1280x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!CN-f!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55dba006-75cb-4dd5-8aaa-e3544885fb86_1280x768.png" width="1280" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/55dba006-75cb-4dd5-8aaa-e3544885fb86_1280x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1243279,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/178500768?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F85c24867-7c6c-42fc-a5dd-3a9eb3946a37_1280x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!CN-f!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55dba006-75cb-4dd5-8aaa-e3544885fb86_1280x768.png 424w, https://substackcdn.com/image/fetch/$s_!CN-f!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55dba006-75cb-4dd5-8aaa-e3544885fb86_1280x768.png 848w, https://substackcdn.com/image/fetch/$s_!CN-f!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55dba006-75cb-4dd5-8aaa-e3544885fb86_1280x768.png 1272w, https://substackcdn.com/image/fetch/$s_!CN-f!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55dba006-75cb-4dd5-8aaa-e3544885fb86_1280x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Created with <a href="https://try.gamma.app/ka5vvp4ov8sj">Gamma.AI</a></figcaption></figure></div><p>This semester I&#8217;ve been frustrated by the inadequacy of institutional tools for doing the kind of writing I want to do in the classroom.</p><p>The biggest problem? Tracking student work without grading.</p><p>I don&#8217;t give grades in my writing courses because they only get in the way of learning to write. When students face a grade, they stop experimenting. </p><p>They play it safe and avoid the kind of risk-taking that actually develops writing ability. Experimentation is key to becoming a good writer, and students are less likely to experiment with a grade on the line.</p><p>This year I&#8217;m taking a &#8220;writing gym&#8221; approach with my essay course. 
The concept is simple: treat writing like strength training. You build writing ability through consistent daily practice, not occasional high-stakes performances. </p><p>Students:</p><ul><li><p>track daily word counts</p></li><li><p>work in three-person accountability pods, and </p></li><li><p>focus on showing up rather than producing perfect drafts. </p></li></ul><p>Complete or not complete? Did you do your practice or not?</p><p>But learning management systems are built for grades. They&#8217;re designed around point systems, percentage calculations, and assignment submissions. They make tracking daily practice without numerical judgment nearly impossible.</p><p>So at the start of the semester, we used a tool called <a href="https://750words.com/">750 Words</a>, which was perfect for this approach. </p><p>This tool encourages writers to write 750 words every day with a simple interface that tracks and rewards regular writing. It&#8217;s a great tool &#8230; You should check it out.</p><p>Then halfway through the term, I discovered they no longer offer a free version. I can&#8217;t make students pay for a required tool after the semester has started, and I certainly can&#8217;t afford to pay for 25 students myself.</p><p>So I thought: Why not build my own tool?</p><p>As most of you know, I&#8217;ve been experimenting with using instructional chatbots in the classroom. More recently, I&#8217;ve been using Claude to develop apps on top of my structured knowledge that can help students in more focused ways than a chatbot can.</p><p>This seemed like the perfect opportunity to go deeper. </p><p>I&#8217;m currently revising my <a href="https://www.isophist.com/s/prompt-ops">PromptOps course</a> into something more foundational called <em>Writing with Machines</em>, exploring how structured content and knowledge architecture make AI collaboration actually work. 
</p><p>Building this tracker would let me test those principles in practice.</p><div class="pullquote"><p>But what are &#8220;vibes&#8221; actually? From a rhetorical perspective, vibes represent an intuitive sense of what works&#8212;a feel for audience, context, and appropriate response that skilled practitioners develop through experience.</p></div><p>Last year I played around with vibe coding tools like Replit and Lovable, but found them fairly limited. They cap the number of interactions you can have daily, and deployment remained complicated for someone without deep technical background.</p><p>After spending three months developing the writing gym pedagogy through conversations with Claude, I realized that the accumulated knowledge turned out to be more valuable than any deployment feature a specialized platform could offer.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Get beta-access to my Writing with Machines course by becoming a paid subscriber. 
Coming soon!</p></div></div></div><p></p><h2>The Strategic Choice</h2><p><a href="https://x.com/karpathy/status/1886192184808149383">Andrej Karpathy</a>, OpenAI co-founder, coined the term &#8220;vibe coding&#8221; to describe developers who &#8220;fully give in to the vibes&#8221; and &#8220;forget that the code even exists&#8221;&#8212;focusing on iterative testing and prompt refinement rather than examining actual implementation. </p><p>But what are &#8220;vibes&#8221; actually? From a rhetorical perspective, vibes represent an intuitive sense of what works&#8212;a feel for audience, context, and appropriate response that skilled practitioners develop through experience. </p><p><strong>The ancient Greeks called this </strong><em><strong>metis</strong></em><strong> &#8230; or cunning intelligence, the ability to navigate complex situations through accumulated practical wisdom rather than formal rules.</strong></p><p>I&#8217;ve written before about metis in <a href="https://www.isophist.com/p/accessibility-first-thinking-in-the">the context of accessibility</a> and <a href="https://generativeai.pub/a66a326a703b">neurodivergent use</a> of AI. Many people have developed sophisticated intuitive strategies for working with AI systems that appear casual but reflect deep understanding of communication patterns. </p><div class="pullquote"><p>But it&#8217;s important to realize that any kind of vibes or cunning intelligence relies on practical wisdom, which emerges from expert knowledge. So you can support your vibes (or your metis) by grounding those vibes in structured thinking about your domain.</p></div><p>The &#8220;vibes&#8221; in vibe coding work the same way. 
They&#8217;re not mysterious or magical. They&#8217;re accumulated rhetorical knowledge about what prompts work, what contexts matter, how to frame problems effectively.</p><p>But it&#8217;s important to realize that any kind of vibes or cunning intelligence relies on practical wisdom, which emerges from expert knowledge. So you can support your vibes (or your metis) by grounding those vibes in structured thinking about your domain. </p><p>I didn&#8217;t stumble into Claude for this project. I chose it deliberately because that&#8217;s where my knowledge lives. My best vibes work on top of knowledge I&#8217;ve thoughtfully curated.</p><p>For three months before attempting to build anything, I&#8217;d been developing my &#8220;writing gym&#8221; pedagogy through conversations with Claude:</p><ul><li><p>Working out how students track progress,</p></li><li><p>Testing why pod-based collaboration creates accountability,</p></li><li><p>Refining what &#8220;complete/incomplete&#8221; assessment means for student motivation, and</p></li><li><p>Exploring how fitness metaphors reframe writing as daily practice rather than occasional performance.</p></li></ul><p>Each conversation added to a growing knowledge base that Claude could access and build upon.</p><p>When I finally said &#8220;help me design an app for this,&#8221; Claude was able to generate interfaces that reflected my pedagogical philosophy without much work.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!v_PX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb17832f9-5c59-41d4-b613-8414443b423c_1587x1587.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!v_PX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb17832f9-5c59-41d4-b613-8414443b423c_1587x1587.png 424w, https://substackcdn.com/image/fetch/$s_!v_PX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb17832f9-5c59-41d4-b613-8414443b423c_1587x1587.png 848w, https://substackcdn.com/image/fetch/$s_!v_PX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb17832f9-5c59-41d4-b613-8414443b423c_1587x1587.png 1272w, https://substackcdn.com/image/fetch/$s_!v_PX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb17832f9-5c59-41d4-b613-8414443b423c_1587x1587.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!v_PX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb17832f9-5c59-41d4-b613-8414443b423c_1587x1587.png" width="1587" height="1587" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b17832f9-5c59-41d4-b613-8414443b423c_1587x1587.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1587,&quot;width&quot;:1587,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:420913,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/178500768?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F031274de-e59a-421d-bdb0-44953a5dc67d_1916x2054.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!v_PX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb17832f9-5c59-41d4-b613-8414443b423c_1587x1587.png 424w, https://substackcdn.com/image/fetch/$s_!v_PX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb17832f9-5c59-41d4-b613-8414443b423c_1587x1587.png 848w, https://substackcdn.com/image/fetch/$s_!v_PX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb17832f9-5c59-41d4-b613-8414443b423c_1587x1587.png 1272w, https://substackcdn.com/image/fetch/$s_!v_PX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb17832f9-5c59-41d4-b613-8414443b423c_1587x1587.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" 
stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><strong>Main submission interface - &#8220;Build your writing muscles, one rep at a time!&#8221;</strong>...</figcaption></figure></div><p>In the main submission interface, the tagline alone&#8212;&#8220;Build your writing muscles, one rep at a time!&#8221;&#8212;captures the entire pedagogy without me ever explicitly saying &#8220;use fitness metaphors throughout.&#8221; </p><p>Also notice the reminder at the bottom: &#8220;Remember: Authentic practice builds real strength.&#8221; I never wrote that copy. Claude generated it based on our conversations about how consistency matters more than perfection and how authentic practice builds capability.</p><p>The interface doesn&#8217;t save the actual writing, only the word count. This wasn&#8217;t a technical limitation, but a pedagogical choice that emerged from our discussions about removing evaluation anxiety. 
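</p><p>A minimal sketch of that design choice (with hypothetical field names, not the actual app&#8217;s schema): the record keeps only the count and the date, and a simple date comparison can drive the accountability flag.</p>

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical record for one writing-gym "rep": the draft itself is never
# stored -- only the word count and the date survive.
@dataclass
class Rep:
    student: str
    day: date
    word_count: int   # a complete/incomplete signal, not a grade

def log_rep(text: str, student: str, day: date) -> Rep:
    # Count the words, then let the text go.
    return Rep(student=student, day=day, word_count=len(text.split()))

def needs_flag(last_rep_day: date, today: date, limit_days: int = 3) -> bool:
    # Mirrors the "3+ days without writing" accountability signal.
    return today - last_rep_day >= timedelta(days=limit_days)
```

<p>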
Students can practice freely knowing the text disappears, but their commitment to showing up gets tracked.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zYdK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50364f81-5d99-4aee-b4e8-a050b41ffa30_1920x1455.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zYdK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50364f81-5d99-4aee-b4e8-a050b41ffa30_1920x1455.png 424w, https://substackcdn.com/image/fetch/$s_!zYdK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50364f81-5d99-4aee-b4e8-a050b41ffa30_1920x1455.png 848w, https://substackcdn.com/image/fetch/$s_!zYdK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50364f81-5d99-4aee-b4e8-a050b41ffa30_1920x1455.png 1272w, https://substackcdn.com/image/fetch/$s_!zYdK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50364f81-5d99-4aee-b4e8-a050b41ffa30_1920x1455.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zYdK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50364f81-5d99-4aee-b4e8-a050b41ffa30_1920x1455.png" width="1920" height="1455" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/50364f81-5d99-4aee-b4e8-a050b41ffa30_1920x1455.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1455,&quot;width&quot;:1920,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:490423,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/178500768?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d188e42-7b64-497f-ac8f-f779361723fd_1920x2060.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zYdK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50364f81-5d99-4aee-b4e8-a050b41ffa30_1920x1455.png 424w, https://substackcdn.com/image/fetch/$s_!zYdK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50364f81-5d99-4aee-b4e8-a050b41ffa30_1920x1455.png 848w, https://substackcdn.com/image/fetch/$s_!zYdK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50364f81-5d99-4aee-b4e8-a050b41ffa30_1920x1455.png 1272w, https://substackcdn.com/image/fetch/$s_!zYdK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50364f81-5d99-4aee-b4e8-a050b41ffa30_1920x1455.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><strong>Leaderboard with Individual/Pods/Achievements tabs</strong>.</figcaption></figure></div><p>The leaderboard structure reflects our conversations about different types of motivation: </p><ul><li><p>individual tracking</p></li><li><p>pod-based collaboration, and </p></li><li><p>achievement systems.</p></li></ul><p>Notice the red flag indicator: &#8220;3+ days without writing.&#8221; Not punitive, not grade-based, just a visible accountability signal that emerged from discussions about how to maintain consistency without shame.</p><p>The empty state copy continues the fitness metaphor throughout the entire experience:  &#8220;No writers in the gym yet! Be the first to start building your writing muscles."</p><p>This wasn&#8217;t me writing every piece of microcopy. 
This was Claude understanding the tonal consistency required by the pedagogical framework.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-tcn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2bfa198-f278-439f-a55a-88bc81111563_1894x2050.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-tcn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2bfa198-f278-439f-a55a-88bc81111563_1894x2050.png 424w, https://substackcdn.com/image/fetch/$s_!-tcn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2bfa198-f278-439f-a55a-88bc81111563_1894x2050.png 848w, https://substackcdn.com/image/fetch/$s_!-tcn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2bfa198-f278-439f-a55a-88bc81111563_1894x2050.png 1272w, https://substackcdn.com/image/fetch/$s_!-tcn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2bfa198-f278-439f-a55a-88bc81111563_1894x2050.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-tcn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2bfa198-f278-439f-a55a-88bc81111563_1894x2050.png" width="1894" height="2050" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e2bfa198-f278-439f-a55a-88bc81111563_1894x2050.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2050,&quot;width&quot;:1894,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1183485,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/178500768?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ac7deef-4932-46f2-a51c-3dfd16fe080d_1894x2050.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-tcn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2bfa198-f278-439f-a55a-88bc81111563_1894x2050.png 424w, https://substackcdn.com/image/fetch/$s_!-tcn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2bfa198-f278-439f-a55a-88bc81111563_1894x2050.png 848w, https://substackcdn.com/image/fetch/$s_!-tcn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2bfa198-f278-439f-a55a-88bc81111563_1894x2050.png 1272w, https://substackcdn.com/image/fetch/$s_!-tcn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2bfa198-f278-439f-a55a-88bc81111563_1894x2050.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><strong>Achievement badges showing strength, consistency, and growth categories</strong>.</figcaption></figure></div><p></p><p>The badge system demonstrates how Claude structured the gamification badges around the core principles we developed. </p><p>Strength Badges (&#128170;) focus on cumulative output: Word Builder, Word Warrior, Word Astronaut, Heavy Lifter. </p><p>But notice they&#8217;re not competitive in a harmful way&#8212;they recognize volume while the &#8220;Heavy Lifter&#8221; badge celebrates daily record holders.</p><p>Consistency Badges (&#128293;) prioritize what matters most in the writing gym: showing up. Week Warrior, Diamond Hands, Iron Writer, Perfect Week. </p><p><strong>Now, honestly, I do think these metaphors could use some work. 
For example, I&#8217;m not so sure about warrior metaphors or diamond hands &#8230; But that&#8217;s easy to change.</strong></p><p>These names didn&#8217;t come from me listing badge options, but rather from Claude understanding that consistency matters more than perfection in building writing capability.</p><p>The categorization itself (Strength, Consistency, Growth) maps directly to our pedagogical conversations about what writers actually need to develop. I never handed Claude a badge taxonomy. The structure emerged from accumulated knowledge about how writing skill develops.</p><p>This is what working with pre-structured knowledge looks like. Not prompting from scratch, but building from a foundation of organized thinking.</p><h2>The Bridge to Implementation</h2><p>After developing the app concept in Claude, I faced reality: Claude can&#8217;t deploy backends. Any app that needs to work for multiple users on multiple devices requires a database, authentication, and other security measures.</p><p>I needed to move to a platform with deployment capabilities. But I didn&#8217;t have to start from scratch.</p><div class="pullquote"><p>I learned something new in this process. Paying for the platform where you build and structure knowledge can save significant money when you deploy to constrained platforms. <strong>The upfront investment in organization returns dividends when you transfer that knowledge efficiently.</strong></p></div><p>I had Claude generate a comprehensive prompt for Lovable that captured our entire design process, using my structured prompt principles.</p><p><strong>Below are a few excerpts. Paid subscribers can find the <a href="https://www.isophist.com/p/prompt-10-the-writing-gym-tracker">full prompt here</a>.</strong></p><pre><code><code>[CONTEXT] I&#8217;m building an educational web application to help students 
develop consistent writing habits in a classroom setting. The concept 
treats writing practice like fitness training&#8212;students track their 
&#8220;gains&#8221; and compete on a leaderboard to stay motivated. Teachers need 
a simple way to monitor participation and total output without reading 
every submission.

...

[GUIDELINES]
Design and Visual Theme: Create a modern fitness-inspired interface 
that feels motivating rather than academic. Use a purple gradient 
background flowing from #667eea to #764ba2. 

Messaging: Use fitness-themed encouragement throughout, such as &#8220;Build 
your writing muscles, one rep at a time!&#8221; Provide positive feedback 
after submissions. Keep the interface clean and distraction-free so 
students focus on writing rather than navigating complex menus.

...

[CONSTRAINTS]
* Store student information including names, pods, word counts, and 
  submission timestamps in the database
* The leaderboard must update in real-time when any student submits 
  new writing
* When they submit, save only the word count to the database (not the 
  actual text content)
</code></code></pre><p><strong>The decision to save word counts but not actual text</strong> isn&#8217;t because we can&#8217;t build that kind of app. It was a pedagogical choice that emerged from our conversations about removing evaluation anxiety. </p><p><strong>The fitness-themed encouragement</strong> is a result of the metaphorical framework for reframing writing practice embedded in the prompt. </p><p><strong>The real-time leaderboard updates</strong> reflect discussions about visibility and motivation in collaborative learning environments.</p><p>This prompt became the bridge between platforms. It packaged several months of structured thinking into a transferable format. Lovable could deploy what Claude had conceptualized because the knowledge was already organized, structured, and ready to transfer.</p><p>The prompt was a pedagogical philosophy translated into implementation requirements through expert knowledge and practical wisdom. Every design decision, every constraint, every interface element traced back to structured knowledge about how students learn to write.</p><p><strong>Here&#8217;s the economic reality that makes this approach practical:</strong> developing that comprehensive prompt in Claude (where I pay for a subscription) allowed me to create a working app within Lovable&#8217;s free interaction limits. </p><p>Instead of burning through dozens of back-and-forth iterations trying to explain my vision from scratch, I had a single, well-structured prompt that captured everything.</p><p>The investment in building structured knowledge in Claude paid for itself immediately. Without that foundation, I would have exceeded Lovable&#8217;s free tier trying to communicate context, clarify requirements, and refine the design through iterative prompting. </p><p>I learned something new in this process. Paying for the platform where you build and structure knowledge can save significant money when you deploy to constrained platforms. 
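</p><p>The &#8220;save only the word count&#8221; constraint above is simple to sketch in code. Here is a hypothetical TypeScript handler (all names and fields are invented for illustration; this is not the app&#8217;s actual code) that derives the count at submission time and discards the text before anything is persisted:</p>

```typescript
// Hypothetical sketch of the word-count-only constraint.
// Names, fields, and shapes are invented for illustration.

interface Submission {
  studentName: string;
  pod: string;
  wordCount: number;   // the only writing-derived value that gets stored
  submittedAt: string; // ISO timestamp for streak and leaderboard logic
}

// Count words the way most editors do: split on runs of whitespace.
function countWords(text: string): number {
  const trimmed = text.trim();
  return trimmed === "" ? 0 : trimmed.split(/\s+/).length;
}

// Build the record that would go to the database.
// The raw text is used once, for counting, and never leaves this function.
function buildSubmission(
  studentName: string,
  pod: string,
  text: string
): Submission {
  return {
    studentName,
    pod,
    wordCount: countWords(text),
    submittedAt: new Date().toISOString(),
  };
}
```

<p>Only the count and its metadata ever reach the database; the submitted text disappears the moment it is counted, which is what removes the evaluation anxiety.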
</p><p><strong>The upfront investment in organization returns dividends when you transfer that knowledge efficiently.</strong></p><h2>The Implementation Reality</h2><p>Let me be honest about what happened next. The backend challenges were real and humbling.</p><p>Lovable gave me deployment capabilities, but I still needed to understand database structure, authentication logic, and security policies. </p><p>My app has real limitations: </p><ul><li><p>security gaps I&#8217;m still closing, </p></li><li><p>password reset mechanisms, and</p></li><li><p>database authentication.</p></li></ul><p>For my use case, these limitations are acceptable. We are just sharing word counts and friendly competition &#8230; Nothing high-stakes. This probably wouldn&#8217;t work for a lot of educational use cases.</p><p>But it got me thinking about the things &#8220;vibe-coders&#8221; don&#8217;t tell you: why does this asymmetry between frontend and backend exist? </p><p><strong>Frontend development offers visual, immediate feedback. </strong>You see results instantly in browsers, making debugging intuitive. Visual errors are obvious. The declarative nature of HTML/CSS and component-based architectures map well to AI training patterns.</p><p>This is great for prototyping ideas fast. That shouldn&#8217;t be discounted, because it is an important part of software development. But if you want to deploy that prototype, it takes a backend.</p><p><strong>Backend errors can corrupt databases, expose sensitive data, or cause catastrophic production failures.</strong> Database management requires optimization techniques that AI struggles to implement contextually. Deployment, infrastructure, real-time synchronization, security, and compliance all require architectural thinking that pattern-matching approaches fundamentally struggle with.</p><p>So it takes considerably longer to turn that prototype into a functional app.</p><p>That said, my app serves its purpose within real constraints. 
It&#8217;s functional enough for students to track daily writing, participate in pods, and build consistency habits while I continue learning what &#8220;good enough&#8221; actually requires in the world of vibe coding.</p><h2>Start with Knowledge</h2><p>So here is my key takeaway for anyone looking to give vibe-coding a try.</p><p>Start where your knowledge already lives.</p><p>The months I spent structuring my pedagogical thinking in Claude created a foundation that made everything else possible. When I finally needed to build, the knowledge was ready to go.</p><div class="pullquote"><p>The &#8220;vibe&#8221; in vibe coding isn&#8217;t casual or magical. It&#8217;s accumulated expertise that&#8217;s been deliberately structured through systematic thinking and conversation. It&#8217;s the knowledge architecture you build before you ever start prompting for code.</p></div><p>This validates something I&#8217;ve been exploring in my research on AI content systems: pre-structuring knowledge beats post-processing it.</p><p>Whether you&#8217;re preparing content for RAG retrieval systems (<a href="https://www.isophist.com/p/what-is-rag-no-really-what-is-it">which I&#8217;ve been writing about lately</a>), designing for accessibility (<a href="https://www.isophist.com/p/accessibility-first-thinking-in-the">which I explored at the CAKE Conference</a>), or building applications through AI collaboration, the pattern holds. </p><p>The strategic organization of information upfront determines how well AI systems can work with it later.</p><p>You can&#8217;t fix bad structure with better algorithms. </p><p>You can&#8217;t compensate for unorganized knowledge with clever prompts. 
</p><p>The fitness metaphors, pod structures, assessment philosophy, and motivation principles in my app weren&#8217;t generated through prompt engineering&#8212;they emerged from months of structured conversation that Claude could access, understand, and transfer.</p><p>The &#8220;vibe&#8221; in vibe coding isn&#8217;t casual or magical. It&#8217;s accumulated expertise that&#8217;s been deliberately structured through systematic thinking and conversation. </p><p>It&#8217;s the knowledge architecture you build before you ever start prompting for code. </p><p><strong>This is metis in action&#8212;practical wisdom that appears intuitive but is actually the result of careful organization and accumulated understanding.</strong></p><p>My app worked not because I had perfect prompts, but because I had structured knowledge that let me evaluate what Claude generated, understand why certain design decisions mattered, and transfer that understanding across platforms.</p><p>I&#8217;ve been telling students for years that good writing requires genuine thinking, not just assembling words. </p><p>Turns out AI development works exactly the same way. The quality of what you build depends entirely on the knowledge you bring to the conversation &#8230; and more importantly, on how you&#8217;ve structured that knowledge for retrieval and recombination.</p><h2>Where Are You Building Your Knowledge?</h2><p>The future of AI collaboration isn&#8217;t in finding the perfect platform or crafting the perfect prompt. It&#8217;s in doing your thinking in places where that knowledge can accumulate, be structured, and transfer when needed.</p><p>Where are you doing your knowledge work? </p><p>Is it accumulating somewhere that AI can access and understand? </p><p>Are you structuring it through conversation, documentation, or systematic organization? 
</p><p>That choice might matter more than any tool decision you make.</p><p>Because when you finally need to build something, you&#8217;ll discover that the knowledge architecture you&#8217;ve been creating all along determines what&#8217;s possible.</p><p>The vibe in vibe coding? It&#8217;s not casual intuition or lucky guessing. </p><p>It&#8217;s metis&#8212;the cunning intelligence that comes from curating your domain expertise so thoroughly that working with AI <em>feels</em> intuitive. </p><p>It&#8217;s rhetorical knowledge that got there first.</p><div><hr></div><p><strong>What&#8217;s your experience been with building knowledge bases for AI collaboration? Where does your expertise live, and can AI systems access it effectively? </strong></p><p><strong>Reply and let me know what you&#8217;re discovering.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/p/vibe-coding-isnt-about-vibes/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.isophist.com/p/vibe-coding-isnt-about-vibes/comments"><span>Leave a comment</span></a></p><p></p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Prompt #10: The Writing Gym Tracker Prompt]]></title><description><![CDATA[Structured prompting for vibe coding]]></description><link>https://www.isophist.com/p/prompt-10-the-writing-gym-tracker</link><guid isPermaLink="false">https://www.isophist.com/p/prompt-10-the-writing-gym-tracker</guid><dc:creator><![CDATA[Lance Cummings]]></dc:creator><pubDate>Tue, 04 Nov 2025 14:06:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jGte!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9603f958-496b-4de3-95a0-06afdd7cceb4_1200x675.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div 
class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!jGte!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9603f958-496b-4de3-95a0-06afdd7cceb4_1200x675.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!jGte!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9603f958-496b-4de3-95a0-06afdd7cceb4_1200x675.png 424w, https://substackcdn.com/image/fetch/$s_!jGte!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9603f958-496b-4de3-95a0-06afdd7cceb4_1200x675.png 848w, https://substackcdn.com/image/fetch/$s_!jGte!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9603f958-496b-4de3-95a0-06afdd7cceb4_1200x675.png 1272w, https://substackcdn.com/image/fetch/$s_!jGte!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9603f958-496b-4de3-95a0-06afdd7cceb4_1200x675.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!jGte!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9603f958-496b-4de3-95a0-06afdd7cceb4_1200x675.png" width="1200" height="675" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9603f958-496b-4de3-95a0-06afdd7cceb4_1200x675.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:675,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1529372,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/177920676?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9603f958-496b-4de3-95a0-06afdd7cceb4_1200x675.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!jGte!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9603f958-496b-4de3-95a0-06afdd7cceb4_1200x675.png 424w, https://substackcdn.com/image/fetch/$s_!jGte!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9603f958-496b-4de3-95a0-06afdd7cceb4_1200x675.png 848w, https://substackcdn.com/image/fetch/$s_!jGte!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9603f958-496b-4de3-95a0-06afdd7cceb4_1200x675.png 1272w, https://substackcdn.com/image/fetch/$s_!jGte!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9603f958-496b-4de3-95a0-06afdd7cceb4_1200x675.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em><strong>A note for paid subscribers:</strong> You&#8217;re getting early access to this prompt before the main post goes live. This is part of what makes your support so valuable&#8212;you see behind the scenes of how I&#8217;m developing content and experimenting with AI systems.</em></p><p><em>On that note, I&#8217;ve been revising my PromptOps course into something more foundational called Writing with Mac&#8230;</em></p>
      <p>
          <a href="https://www.isophist.com/p/prompt-10-the-writing-gym-tracker">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Beyond Prompts (Director's Cut)]]></title><description><![CDATA[From Structured Prompts to AI-Ready Content]]></description><link>https://www.isophist.com/p/beyond-prompts-directors-cut</link><guid isPermaLink="false">https://www.isophist.com/p/beyond-prompts-directors-cut</guid><dc:creator><![CDATA[Lance Cummings]]></dc:creator><pubDate>Fri, 24 Oct 2025 17:05:33 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/177028546/f02a173a8c3d3d9ae6b1550ab5499482.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In the last month or so, I&#8217;ve been busy giving presentations on AI-ready content and thought I&#8217;d share the &#8220;director&#8217;s cut,&#8221; which includes things I cut out for time in my last presentation at <a href="https://sigdoc.acm.org/">SIGDOC</a>. </p><p>In these presentations, I move beyond the basics of prompting to discuss what comes next in our evolving relationship with AI systems. </p><p><strong>This presentation captures where my work has been heading over the past year&#8212;from the structured prompting techniques we&#8217;ve explored in <a href="https://www.isophist.com/s/prompt-ops">PromptOps</a> to something much more fundamental: how we architect knowledge itself for human-machine collaboration.</strong></p><div class="pullquote"><p>The prompt hasn&#8217;t disappeared&#8212;it&#8217;s expanded to encompass entire knowledge systems.</p></div><p>While casual users may not need sophisticated prompt techniques, the principles that made structured prompts effective haven&#8217;t disappeared. They&#8217;ve evolved. </p><p>As AI systems gain the ability to work with uploaded content and knowledge bases, we&#8217;re discovering that these same structured principles apply at a larger scale. 
</p><p><strong>The prompt hasn&#8217;t disappeared&#8212;it&#8217;s expanded to encompass entire knowledge systems.</strong></p><p>In the presentation, I walk through this progression: from breaking prompts into reusable blocks, to organizing those blocks into libraries, to ultimately structuring the content that AI systems work with. </p><p>This mirrors the journey many of us are taking&#8212;moving from prompt engineering to what I&#8217;m calling context engineering.</p><p>I also demonstrate three different approaches to structuring the same information and the results of some simple testing I did to compare outputs. </p><p>When we organize content using microcontent, AI systems retrieve and apply that information far more effectively than when we simply dump unstructured notes into a knowledge base.</p><p><em>This research into AI-ready content is shaping the next iteration of my work. I&#8217;m revising my Writing with Machines course around this exact progression, and paid subscribers will have free access to audit the updated material as it develops. </em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.isophist.com/subscribe?"><span>Subscribe now</span></a></p><p>I&#8217;m also developing methods for testing whether content is truly AI-ready&#8212;practical approaches you can use to evaluate your own knowledge systems. More on that coming soon.</p><p>For now, I hope this presentation gives you a broader view of where machine rhetorics is heading. </p><p>The future of rhetoric isn&#8217;t about choosing between humans and machines. 
It&#8217;s about designing knowledge systems that shape our ways of knowing the world.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/p/beyond-prompts-directors-cut?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.isophist.com/p/beyond-prompts-directors-cut?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[What is RAG ... No Really, What is It?]]></title><description><![CDATA[Deep Reading, Episode 5]]></description><link>https://www.isophist.com/p/what-is-rag-no-really-what-is-it</link><guid isPermaLink="false">https://www.isophist.com/p/what-is-rag-no-really-what-is-it</guid><dc:creator><![CDATA[Lance Cummings]]></dc:creator><pubDate>Fri, 10 Oct 2025 13:33:39 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/175798045/29150cca8c52144ae1a7e1b6f0ecfd9b.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Welcome to Deep Reading. I&#8217;m Lance Cummings from Cyborgs Writing and today I&#8217;m digging into some research on RAG. Many of us (including myself) know vaguely what RAG is &#8230; But details matter.</p><p>I&#8217;ve been doing a lot of research on RAG lately, or retrieval-augmented generation &#8230; I mean really digging into the research papers. If you&#8217;re creating content for AI systems, you need to understand how these systems actually work. </p><p>Not just the marketing version. The real mechanics.</p><p>And here&#8217;s what I&#8217;ve discovered: most explanations of RAG are technically correct but can be misleading. </p><p>They&#8217;ll tell you &#8220;RAG retrieves relevant documents from a database and uses them to generate answers.&#8221; </p><p>True. 
But that definition hides crucial details that completely change how you should think about creating AI-ready content.</p><p>Today, I want to walk you through what RAG really does, step by step, drawing on the foundational research and recent surveys, because understanding these details will  shift how you approach content structure.</p><p><em>By the way, I&#8217;m revising and re-platforming my course on Writing with Machines. As I work through this redesign, the course will be available for paid subscribers to preview and provide feedback. </em></p><p><em>If you&#8217;re interested in being part of that process, consider subscribing. A separate message will go out to paid subscribers soon.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.isophist.com/subscribe?"><span>Subscribe now</span></a></p><h2>The Chunking Revelation</h2><p>Let me start with something that surprised me. <strong>Your RAG system never actually sees your documents.</strong></p><p>Here&#8217;s what happens. </p><p><strong>When you upload a user manual or a knowledge base article to a RAG system, before anyone even asks a question, the system immediately breaks that document into chunks. </strong></p><p><em>For more on chunking, check out my last deep reading.</em></p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;a5acc412-b931-4504-b30a-fa12b798eaec&quot;,&quot;caption&quot;:&quot;Welcome to Deep Reading. 
I'm Lance Cummings, and this is where we take short, focused dives into AI research that actually matters for content professionals.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;When AI Research Validates What Content Pros Have Always Known&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:129389476,&quot;name&quot;:&quot;Lance Cummings&quot;,&quot;bio&quot;:&quot;AI Content Specialist &amp; Professor | Exploring how to leverage structured content with rhetorical strategies to improve the performance of generative AI technologies&nbsp;both in the workplace and the classroom.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd589e8cc-4070-4e52-a3e0-82f218982383_3751x5626.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-09-12T12:24:46.102Z&quot;,&quot;cover_image&quot;:&quot;https://substack-video.s3.amazonaws.com/video_upload/post/173298537/02eed4bf-f178-4955-a5c7-298ba3aa59f6/transcoded-1757532087.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.isophist.com/p/when-ai-research-validates-what-content&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:173298537,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:3,&quot;comment_count&quot;:2,&quot;publication_id&quot;:1639524,&quot;publication_name&quot;:&quot;Cyborgs 
Writing&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!cnci!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd41b2ae-512f-4bbc-8ca0-1dc31a7a8641_500x500.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>The 2024 survey by Gao, et al. on RAG for large language models notes that these chunks typically contain 100 to 500 tokens&#8212;roughly 75 to 375 words each. Each chunk gets converted into a numerical representation called an <em>embedding</em> and stored in a database as a completely separate, independent item.</p><p>From that moment forward, the RAG system only retrieves and works with those individual chunks. </p><p>It has zero awareness that chunk 23 and chunk 24 came from the same manual. It doesn&#8217;t know they were originally adjacent. It doesn&#8217;t know they both came from the installation section.</p><p>Think about what this means. Your carefully structured 50-page troubleshooting guide becomes 100 disconnected fragments. The system treats each one like a separate document.</p><p>This is why I say the basic definition is misleading. When people say &#8220;RAG retrieves relevant documents,&#8221; what they really mean is &#8220;RAG retrieves relevant chunks.&#8221; And chunks are not documents. They&#8217;re fragments that the system created, often by just counting words and cutting wherever it hits the limit.</p><p>If you write a procedure and it gets split across three chunks, the system might retrieve chunk one without chunk two, giving an incomplete answer. </p><p>The beautiful document structure you created? Gone the moment it gets chunked.</p><h2>The Multi-Stage Pipeline</h2><p>Now let&#8217;s look at how RAG actually retrieves those chunks, because this is where it gets really interesting.</p><p>The original RAG paper by Lewis, et al. 
back in 2020 introduced the foundational architecture with the goal of combining what they call <em>parametric memory</em>, which is the model&#8217;s internal knowledge, with <em>non-parametric memory</em>, which is your external database. </p><p>But what&#8217;s evolved since then is how sophisticated the retrieval process has become.</p><p>Most people think of RAG as: search, find, answer. But Gao&#8217;s 2024 survey identifies what they call &#8220;Advanced RAG&#8221; and &#8220;Modular RAG&#8221; approaches that use multi-stage pipelines.</p><p><strong>Stage one is initial retrieval</strong>. When you ask a question, the system searches through all those chunks and pulls back a broad set of candidates&#8212;typically 20 to 100 chunks. </p><p>This stage is fast but imprecise. It&#8217;s like casting a wide net. The system uses hybrid search, combining keyword matching (with algorithms like BM25) and semantic similarity (using dense vector embeddings). </p><p>In other words, it&#8217;s looking for your exact words AND for content that means the same thing using different words.</p><p><strong>Stage two is re-ranking</strong>. Now the system gets more careful. Research by Nogueira, et al. on multi-stage document ranking showed that cross-encoders, which encode the query and each chunk jointly, can significantly outperform the initial retrieval. </p><p>This stage takes those 20 to 100 candidates and examines each one more closely. It looks at your question and each chunk together, as a pair, and scores how well they actually match. </p><p>This is computationally expensive but much more accurate. This stage narrows the results down to maybe 2 to 6 chunks&#8212;just the best ones.</p><p><strong>Stage three is generation</strong>. 
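</p><p><em>To make stages one and two concrete, here&#8217;s a minimal Python sketch. The chunks, query, and scoring functions are toy stand-ins I invented for illustration; a real pipeline would use a BM25 library, an embedding model, and a neural cross-encoder.</em></p>

```python
# Toy sketch of a two-stage retrieval pipeline (illustrative only).
# Real systems use BM25 + dense embeddings for stage one and a
# neural cross-encoder for stage two; simple word overlap stands in here.

chunks = [
    "Install the database engine on the Linux server before configuring users.",
    "Overview of the product and its main features.",
    "Troubleshooting: the installer fails when disk space is low.",
    "Configure user permissions after the database installation completes.",
]

def keyword_score(query, chunk):
    """Stage-one signal: fraction of query words that appear in the chunk."""
    q, c = set(query.lower().split()), set(chunk.lower().split())
    return len(q & c) / len(q)

def rerank_score(query, chunk):
    """Stage-two stand-in: a slightly richer score applied to fewer chunks.
    A real cross-encoder reads the query and chunk together as one input."""
    bonus = 0.5 if "install" in query.lower() and "install" in chunk.lower() else 0.0
    return keyword_score(query, chunk) + bonus

query = "how do I install the database"

# Stage one: cast a wide net -- cheap scoring over every chunk.
candidates = sorted(chunks, key=lambda c: keyword_score(query, c), reverse=True)[:3]

# Stage two: re-rank only the candidates -- costlier scoring on a few chunks.
top_chunks = sorted(candidates, key=lambda c: rerank_score(query, c), reverse=True)[:2]

for chunk in top_chunks:
    print(chunk)
```

<p>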
Only those final ranked chunks get passed to the language model to actually generate your answer.</p><h2>Why This Matters</h2><p>Now why does this three-stage process matter for content creators?</p><p>Because different content optimizations help at different stages. </p><ol><li><p>Clear, descriptive titles help stage one&#8217;s keyword matching. If your title is vague or generic, the initial retrieval might miss relevant content entirely. </p></li><li><p>Consistent terminology helps stage two&#8217;s semantic scoring&#8212;if you use three different terms for the same concept, you&#8217;re making it harder for the re-ranker to recognize relevance. </p></li><li><p>And focused, single-topic chunks help stage three&#8217;s generation, because the language model can work with content that&#8217;s directly relevant rather than having to extract the useful parts from a mixed-topic chunk.</p></li></ol><p>This is why understanding the pipeline matters. You&#8217;re not optimizing for a single search&#8212;you&#8217;re optimizing for a three-stage process, and each stage has different needs.</p><h2>Practical Implications</h2><p>So what does this mean for how you create content?</p><p><strong>First: Think in chunks from the beginning.</strong> Don&#8217;t create long documents and hope the chunking works out. </p><p>Design each section to be a complete, standalone unit&#8212;one topic, clearly titled, with all necessary context. Because that&#8217;s what the RAG system will work with. If your content naturally maps to good chunks, you&#8217;re starting with an advantage.</p><p><strong>Second: Optimize for each stage of the pipeline.</strong> That means descriptive titles for retrieval, consistent terminology throughout for re-ranking, and focused topics for generation. </p><p>A chunk titled &#8220;Overview&#8221; tells the retrieval system nothing. 
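</p><p><em>A crude way to see the difference: count how many of a user&#8217;s query words appear in each title. This toy overlap score is my own illustration, not a real retrieval metric, but the intuition carries over.</em></p>

```python
# Toy word-overlap score: what fraction of a user's query words
# appear in a chunk title? (Illustrative only; real systems use
# BM25 and embedding similarity, but the intuition is similar.)
def overlap(query, title):
    q = set(query.lower().split())
    t = set(title.lower().split())
    return len(q & t) / len(q)

query = "installing the database on linux"

print(overlap(query, "Overview"))                                  # no query words match
print(overlap(query, "Installing the Database on Linux Servers"))  # every query word matches
```

<p>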
A chunk titled &#8220;Installing the Database on Linux Servers&#8221; tells it exactly what&#8217;s inside.</p><p><strong>Third: Test how your content actually chunks.</strong> If you&#8217;re serious about AI-ready content, you need to see what happens when it gets chunked. </p><p>Does a procedure get split? Does the context get separated from the steps? Understanding this lets you adjust your structure before problems appear.</p><p>RAG isn&#8217;t magic. </p><p>It&#8217;s a mechanical process with specific steps, and understanding those steps completely changes how you think about content structure. </p><p>You&#8217;re not writing for human readers who can flip back and forth through a document. You&#8217;re writing for a system that will fragment your content, search through those fragments, and reassemble pieces into answers.</p><p>That requires a different approach to organization, and that&#8217;s what I&#8217;m exploring with structured content and microcontent principles.</p><h2>What&#8217;s Next</h2><p>So the next time someone tells you they&#8217;re optimizing content for RAG, ask them how, and for what kind of RAG. Because that detail matters.</p><p>I&#8217;m currently developing testing protocols to verify some of these AI-ready content methods&#8212;actually putting structured content approaches through systematic evaluation with RAG systems. </p><p>If you&#8217;re working on similar research or want to compare notes, reach out. I&#8217;m exploring this stuff out loud because I think we&#8217;re all trying to figure out what good content looks like in this new context.</p><p>Until next time&#8212;keep reading deeply.</p><div><hr></div><h2>Sources</h2><p>Gao, Y., Xiong, Y., Gao, X., Jia, K., Pan, J., Bi, Y., Dai, Y., Sun, J., Wang, M., &amp; Wang, H. (2024). Retrieval-augmented generation for large language models: A survey. <em>Transactions of the Association for Computational Linguistics</em>, 12, 1-25. 
<a href="https://arxiv.org/abs/2312.10997">https://arxiv.org/abs/2312.10997</a></p><p>Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., K&#252;ttler, H., Lewis, M., Yih, W., Rockt&#228;schel, T., Riedel, S., &amp; Kiela, D. (2020). Retrieval-augmented generation for knowledge-intensive NLP tasks. <em>Advances in Neural Information Processing Systems</em>, 33, 9459-9474. <a href="https://arxiv.org/abs/2005.11401">https://arxiv.org/abs/2005.11401</a></p><p>Nogueira, R., Yang, W., Cho, K., &amp; Lin, J. (2019). Multi-stage document ranking with BERT. arXiv preprint. <a href="https://arxiv.org/abs/1910.14424">https://arxiv.org/abs/1910.14424</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/p/what-is-rag-no-really-what-is-it/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.isophist.com/p/what-is-rag-no-really-what-is-it/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item><item><title><![CDATA[Accessibility-First Thinking in the World of AI]]></title><description><![CDATA[Reflections from CAKE 2025]]></description><link>https://www.isophist.com/p/accessibility-first-thinking-in-the</link><guid isPermaLink="false">https://www.isophist.com/p/accessibility-first-thinking-in-the</guid><dc:creator><![CDATA[Lance Cummings]]></dc:creator><pubDate>Tue, 30 Sep 2025 12:06:10 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!rSpu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21055dde-8653-4fdf-b14e-26b0a451cd8b_2304x1792.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!rSpu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21055dde-8653-4fdf-b14e-26b0a451cd8b_2304x1792.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rSpu!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21055dde-8653-4fdf-b14e-26b0a451cd8b_2304x1792.jpeg 424w, https://substackcdn.com/image/fetch/$s_!rSpu!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21055dde-8653-4fdf-b14e-26b0a451cd8b_2304x1792.jpeg 848w, https://substackcdn.com/image/fetch/$s_!rSpu!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21055dde-8653-4fdf-b14e-26b0a451cd8b_2304x1792.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!rSpu!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21055dde-8653-4fdf-b14e-26b0a451cd8b_2304x1792.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rSpu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21055dde-8653-4fdf-b14e-26b0a451cd8b_2304x1792.jpeg" width="1456" height="1132" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/21055dde-8653-4fdf-b14e-26b0a451cd8b_2304x1792.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1132,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:838481,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.isophist.com/i/174865475?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21055dde-8653-4fdf-b14e-26b0a451cd8b_2304x1792.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rSpu!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21055dde-8653-4fdf-b14e-26b0a451cd8b_2304x1792.jpeg 424w, https://substackcdn.com/image/fetch/$s_!rSpu!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21055dde-8653-4fdf-b14e-26b0a451cd8b_2304x1792.jpeg 848w, https://substackcdn.com/image/fetch/$s_!rSpu!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21055dde-8653-4fdf-b14e-26b0a451cd8b_2304x1792.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!rSpu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21055dde-8653-4fdf-b14e-26b0a451cd8b_2304x1792.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Generated by Adobe Firefly.</figcaption></figure></div><p>Last week at the CAKE Conference in Krak&#243;w, I found myself surrounded by brilliant minds tackling the intersection of content and AI. </p><p>For those of you who don&#8217;t know, <a href="http://cakeconf.contentbytes.pl">CAKE </a>is a new conference in Krak&#243;w, Poland organized by a vibrant content and writing community that I&#8217;ve been working with for years (and a cornerstone to my design thinking study abroad). </p><p>It is truly one of the most exciting and welcoming writing communities out there! Come see us next year. 
</p><p>(Krak&#243;w is also one of the best cities in Europe!)</p><p>While many sessions at this conference reinforced skills I&#8217;d seen at technical communication conferences, one theme kept surfacing that I wasn&#8217;t exactly expecting: <strong>accessibility</strong>.</p><p>What struck me wasn&#8217;t just that accessibility appeared in so many sessions, but how it revealed something fundamental about our relationship with AI that often gets missed. </p><div class="pullquote"><p>Technical writers aren&#8217;t just writers; we are custodians of information experiences. When you approach content design with accessibility as a guiding principle, you create better information experiences across the board.</p></div><p>We&#8217;re approaching AI from the wrong angle entirely if our only focus is on efficiency, productivity, or even ethics broadly speaking.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">My work at this conference was partially funded by the paid subscribers of this Substack. Thank you!</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><h2>The Accessibility Revelation</h2><p>Rather than treating accessibility as a specialized concern or compliance requirement, these presentations helped me think of it as a fundamental lens for understanding how content works&#8212;for everyone. 
</p><p>Three speakers approached accessibility from different angles: Anna Dulny-Leszczy&#324;ska focused on structural principles for assistive technologies, Sara Gr&#261;dziel explored automation and AI integration, and Wojtek Kuty&#322;a challenged us to see accessibility as inherently political and human. </p><p>For me, they demonstrated why accessibility thinking should be central to any content strategy, especially one involving AI.</p><h3>Structure and Semantics</h3><p>Anna, a product designer at Synergy Codes, opened her presentation with a powerful demonstration using the viral <a href="https://en.wikipedia.org/wiki/The_dress">dress photo</a> from a few years ago, where different people saw different colors. She asked the audience whether they saw it as white and gold or blue and black. This illustrates how we all process visual information differently. But this kind of diversity extends far beyond perception to language, ways of thinking, and physical abilities.</p><p>She outlined three core principles for accessible content design. </p><p><strong>Structure:</strong> Make content logical for assistive technologies. She noted that 71% of screen reader users navigate by headings, jumping from heading to heading to decide what content to consume. This requires descriptive, unique headings, proper use of lists, and avoiding text embedded in images. </p><p><strong>Understandability:</strong> Ensure everyone shares the same comprehension by using terms familiar to your specific audience and following plain language guidelines. </p><p><strong>Operability:</strong> Make actions predictable and clear. 
She showed how Figma&#8217;s &#8220;lock/unlock&#8221; button creates confusion because users can&#8217;t predict what will happen when they click it.</p><p>What&#8217;s striking is how these same principles also help machines work with content at scale &#8230; or at least help us think about ways our content can work with diverse machines, not just humans.</p><h3>Automation Layers</h3><p>Sara Gr&#261;dziel&#8217;s session posed an equally compelling question: Can accessibility become as simple as spell check? She framed automation as a paradigm shift for accessibility, where we now need to understand how machines play a role in developing accessible content.</p><p>Sara outlined three layers of accessibility work. </p><p><strong>Automation layer:</strong> This involves code checks like alt text presence, color contrast ratios, form label verification, HTML validity, and heading structure. Prime targets for AI agents.</p><p><strong>AI-Assisted layer:</strong> This requires human collaboration to evaluate alt text accuracy and relevance, discover patterns of repeated issues, assess link text clarity, check heading semantics, and evaluate form error handling. This is where you need human-machine collaboration.</p><p><strong>Human-only layer:</strong> This layer is about checking the tone of auto-generated content, testing keyboard and screen reader navigation, assessing content clarity and cognitive load, and experiencing real assistive technology use. 
Best leave AI out of this, at least with current technologies.</p><p>She emphasized that automation isn&#8217;t just about checking stuff, but needs to include practical implementation guidance, contextual review of issues, AI-generated content suggestions, and continuous monitoring rather than one-time reports.</p><p>Also known as &#8220;human-in-the-loop.&#8221; So, yeah, accessibility-first thinking will be key to keeping your job in an AI economy.</p><h3>The Human Dimension</h3><p>Wojtek Kuty&#322;a&#8217;s presentation on accessible web content took a strikingly different approach&#8212;personal, political, and unapologetically human. </p><p>&#8220;Language is ours&#8212;and texts are political.&#8221;</p><p>In other words, content choices have real consequences for people&#8217;s lives, and writers have the power to harm, exclude, or help through their language choices. I would argue the same applies when developing AI content systems.</p><p>Wojtek emphasized that content must be easy to understand for actual users, not directors or executives. He compared overly formal insurance language (&#8220;Acts of God! Insurrection!&#8221;) with clearer alternatives (&#8220;natural disasters, war, or civil unrest&#8221;). He asked content professionals to make content engaging and human, use headings and sections to break up text, test with real people, and remember that just because you think it&#8217;s good doesn&#8217;t mean it is.</p><p>He advocated for an intersectional perspective, acknowledging that we all exist in a matrix of privilege and vulnerability, and our writing choices reflect that. For me, this is also true of AI.</p><h3>Why This Matters for AI</h3><p>What struck me most about these three presentations was a realization they didn&#8217;t all explicitly state but collectively demonstrated: the same structural principles that help screen reader users also create exactly the kind of systematic, well-organized information that AI systems process most effectively. 
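</p><p><em>Here&#8217;s a small sketch of that overlap in practice. Using only the Python standard library (the tag list and sample HTML are my own invention), it splits a document into retrieval-ready chunks using nothing but its heading structure, the same structure screen reader users navigate by.</em></p>

```python
# Sketch: the heading structure that lets screen reader users jump
# through a document also yields clean chunk boundaries for retrieval.
# Standard library only; a real pipeline would add embeddings on top.
from html.parser import HTMLParser

class HeadingChunker(HTMLParser):
    """Split an HTML document into (heading, body) chunks at h1-h3 tags."""
    def __init__(self):
        super().__init__()
        self.chunks = []        # list of (heading, body-text) pairs
        self.in_heading = False
        self.heading = None
        self.body = []

    def flush(self):
        """Close out the current chunk, if one has started."""
        if self.heading is not None:
            self.chunks.append((self.heading, " ".join(self.body).strip()))
        self.body = []

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self.flush()
            self.in_heading = True
            self.heading = ""

    def handle_endtag(self, tag):
        if tag in ("h1", "h2", "h3"):
            self.in_heading = False

    def handle_data(self, data):
        if self.in_heading:
            self.heading += data
        elif self.heading is not None:
            self.body.append(data.strip())

html_doc = """
<h2>Installing on Linux</h2><p>Run the installer as root.</p>
<h2>Configuring Users</h2><p>Create an admin account first.</p>
"""

parser = HeadingChunker()
parser.feed(html_doc)
parser.flush()  # capture the final chunk
for heading, body in parser.chunks:
    print(heading, "->", body)
```

<p>A document with vague or missing headings gives this kind of splitter nothing to work with, for much the same reason it gives a screen reader user nothing to navigate by.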
</p><p>When you design content with proper heading structure, logical hierarchy, and descriptive labels for human assistive technologies, you&#8217;re simultaneously optimizing for AI retrieval systems. </p><p>The semantic richness that makes content accessible to humans makes it processable by machines. And Wojtek&#8217;s emphasis on human, political, context-aware writing reminds us that accessibility thinking goes beyond technical compliance to fundamental questions about how we communicate with and respect people.</p><p><strong>Technical writers aren&#8217;t just writers; we are custodians of information experiences. When you approach content design with accessibility as a guiding principle, you create better information experiences across the board.</strong></p><p>This insight connects to something I&#8217;ve been exploring in my work on rhetorical theory&#8212;the ancient Greek concept of <a href="https://en.wikipedia.org/wiki/Metis_(mythology)">metis</a>, or cunning intelligence. Metis represents the ability to adapt, improvise, and find clever solutions to complex problems. 
It&#8217;s not just theoretical knowledge or technical skill, but practical wisdom applied creatively to overcome constraints.</p><p>Accessibility-first thinking is metis in action.</p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:150220532,&quot;url&quot;:&quot;https://www.isophist.com/p/how-to-channel-the-goddess-of-wisdom&quot;,&quot;publication_id&quot;:1639524,&quot;publication_name&quot;:&quot;Cyborgs Writing&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!cnci!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd41b2ae-512f-4bbc-8ca0-1dc31a7a8641_500x500.png&quot;,&quot;title&quot;:&quot;How to Channel the Goddess of Wisdom in Your AI Collaborations&quot;,&quot;truncated_body_text&quot;:&quot;Well, I&#8217;m back from a brief pause &#8230; after getting attacked by colds from two directions. My twin 5 year olds, and the college students in my classroom.&quot;,&quot;date&quot;:&quot;2024-10-15T11:13:32.966Z&quot;,&quot;like_count&quot;:7,&quot;comment_count&quot;:6,&quot;bylines&quot;:[{&quot;id&quot;:129389476,&quot;name&quot;:&quot;Lance Cummings&quot;,&quot;handle&quot;:&quot;lancecummings&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd589e8cc-4070-4e52-a3e0-82f218982383_3751x5626.jpeg&quot;,&quot;bio&quot;:&quot;AI Content Specialist &amp; Professor | Exploring how to leverage structured content with rhetorical strategies to improve the performance of generative AI technologies&nbsp;both in the workplace and the 
classroom.&quot;,&quot;profile_set_up_at&quot;:&quot;2023-05-05T12:17:54.225Z&quot;,&quot;reader_installed_at&quot;:&quot;2023-05-05T13:11:38.019Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:1613029,&quot;user_id&quot;:129389476,&quot;publication_id&quot;:1639524,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:1639524,&quot;name&quot;:&quot;Cyborgs Writing&quot;,&quot;subdomain&quot;:&quot;lancecummings&quot;,&quot;custom_domain&quot;:&quot;www.isophist.com&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Exploring the creative interface between human and machine in writing, the classroom, and the workplace&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fd41b2ae-512f-4bbc-8ca0-1dc31a7a8641_500x500.png&quot;,&quot;author_id&quot;:129389476,&quot;primary_user_id&quot;:129389476,&quot;theme_var_background_pop&quot;:&quot;#67BDFC&quot;,&quot;created_at&quot;:&quot;2023-05-05T12:23:47.529Z&quot;,&quot;email_from_name&quot;:&quot;Lance Cummings from iSophist&quot;,&quot;copyright&quot;:&quot;Lance Cummings&quot;,&quot;founding_plan_name&quot;:&quot;Founding Member&quot;,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;enabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;newspaper&quot;,&quot;is_personal_mode&quot;:false}},{&quot;id&quot;:1632758,&quot;user_id&quot;:129389476,&quot;publication_id&quot;:1658183,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:false,&quot;publication&quot;:{&quot;id&quot;:1658183,&quot;name&quot;:&quot;Poetica Machina&quot;,&quot;subdomain&quot;:&quot;poeticamachina&quot;,&quot;custom_domain&quot;:null,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;My Substack for AI-Inspired poetry and 
fiction.&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a71cfa47-0dc8-4276-bf27-8b1b8c17188c_500x500.png&quot;,&quot;author_id&quot;:129389476,&quot;primary_user_id&quot;:null,&quot;theme_var_background_pop&quot;:&quot;#0068EF&quot;,&quot;created_at&quot;:&quot;2023-05-13T19:11:52.088Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Lance Cummings&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;newspaper&quot;,&quot;is_personal_mode&quot;:false}},{&quot;id&quot;:2078204,&quot;user_id&quot;:129389476,&quot;publication_id&quot;:2075305,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:false,&quot;publication&quot;:{&quot;id&quot;:2075305,&quot;name&quot;:&quot;Top of the Stack&quot;,&quot;subdomain&quot;:&quot;uncw&quot;,&quot;custom_domain&quot;:null,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Cultivating tomorrow's digital creators, one post at a time&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bf7fd8d6-09c6-4aae-84d2-2a80967b2eae_1024x1024.png&quot;,&quot;author_id&quot;:129389476,&quot;primary_user_id&quot;:null,&quot;theme_var_background_pop&quot;:&quot;#0068EF&quot;,&quot;created_at&quot;:&quot;2023-11-02T02:24:31.791Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Lance 
Cummings&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;magaziney&quot;,&quot;is_personal_mode&quot;:false}}],&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null,&quot;status&quot;:{&quot;bestsellerTier&quot;:null,&quot;subscriberTier&quot;:5,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;subscriber&quot;,&quot;tier&quot;:5,&quot;accent_colors&quot;:null}}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://www.isophist.com/p/how-to-channel-the-goddess-of-wisdom?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!cnci!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd41b2ae-512f-4bbc-8ca0-1dc31a7a8641_500x500.png" loading="lazy"><span class="embedded-post-publication-name">Cyborgs Writing</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">How to Channel the Goddess of Wisdom in Your AI Collaborations</div></div><div class="embedded-post-body">Well, I&#8217;m back from a brief pause &#8230; after getting attacked by colds from two directions. 
My twin 5 year olds, and the college students in my classroom&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">2 years ago &#183; 7 likes &#183; 6 comments &#183; Lance Cummings</div></a></div><h2>Accessibility as Strategic Intelligence</h2><p>When content creators approach accessibility strategically rather than as mere compliance, they develop a form of cunning intelligence that naturally improves all aspects of content design. </p><p>Consider how accessibility principles mirror the foundational elements of effective AI collaboration:</p><p><strong>Clear structure helps everyone.</strong> When you organize content with proper headings, consistent navigation, and logical information hierarchy for screen readers, you&#8217;re also creating the kind of well-structured information that AI systems can process most effectively.</p><p><strong>Plain language serves multiple audiences.</strong> Writing that works for users with cognitive differences also works better for non-native speakers, tired readers, and AI systems trying to extract meaning from your content.</p><p><strong>Alternative formats expand reach.</strong> Creating content in multiple modalities&#8212;text, audio, visual&#8212;doesn&#8217;t just serve users with different abilities; it provides AI systems with richer, more contextual information to work with.</p><p>What the CAKE speakers demonstrated repeatedly was that accessibility-first thinking creates content that performs better across every metric that matters: user engagement, search visibility, translation quality, and yes, AI effectiveness.</p><p>Now, some might point out a technical limitation here: LLMs process text by breaking it down into tokens and calculating positional relationships between them. In this process, semantic structures like heading tags or ARIA labels get tokenized just like regular text. 
The model doesn&#8217;t inherently &#8220;understand&#8221; that an <code>&lt;h1&gt;</code> tag signals something different from a <code>&lt;p&gt;</code> tag the way a screen reader does.</p><p>This is technically accurate&#8212;but it misses the bigger picture about how accessibility thinking shapes AI content systems.</p><p>Accessibility-first design influences AI effectiveness at multiple levels beyond just text generation. When you structure content with clear information hierarchy, you improve how retrieval systems find and rank information before any LLM even starts generating text. </p><p>When you write in plain language and chunk information logically, you provide better context windows for models to work with. </p><p>When you create multiple modalities and build in redundancy, you enable more robust content systems that can adapt to different uses and users.</p><p>The real value is in how accessibility principles shape content architecture, information workflows, and system design, all of which influence how well AI tools retrieve, process, and work with information at every stage, not just during generation. </p><p>Many AI-assisted content systems rely on structured data, metadata, and semantic layers for retrieval, routing, and orchestration. These elements remain crucial even if they get tokenized away during the generation step itself.</p><p>The goal isn&#8217;t to make LLMs magically &#8220;understand&#8221; accessibility features. It&#8217;s to recognize that the strategic thinking that produces accessible content also produces the kind of systematic, well-organized information architecture that makes AI systems more reliable and effective.</p><h2>The Metis Advantage in Content Strategy</h2><p>This approach to accessibility represents a sophisticated form of metis because it requires seeing beyond immediate constraints to identify opportunities for systemic improvement. 
</p><p>Rather than viewing accessibility requirements as limitations or constraints, this way of thinking recognizes them as design principles that enhance the entire content ecosystem.</p><p>A technical writer applying this approach might create documentation that works seamlessly for users with visual impairments, non-native speakers, and AI summarization tools simultaneously, not through separate efforts, but through integrated design decisions that serve all these needs at once.</p><div class="pullquote"><p>Instead of &#8220;How can we use AI to create more content faster?&#8221; perhaps we should ask &#8220;How can we create content systems that serve the widest possible range of human needs while also enabling AI to enhance rather than replace human insight?&#8221;</p></div><p>A content strategist with this mindset designs information architectures that support assistive technologies while also providing the semantic richness that makes AI tools more effective and reliable.</p><p>Most discussions about AI in content creation focus on efficiency, automation, and keeping up with technological change. But the accessibility perspective suggests we&#8217;re asking the wrong questions entirely. </p><p>Instead of &#8220;How can we use AI to create more content faster?&#8221; perhaps we should ask &#8220;How can we create content systems that serve the widest possible range of human needs while also enabling AI to enhance rather than replace human insight?&#8221;</p><p>This reframe reveals accessibility not as a specialized concern but as a fundamental lens for understanding content quality in an AI-enabled world. 
When you optimize for different ways of thinking, different abilities, and different technologies, you naturally create the conditions where human-AI collaboration flourishes most effectively.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.isophist.com/subscribe?"><span>Subscribe now</span></a></p><h2>The Path Forward</h2><p>The CAKE Conference reinforced my conviction that accessibility thinking should be central to any serious content strategy, especially one that involves AI tools. This isn&#8217;t just about moral imperative (though that matters too)&#8212;it&#8217;s about strategic advantage.</p><p>Organizations that embed this kind of strategic thinking into their content operations will build more resilient, more effective, and more genuinely innovative content systems. They&#8217;ll create content that serves diverse human needs while also providing AI systems with the structured, contextual information needed for truly helpful automation.</p><p>The ancient Greeks understood that metis, or cunning intelligence, was essential for navigating complex, changing conditions. As we navigate the rapidly changing world of AI-enabled content creation, accessibility thinking offers us exactly this kind of practical wisdom: a way to find clever, adaptive solutions that serve human needs while harnessing technological capabilities.</p><p>The most sophisticated AI strategy isn&#8217;t about chasing the latest tools or maximizing automation. 
It&#8217;s about developing the cunning intelligence to create content systems that expand access, enhance understanding, and amplify human creativity rather than replacing it.</p><p>That&#8217;s a form of metis worth cultivating.</p><div><hr></div><p><em>What connections do you see between accessibility thinking and your own AI content strategy? I&#8217;d love to hear about your experiments with this approach&#8212;reply to this email and share your insights.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.isophist.com/p/accessibility-first-thinking-in-the/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.isophist.com/p/accessibility-first-thinking-in-the/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item></channel></rss>