The Techne Behind Agent Skills
It's not just about tasks

There’s an old philosophical grudge against craft.
It goes back to Plato, who compared rhetoric to cooking in his dialogue Gorgias. Both rhetoric (or speech-making) and cooking produce pleasing results, but neither understands the true principles behind what it makes.
He called this mere empeiria: a knack, a habit, an unreflective routine. You learn what works without knowing why. For Plato, it's the lowest form of knowledge, and it's why many philosophers (and now academics) see practice as less worthy of attention.
Aristotle pushed back.
In the Nicomachean Ethics, he defines techne as “a productive state that is truly reasoned” … not just the ability to make something, but making with genuine understanding of the principles behind it.
The practitioner with empeiria knows that something works. The practitioner with techne knows why it works.
I’ve been thinking about this distinction a lot lately while building custom AI agent skills. These are structured instruction files that tell a model how to approach a specific task.
Most people build them the way Plato described rhetoric: run it, tweak it, run it again until the output looks right. Learn what works without ever understanding why.
That’s empeiria. And it only gets you so far.
The principled understanding (or techne) comes from a discipline that has spent decades thinking carefully about how humans organize and communicate information.
For example, technical communicators, particularly those working with structured authoring systems like DITA, have long organized content into five functional categories called information types.
These aren’t arbitrary divisions. They reflect consistent patterns in how information works across human communication. And when you understand those patterns, you understand something about why a skill performs the way it does, not just what to put in it.
Manny Silva recently made the case that agent skills are a form of documentation, held to a higher standard of precision than anything written for human readers.
He’s right. But I’d push the argument further.
The reason most skills underperform isn’t just that the steps are vague. It’s that the whole file is often written as a single type of content.
But a well-performing skill actually contains several kinds of content (not just tasks), each signaling a different purpose to the model through patterns it has been shaped to recognize.
Information types and what they signal
If you’ve been following this newsletter, you’ve seen information types come up before in how I structure AI-ready knowledge and even some prompts. For those newer to the idea, here’s the short version.
Information types are five distinct patterns of content, each with a recognizable purpose:
Reference states something the reader needs to know.
Concept explains something the reader needs to understand.
Principle advises what to do or not do, and when.
Process illustrates how something works at the system level.
Task instructs the reader on the specific steps to take.
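To make the five types concrete, here is a hedged sketch of how each might read inside a skill file for a hypothetical report-writing task. The wording, file path, and task are illustrative, not prescribed:

```markdown
<!-- Illustrative only: each line shows one information type in miniature. -->

Reference:  The report template lives in `templates/report.md`. (states a fact)
Concept:    A status report is a reader-facing summary, not a log of activity. (builds understanding)
Principle:  Always lead with the decision needed; never bury it below the details. (advises behavior)
Process:    Drafts flow from notes to summary to review before publication. (shows the system)
Task:       1. Read the notes. 2. Draft the summary. 3. Flag open decisions. (gives steps)
```

Each line carries a different purpose, and that purpose is legible from the pattern alone.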
These categories come out of the practice of content strategy and show how philosophy and academia can inform our practice. These information types aren't just best practices; they're patterns humans have developed and used consistently across centuries of written communication: in manuals, textbooks, legal codes, scientific papers, policy documents.
Any sufficiently large corpus of human-produced text is saturated with them, which is why they work with AI.
A model shaped on that corpus has encountered these patterns countless times. When you write in them deliberately, you're not teaching the model something new. You're giving it a clearer signal about what kind of content this is and what it's for.
In classical rhetoric, Aristotle described topoi as standard categories of thought that speakers could draw upon to construct a response.
Information types work similarly … not as cognitive locations, but as recognizable patterns that carry purpose. Organize your content by type, and you're giving AI a strong signal. Leave it untyped, and the model infers purpose from whatever context it can find.
Sometimes that inference is fine. Often it’s close but wrong in ways that are hard to diagnose. It becomes a skill that technically works but doesn’t quite perform.
The Problem with Task-Only Skills
If you don't yet know what a skill is, it is simply a set of instructions an AI model refers to for a specific task.
Sound familiar? Well, it should. It's basically a prompt.
When you ask Claude to create a document and watch it work, you'll notice it references a skill. That skill is a markdown file (.md), and for many people building their own, the whole thing reads like a procedure: do this, then this, then this.
Task information is exactly right for execution steps. But a skill is also a description of what the output is supposed to be, a set of behavioral constraints, a map of how this task connects to the larger workflow, and the metadata that triggers the skill in the first place.
When all of that gets written as task steps (or skipped entirely) the model fills the gaps from general training. And that training may not match your context.
This is why a skill can produce technically correct output that still feels off. The steps were followed. But the patterns that signal what kind of output this is, what constraints apply, and how this task fits a larger system were absent. The model filled that gap from wherever it could.
Or, as I’ve noticed, the model calls up the skill at the wrong time (or fails to call it up at the right time).
So I thought: why not information-type my weekly class plan skill? This is the skill I use across projects to make sure the weekly plans I send students look the same and function the same way.
This saves me considerable work while adding value to the student experience.
My rewritten class plan skill produced noticeably better output on the first run. The surface result looked similar, but the output was more precise and consistent.
Here is how I organized the skill with information types.
A Concept block comes first. What a weekly class plan is. Not a schedule, but a student-facing document that bridges course design and classroom practice, written like a knowledgeable colleague talking to a student, not a syllabus. Without this, the model supplies its own understanding of “weekly class plan.” Sometimes that’s fine. Often it’s close but wrong in ways that are hard to diagnose.
Principle blocks group behavioral constraints separately. What the model must always do, what it must never do, under what conditions it should stop and verify. Writing them in their own section, in direct second-person language, makes them clearer.
A Process block gives system awareness, or how the skill fits into a bigger context. For example, most of my weekly plans point toward a specific deliverable. I prefer to introduce an idea or skill, then spend time in class applying it in ways that move students forward on a deliverable. A model that only has the task steps produces a plan. A model that understands the course arc produces a plan that fits my pedagogy.
Reference is the front matter, which is usually the name and description that trigger the skill. Claude scans that description against your request to decide whether to load the skill at all. Vague descriptions mean missed triggers. That’s not a configuration problem. It’s a writing problem.
The Task steps come last, as the final instruction before execution.
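Put together, that ordering might look like the skeleton below. This is a sketch of my own convention, not an official format: the frontmatter fields (`name`, `description`) follow the skill-file layout Claude reads, but the section labels and wording are illustrative.

```markdown
---
name: weekly-class-plan
description: Creates a student-facing weekly class plan for a course week. Use
  when the user asks for a weekly plan, class plan, or week overview for students.
---

# Weekly Class Plan

## What this is (Concept)
A weekly class plan is a student-facing document that bridges course design and
classroom practice. It reads like a knowledgeable colleague talking to a student,
not like a syllabus.

## Constraints (Principle)
- Always address the student directly in second person.
- Never copy syllabus language verbatim.
- If the week's deliverable is unclear, stop and ask before drafting.

## How plans fit the course (Process)
Each plan introduces an idea or skill, then applies it in class in ways that move
students toward the week's deliverable. Plans build on each other across the
course arc; they do not stand alone.

## Steps (Task)
1. Identify the week's deliverable and the skill it requires.
2. Draft the overview in the voice described above.
3. List in-class activities that apply the skill.
4. Check the plan against the constraints before returning it.
```

Notice that the frontmatter description does double duty as Reference: it is what gets scanned against the request, so it names the triggers explicitly.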
The Techne of Designing Context
Restructuring this skill didn't just improve one output; it now provides context for every moment where I might need a weekly plan for my students.
This is why structured prompting is still relevant. It's the starting point for context engineering and system design.
A structured prompt is how you give the model clear instructions for a single interaction.
Structured knowledge is how you build the environment the model operates in.
Context engineering is the full practice of designing that environment intentionally. Not just what you ask for, but everything the model carries into the task.
Agent skills are just another form of prompt design. Prompt design is just another form of information design. And information design, done well, is how you actually build the context around your AI workflows, systematically with the same discipline that technical writers have been applying to human readers for decades.
This is what Aristotle meant by techne. Not the knack of someone who has run the same skill fifty times and learned what tends to work.
The reasoned understanding of someone who knows why it works.
This is the kind of knowledge that content professionals bring to our conversations about AI, and what makes writers more valuable than ever in the age of AI.


