
The Man Who Predicted ChatGPT in 1998

Deep Reading Podcast, Ep. 3

This post accompanies the Deep Reading episode "The Man Who Predicted ChatGPT in 1998." Listen in your favorite podcast app, or find the full transcript with notes at Cyborgs Writing. New episodes drop biweekly (or so 😉).


Your AI outputs are only as good as your information design.

Most of us approach AI tools like we approach paragraphs—dump everything together and hope for the best. But what if there was a systematic way to structure information that made both human communication and AI interactions dramatically more effective?

In 1998, a researcher named Robert Horn wrote something that sounds eerily prophetic today: "Any subject matter consists of all the sentences and images used by human beings to communicate about that subject matter."

He was describing what would become LLM training data, twenty-four years before ChatGPT existed.

🔍 What Horn Got Right About AI

Horn developed "Information Mapping" in the 1960s, creating six core information types: procedure, process, principle, concept, fact, and structure. But his bigger insight was recognizing that information exists in relationships, not isolation.

Sound familiar? That's essentially how transformer models work: learning relational patterns across massive amounts of text.

Horn's most prescient observation: "Anything written is potentially instructional." Every clear email, well-structured report, and thoughtful explanation teaches something about how humans communicate. This instructional quality embedded in all human writing is precisely what makes LLM training possible.

🧱 Four Principles That Work for Humans and Machines

Horn's framework rests on principles that sound remarkably modern:

Chunking: Break information into functional units instead of rambling paragraphs that mix definitions, examples, and procedures. When you feed AI well-chunked content, it doesn't have to untangle functional chaos.

Relevance: Each chunk serves a specific purpose. Think about how much better your AI interactions become when you use clear sections like "Context," "Task," "Examples," and "Constraints."

Labeling: Make the function explicit. Instead of generic prompts, try organizing by information type: "Concept needed: Definition of X. Procedure needed: Five steps. Principle needed: Why this works."

Consistency: Use the same structure for the same type of information. This helps both humans and AI recognize and generate different content types more reliably.
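To make these four principles concrete, here's a minimal sketch in Python. The build_prompt helper and the section labels are my own illustration, not Horn's notation or any particular tool's API; it simply assembles labeled chunks into one consistently structured prompt:

```python
# A minimal sketch of chunking, labeling, and consistency applied to a prompt.
# The helper and section labels are illustrative, not Horn's own notation.

def build_prompt(chunks: dict[str, str]) -> str:
    """Join labeled chunks into one consistently structured prompt."""
    return "\n\n".join(f"{label}:\n{content}" for label, content in chunks.items())

prompt = build_prompt({
    "Context": "Writing for new managers with no project experience.",
    "Task": "Draft a short blog post introducing project management.",
    "Examples": "Match the tone of a friendly onboarding guide.",
    "Constraints": "Under 800 words; define jargon on first use.",
})
print(prompt)
```

Because every prompt built this way shares the same labeled shape, both the human reviewing it and the model consuming it can rely on where each kind of information lives.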

🛠️ Three Ways to Apply This Today

1. Structure your prompts by information type. Instead of: "Help me create a blog post about project management that explains what it is and how to do it," try: "Context: Writing for new managers. Task: Blog post. Concept: Project management definition. Procedure: Five basic steps."

2. Treat your documentation as AI knowledge. When you write clear procedures and structured explanations, you're not just helping humans; you're creating connected information that improves AI outputs across your organization.

3. Develop templates based on function. Create standard structures: concept explanations always include definition, examples, distinctions; procedures follow consistent step formats; principles connect rules to reasoning.
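As a rough sketch of what function-based templates could look like in practice (the template fields and example steps here are my own hypothetical choices, not a standard format), each information type gets a fixed template so it always renders the same way:

```python
# A hedged sketch of function-based templates; field names are hypothetical.
TEMPLATES = {
    "concept": (
        "Concept: {name}\n"
        "Definition: {definition}\n"
        "Examples: {examples}\n"
        "Distinct from: {distinctions}"
    ),
    "procedure": "Procedure: {name}\nSteps:\n{steps}",
    "principle": "Principle: {rule}\nBecause: {reasoning}",
}

# Usage: render a procedure with a consistent numbered-step format.
steps = "\n".join(
    f"{i}. {step}" for i, step in enumerate(
        ["Define scope", "Set milestones", "Assign owners",
         "Track progress", "Review outcomes"], start=1)
)
print(TEMPLATES["procedure"].format(name="Basic project management", steps=steps))
```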


💡 Why This Matters Now

Horn understood something we're just discovering: information design isn't just about organizing content—it's about creating systematic approaches to knowledge that scale across users and contexts.

In our AI age, those who can structure information strategically won't just get better outputs from ChatGPT. They'll influence what millions of AI-generated texts say and how they say it.

The future belongs to the information designers, not just the writers of paragraphs.

📎 Check out my Robert Horn Sources on my Readwise page.


What's your experience with structured content in AI workflows? Have you noticed better results when you organize prompts by function? Share your insights in the comments.

Listen to the full episode for deeper context on Horn's research and more practical applications. Subscribe to Deep Reading via Substack, any podcast app, or RSS.
