I've been exploring AI "out loud" on LinkedIn and Substack for years, but I realized I needed something more focused to bridge the gap between complex academic research and practical application. While I love researching, I get more fulfillment from engaging with ideas online than from traditional academic publishing—though I still need to publish for my day job.
That's why I'm starting Deep Reading: to take key research on AI and writing and make it accessible in bite-sized episodes that content professionals, creators, and educators can actually use. Think of it as academic research made practical, without losing the depth that makes it valuable.
In our inaugural episode of Deep Reading, we dive into Lynette Hunter's foundational 1991 article, "Rhetoric and Artificial Intelligence," which establishes an important connection between classical rhetoric and modern AI that remains relevant even in today's era of generative AI.
(If you'd like the PDF, just send me a direct message via Substack or LinkedIn.)
Key Insights from This Episode:
The Historical Foundation: Hunter traces the tension between rhetoric and AI back to the 16th and 17th centuries when science began separating itself from the humanities. Thinkers like Francis Bacon and Thomas Hobbes dreamed of a "pure language" that could precisely represent reality without the messiness of interpretation or context. Sound familiar? This same aspiration drives much of AI development today.
Tautological Worlds: Perhaps Hunter's most valuable concept is what she calls "tautological worlds" – self-contained systems operating according to their own internal logic, disconnected from our messy reality. This perfectly describes how modern AI models function: trained on vast text data and excellent at identifying patterns, but with no actual connection to the physical world or understanding of social contexts.
From Chess Rules to ChatGPT: Think of AI like a chess game: within the 64 squares of a chessboard, specific rules determine what's possible. The rules work perfectly within the game but say little about the world outside. Similarly, today's language models generate impressively coherent text following patterns they've observed but can confidently present fiction as fact because they have no mechanism to distinguish between the two.
The Pattern Recognition Problem: When AI encounters something that doesn't fit its patterns, it simply "moves on, does not address the difficulty, sees it as failure rather than as the potential location for context." This explains why our AI tools produce impressively coherent text until they suddenly don't – generating hallucinations or confidently stating falsehoods.
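To make the "tautological world" idea concrete, here is a toy sketch (not how modern language models actually work, just a deliberately simplified bigram model over a made-up corpus): it learns only which word tends to follow which, so it can emit fluent-looking text while having no mechanism at all for checking that text against reality.

```python
import random

# Toy corpus: the model's entire "world" is this text and nothing else.
# Note it contains a falsehood, which the model learns just as readily as a fact.
corpus = ("the capital of france is paris . "
          "the capital of mars is paris . "
          "paris is a city . ").split()

# Build bigram transitions: which words have been observed to follow which.
transitions = {}
for a, b in zip(corpus, corpus[1:]):
    transitions.setdefault(a, []).append(b)

def generate(start, n=8, seed=0):
    """Follow observed patterns; no step ever checks the output against reality."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(n):
        options = transitions.get(words[-1])
        if options is None:
            break  # unseen pattern: the model simply stops; it cannot reason about the gap
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

Because "mars" follows "of" in the corpus, the sentence "the capital of mars is paris" is a perfectly valid output of this system: internally consistent, confidently produced, and wrong, which is the tautological-world problem in miniature.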
Practical Applications for Content Professionals:
Embrace Complementary Strengths: AI excels at pattern recognition while humans bring contextual understanding. Effective content operations leverage both rather than trying to make AI do what it fundamentally cannot.
Be Builders, Not Just Users: Our interactions with AI should be part of a conversation that shapes where AI goes. Don't just accept outputs at face value – actively shape these systems by providing contextual frameworks.
Reframe "Limitations" as Opportunities: The places where AI systems struggle aren't bugs – they're features that reveal the inherently social nature of communication and create opportunities for human expertise.
Hunter's analysis remains strikingly relevant more than three decades later, providing a thoughtful framework for understanding both the capabilities and limitations of today's AI writing tools.
By recognizing the historical and philosophical underpinnings of AI systems, we can work more effectively with them while preserving the uniquely human elements of communication.
About Deep Reading: A biweekly podcast where I take short, focused dives into research that helps us understand and implement AI writing systems in purposeful and ethical ways.