This is a copy of a keynote address I recently gave to colleagues in Islamabad, Pakistan for the Enhancing Pak-US Partnership: Faculty Training and Development
IIUI-UNCW Follow-University Partnerships Grant Project on September 11, 2023.
It is perhaps my best articulation of why and how I approach AI the way I do, so I thought I would share this with you.
Welcome, esteemed scholars and distinguished guests, to this important gathering aimed at enhancing the Pakistan-US partnership. The title of my keynote today is "Cyborgs in the Global Classroom: How Attitudes and Beliefs Shape Our AI Response."
In an era where technology is deeply integrated into every aspect of our lives, the question is no longer if we will interact with artificial intelligence, but how. This becomes especially significant in the context of education, where the influence of AI is rapidly expanding.
Today, I will explore how different attitudes—humanist and posthumanist—affect our interactions with technology. I will also delve into how my own religious background, in the Eastern Orthodox tradition, shapes my perspectives on this important issue.
I share this not to impose my own religious views but as an example to inspire us all to reflect on what underlies our attitudes towards technology. In a room filled with devout scholars from the Islamic tradition, I see an opportunity for us to build on our diverse perspectives like a grand mosaic, enriching our collective understanding and fostering an environment of mutual respect and learning.
Exploration of Humanist vs. Posthumanist Attitudes
In his 2002 article, "Why Technology Matters to Writing: A Cyberwriter's Tale," James Porter recounts his journey from pencil-based handwriting to the interconnected computer. Porter's tale is not just a nostalgic look back but a critical examination of how technology fundamentally shapes our writing processes. His story resonates with me, and it should with all of us here, for it addresses a universal question: Why and how does technology matter to writing?
Porter shares an anecdote about how, in his Catholic education, good penmanship was viewed as a sign of virtue, discipline, and character. This focus on the form and appearance of writing, rather than its content, reflects a product-oriented writing pedagogy that was prevalent in his early years … and probably still is today, to some degree.
Porter goes on to argue that we need to adopt a more posthumanist approach to technology and writing. He points out that technology often remains invisible, as do the ideologies, attitudes, and beliefs that those technologies embody, especially when we teach writing. Porter advocates for a recognition of technology as an intrinsic part of what it means to be human. In his words, technology shapes who we are, whether we like it or not.
Bruno Latour, a key figure in the field of Science and Technology Studies, offers a nuanced perspective on posthumanism that complements Porter's views. Latour argues that we exist in a complex network of not just human agents, but also non-human agents — things, objects, and technologies. In this network, all entities possess agency and contribute to actions and outcomes.
This idea is encapsulated in his Actor-Network Theory, where humans and non-humans are intertwined in a web of relationships that shape our experiences, including our approach to writing and technology. Latour's perspective forces us to reconsider traditional humanist notions of agency and control, inviting us to see ourselves as part of a larger, more interconnected system where agency is distributed.
Humanist Perspectives
Humanism, rooted in the belief that human beings are the center of the ethical, intellectual, and creative universe, has a profound influence on how we perceive and interact with technology, including writing and generative AI. Humanism often elevates the values of individual creativity, free will, and rational thought. In this framework, technology is often viewed as a tool, subservient to human agency and creativity. In essence, the human is the 'actor,' and technology is the 'acted upon.'
Humanism as an ideology has multiple facets, particularly in its interaction with technology. On one hand, there are humanists who view technology as a tool that can enhance human capabilities, serving to elevate our individual creativity, rationality, and free will. These individuals often see generative AI as a utility that can enhance educational outcomes and make our lives more efficient.
However, another strand of humanism is more skeptical, even resistant, to the incorporation of technology in human life. These are individuals who perceive technology, especially something as potent as AI, as a potential threat to the very qualities that make us human—our ability to think independently, our creative spirit, and our moral and ethical reasoning. Some even advocate for banning or heavily restricting AI in educational settings, fearing that it might "dumb down" the next generation, making them dependent on algorithms for thought processes that should be fundamentally human.
In a classroom setting, while one group of humanists might welcome AI tools as a means to improve writing skills, this skeptical subset would likely oppose it, worrying that reliance on such technology could erode the students' ability to think critically and write creatively. They argue that technology should not replace or even mimic human intellectual processes; the human mind is unique and should remain unchallenged in its abilities.
But there is a third way.
Posthumanism: A Paradigm Shift in How We Engage with Technology
Bruno Latour argues that we exist within a network of not just people, but also things, all of which exert agency. In a posthumanist framework, technology isn't merely a tool manipulated by human hands; it's an active agent that shapes and is shaped by human interaction. This perspective radically transforms our understanding of writing and generative AI, placing them within a larger system of relationships that influence and are influenced by each other.
Imagine a classroom where students are using AI to assist with their writing assignments. From a posthumanist perspective, the AI isn't just a 'tool' that the students are using. It’s a part of the classroom's network of agency. The software’s algorithms, programmed by human engineers with their own biases and perspectives, influence the way students construct sentences, develop arguments, and even think about the act of writing itself. Meanwhile, students' interactions with the AI tool feed data back into the system, possibly influencing future iterations of the software. Here, agency is distributed: humans, technology, and even the classroom environment itself are interconnected in a web of reciprocal influence.
While humanists may see generative AI as either enhancing or threatening human capabilities, posthumanists view this technology as part of a larger ecosystem of agency. For posthumanists, the question isn't just about how AI affects us, but also how we affect AI and how this mutual influence reshapes the learning ecosystem as a whole.
In humanism, the focus is largely on preserving or enhancing human qualities like rationality and creativity. Technology is often seen as either a means to this end or a threat to it. Posthumanism, on the other hand, sees these human qualities as existing in a network that includes technology, where agency is distributed and collaborative.
In a posthumanist framework, the dynamics between students, educators, and generative AI are transformative. We're not just teaching students how to use AI as a tool for writing or warning them of the perils of reliance on technology. Instead, we're preparing them to actively shape AI even as it shapes them.
Writers have been shaping the world for centuries with all kinds of technologies … why not AI?
Students are taught to see AI not merely as a program that corrects grammar or generates text but as an entity that has been programmed with specific biases, limitations, and capabilities. They are encouraged to question and probe these elements critically. For example, why does the AI suggest certain phrases over others? How might these suggestions influence the tone or argument of an essay? How can we re-train, fine-tune, or shape AI in better ways?
Just as students shape AI, the technology also exerts an influence on them. The algorithms in generative AI software might introduce students to new styles of writing or new ways of structuring arguments. The real magic happens when students take these new approaches and make them their own, incorporating them into their own unique style and perspective. This is not just a one-way flow of influence from technology to human but a mutual shaping of capabilities and potentials.
This is what I call Cyborg writing.
Cyborg Poetry
Let me show how I’ve been thinking about this in the world of poetry.
My love for language and interest in rhetoric started with Surrealist poetry: using language and technology to come up with new ways of seeing.
In the early 20th century, Surrealism emerged as an artistic movement that radically transformed our understanding of art, language, and reality itself. Surrealists, from Salvador Dalí to André Breton, broke free from the constraints of rational thought to create alternate realities that defied logic. They were innovators not just in what they said but also in how they said it, exploring new ways of seeing through inventive uses of language and pioneering technologies. One classic example is the "exquisite corpse," a collaborative drawing or writing game that produced unpredictable, dream-like sequences.
In this game, a group of individuals create a piece of art, be it a picture or a piece of writing, without any one person having knowledge of the whole at any given time. For example, in a drawing, one participant might start with a sketch, then fold the paper to hide part of their drawing and pass it on to the next person to continue. The idea is to produce unpredictable, dream-like sequences that defy logic.
As a technology for creating new ways of seeing, it allows for the breaking down of conventional thought patterns and invites the participants to see how different ideas can connect in unexpected ways. Its collaborative and unpredictable nature stimulates creative thinking and encourages experimentation. Each participant adds their own unique touch, resulting in a piece that could not have been created by a single individual alone.
This is one way to approach the use of generative AI in the writing process.
Drawing inspiration from Surrealist techniques, I have found that artificial intelligence can serve as a powerful ally in pushing the boundaries of language and form. By incorporating the principles of Surrealism, I encourage students to see AI as a tool for breaking free from conventional thought patterns and exploring new avenues of expression … not for taking over their ways of writing.
Since I first encountered AI in 2020, I’ve been exploring how to use AI for poetry in these ways, usually posting the results on Twitter. For example, I might generate a list of seemingly random words—'Candle,' 'Dental,' 'Trail,' and so on—which then become the raw material for a collaborative poem, co-authored with the AI. I write lines of poetry, but I also generate new lines of poetry and new ideas for metaphors by hitting the generate button over and over again.
Do I cut and paste poetry from AI? No.
I pick and choose which lines or images I like and incorporate them into my working poem. AI isn’t writing for me … it is opening my mind to new possibilities and more writing choices.
Instead of viewing these words as random or disconnected, I see them as opportunities to create something new. The resulting poem is a product of both AI and human creativity, each feeding into and enriching the other in an ongoing dialogue.
Posthumanism and the New Classroom Dynamics
This experience serves as a perfect illustration of posthumanist theory in action. In a posthumanist classroom, AI is not merely a tool to be used; it is an active agent in a complex network of relationships that includes human students, educators, and even the very concepts being taught. This perspective radically changes how we approach education, particularly in subjects as deeply human as creative writing.
We are always working out some kind of exquisite corpse.
By incorporating AI as a creative partner, we're teaching students not just how to use technology, but how to co-exist and co-create with it. This is especially relevant today, as AI technologies become increasingly integrated into all facets of human life. It's not just about learning to adapt to new technologies but understanding how these technologies are shaping and being shaped by human agency.
So, in a world where the boundaries between human and machine are increasingly fluid, the posthumanist approach offers a nuanced framework for understanding these complexities. It prepares students for a future in which they are not merely 'using' AI but actively engaging with it as a co-creator, co-shaper, and even a co-learner.
Consider a classroom setting where students are asked to co-write an essay with a generative AI tool. The assignment isn't just to produce a well-written essay but also to document how the AI's suggestions influenced their writing process, and vice versa. Students might find that the AI pushes them toward more formal language, or offers structuring suggestions they hadn't considered. Conversely, the students have the option to reject or modify AI suggestions, thereby providing data that could influence the AI's future behavior. The end result is a co-created piece of work that is a testament to the collaborative agency of both student and AI.
In this posthuman paradigm, the classroom becomes a dynamic ecosystem of interconnected agents, each exerting influence and being influenced in turn. The goal is not to master technology but to enter a symbiotic relationship with it, one where both humans and technologies are continually learning from each other.
This approach opens up new possibilities for how we think about education, technology, and the act of writing itself. It prepares students for a future where the boundaries between human and machine are increasingly blurred, and where the ability to navigate these complexities is a critical skill.
By adopting a posthuman perspective, we can cultivate a more nuanced and collaborative approach to generative AI, one that acknowledges and leverages the mutual shaping of technology and humanity.
How Religion Impacts My Approach to AI
Before we delve into the nuances of AI and its ethical landscape, I'd like to offer a brief introduction to how my Eastern Orthodox views align with posthumanist thought. Unlike humanists, who see human beings as complete unto themselves, my perspective, rooted in Eastern Orthodox teachings, posits that we are inherently incomplete beings. I am not a complete person in and of myself; my completeness comes from my relationships—with other people, with God, and yes, with technology.
Human nature isn’t static or unchanging. We are in a constant state of change precisely because we are perpetually in varying states of relationship.
Consider how the pandemic shifted our modes of interaction. We relied on technology to maintain relationships, work, and even worship. In this sense, technology completed us, filling a void that physical distance created, and altering our state of relationship with the world around us.
While we may be incomplete, it's essential to remember that humans are unique creatures. We're not simply programmable machines. Our uniqueness arises from the choices we make as rational and moral beings—qualities that technology like AI currently lacks.
We live in networks. Posthumanists love contradictions, paradoxes, and multiple perspectives. Consequently, so do Eastern Orthodox mystics.
You may not believe any of these statements. You might even find contradictions with what I've already laid out. But that's okay. That is what it means to be human. We live in contradictions, paradoxes, and multiple perspectives all at once.
With this foundational understanding, let's explore my core beliefs about AI, interpreted through my Eastern Orthodox lens.
Belief 1: AI is not human, but it is a part of our humanity.
In Eastern Orthodoxy, the understanding of humanity is deeply rooted in the concept that we are made in the "image and likeness" of God. This endows us with unique attributes like rationality, freedom, and moral responsibility. AI, while an extraordinary tool birthed from human ingenuity, lacks these divine attributes. It cannot possess moral responsibility, nor can it make ethical or rational decisions in the way humans can.
However, this does not mean that AI exists in isolation from our humanity. Quite the opposite. AI serves as a mirror, reflecting both our virtues and our vices. It embodies our quest for knowledge and our ability to create, serving as an extension of our God-given rationality. In this sense, AI is not separate from us but a part of our collective human endeavor. It is a part of our humanity, even if it is not human itself.
In the Eastern Orthodox view, all creation is interrelated and interconnected. AI, as a product of human creation, is therefore a part of this cosmic tapestry. While it isn't human, its ethical and moral impact falls entirely on human shoulders. The choices we make in developing and deploying AI are a testament to our moral character and spiritual state.
By this understanding, AI could be seen as a new frontier in our continual journey toward "theosis" or "deification." In Eastern Orthodoxy, theosis is the transformative process of becoming more like God, of fulfilling our divine potential through virtuous living and communion with the divine. The ultimate goal of human life is to become deified, in a sense, to become more aligned with God’s will, sharing in His divine attributes while remaining distinct from Him.
The right use of AI can help us become more virtuous, drawing us closer to the divine, while its misuse could steer us further away.
Belief 2: AI is not good or evil; it just is. There are just right and wrong uses of AI.
The Eastern Orthodox tradition posits a nuanced understanding of good and evil, especially as it relates to human action and moral choice. Just like technology itself, AI is neither inherently good nor inherently evil; it is a tool created by human beings. Its ethical value lies not in its existence but in how it is applied. AI becomes an instrument for either virtue or vice based on human choices and the context in which it is used.
In the field of Rhetoric, similar principles apply. As a rhetorician, I hold that there is no such thing as inherently good or bad rhetoric; rather, there is only the right use of rhetoric. In educational settings, we don't teach students to use 'good' or 'bad' language, but to use language correctly and effectively to achieve specific goals while being mindful of ethical considerations.
This concept parallels the Eastern Orthodox view on the right use of technology and AI. Just as students are trained to use rhetoric in a way that is ethical, purposeful, and aligned with their objectives, we should also be training ourselves and others to use AI in a manner that is ethically responsible and aligned with broader human and divine goals.
This ethical instruction becomes especially critical as AI continues to play a more significant role in various aspects of our lives, from healthcare to governance. The goal is not just to teach technical proficiency but to instill a sense of moral and ethical responsibility—the 'right use' of AI, so to speak.
This perspective aligns well with the Orthodox concept of "orthopraxy," or correct action. Orthopraxy emphasizes the importance of ethical conduct and the 'right use' of one's beliefs in everyday life. In the context of AI, this means considering not just what can be done with the technology, but what should be done.
Belief 3: Right Use of AI Can Open Up New Ways of Seeing the World
The purpose of religion, particularly from an Eastern Orthodox perspective, is not to narrow our view of the world but to expand it. It's not about dispensing with mystery and ambiguity but embracing them as avenues to divine wisdom. This is why Eastern Orthodoxy respects the divine wherever it is found—whether in another person's religion, their culture, or even in the intricate processes of AI.
When we engage with AI responsibly and ethically, we are not just using a tool; we are participating in a form of "living theology." We are extending our capacity to see the world in new ways, to interact with it through new paradigms, and to deepen our understanding of the divine. This is why my religion is incomplete without the religions and perspectives of others; the divine is not limited to one tradition, one culture, or one way of thinking.
This aligns closely with the idea of "right use" in Eastern Orthodox thought and in rhetoric. As a rhetorician, I advocate for the "right use of rhetoric"—not merely effective use, but ethical and purposeful use. Similarly, AI should be used not just effectively but ethically and meaningfully. It's not about AI replacing human abilities or values but augmenting them, enriching our collective experience of the world, and bringing us closer to an understanding of the divine.
In the end, life is about interaction—with people, processes, things, and ultimately, the divine. Through responsible and ethical interaction with AI, we have the potential to enrich these interactions, opening ourselves up to new ways of seeing the world and new ways of being in it.
An Invitation to Dialogue
I've shared these beliefs not to convince you of anything, but to provide a lens through which you can understand why I engage with AI the way I do. My intention is to provoke thought, to challenge, and to invite dialogue.
And you might be wondering, "Did Dr. Cummings use AI to write this keynote?"
Before you rush to deploy the nearest AI detection tools (which don’t work, by the way), allow me to put your curiosity to rest: Yes, I did. This keynote is almost entirely machine-generated text.
Yet, these are still my words, informed by my beliefs, my ethics, and my ways of thinking.
To generate this keynote, I provided the AI with over 3,000 words of my own content and drafted this keynote through three hours of dialogue with ChatGPT Plus.
This keynote would not exist without me … nor would it exist without AI. This keynote is a cyborg.
This is also a personal invitation—a call to engage in an open and meaningful dialogue about how our attitudes, preconceptions, and beliefs shape our interactions with AI.
In acknowledging this, I hope to prompt you to question not just the technology but the philosophy and ethics that guide it. After all, AI is not an isolated entity; it's a part of our ever-evolving relationship with the world, shaped by our collective beliefs and actions.
I eagerly invite you to join this dialogue. Because, in the end, our approach to AI is not just about algorithms and data; it's about humanity, morality, and perhaps even the divine.
By the way, I tested out Turnitin’s AI detector for the first time. Turnitin thinks this is only 23% machine text. It is, in fact, over 95%.
Conclusion
In sharing these beliefs and perspectives, my aim has not been to dictate a universal truth, but to invite each of us into a deeper consideration of the complex relationship we share with technology, and specifically, AI.
Just as my Eastern Orthodox faith informs my approach to AI, your own cultural, religious, or philosophical frameworks might offer you unique insights into how we should interact with this ever-evolving technology.
And, yes, we all can learn from each other’s unique perspectives.
I believe we are incomplete beings, constantly seeking completion in relationship—with other people, with the divine, and yes, with technology. AI is not separate from our humanity; it is a mirror reflecting our ambitions, our ethics, and our collective future. It holds neither intrinsic good nor evil but exists within the moral and ethical context we place it in.
In this room, we have a mosaic of worldviews. By sharing and contrasting our beliefs, we enrich our collective understanding and, perhaps, come closer to the responsible, ethical use of AI that respects human dignity and divine purpose.
I used AI to write this keynote, but the beliefs and attitudes it expresses are my own. In that sense, the technology has become a part of my rhetorical strategy, my teaching philosophy, and perhaps even my spiritual practice. It's an extension of my humanity, and it invites dialogue.
As we move forward into a future increasingly intertwined with artificial intelligence, let's not shy away from the difficult questions or the moral implications.
Let's embrace the complexity, the contradictions, and even the paradoxes. Because it is within that intricate web of human experience that we find the most profound opportunities for understanding, growth, and communion—not just with each other but with the divine spark that resides in all aspects of our lives.
Thank you, and I look forward to your insightful comments and questions.
Beautiful work! As I read, I caught myself thinking that the professor speaks better than I write: no filler words, just clear and concise prose. Ha ha ha! I thought it was simply a transcript of your dialogue with someone.😇
In any case, it is obvious to me that a lot of work went into this; it takes real creativity and an analytical mind to create something like this with AI.🙏🏻👏
I can work on a post about this, but I find it hard to describe. Probably the most important detail is that I use ChatGPT's code generator. I find that it does a much better job analyzing content and following instructions … though stylistically it is not the best.
But here is an outline:
- I start with a structured prompt that clearly outlines the context and goals of the writing.
- I upload relevant content that I've already written.
- Work with ChatGPT to agree on an outline.
- Then draft one section at a time, telling ChatGPT how to extend each section.
It eventually forgets what it is doing and I have to collect all the pieces myself ... but the speed at which you can compose something like a keynote is pretty amazing.
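For readers who like to see the shape of a workflow, here is a minimal Python sketch of the section-by-section drafting loop described above. The prompt wording, section names, and the stubbed `generate` function are illustrative assumptions, not my actual prompts; in practice the `generate` call would go to a chat model, and the collected pieces would be assembled by hand at the end.

```python
def build_context(goals: str, source_material: str) -> list[dict]:
    """Steps 1 and 2: a structured prompt plus the writer's own content."""
    return [
        {"role": "system", "content": f"Context and goals: {goals}"},
        {"role": "user",
         "content": f"Here is my own writing to draw on:\n{source_material}"},
    ]

def draft_sections(messages: list[dict], outline: list[str], generate) -> dict[str, str]:
    """Steps 3 and 4: agree on an outline, then extend one section at a time.

    Long conversations drift (the model 'eventually forgets what it is
    doing'), so each drafted section is also collected into a dict,
    letting the writer reassemble the pieces manually.
    """
    drafts = {}
    for section in outline:
        messages.append({
            "role": "user",
            "content": f"Draft the next section: {section}. "
                       "Extend it in my voice, using my material.",
        })
        text = generate(messages)  # stand-in for a real chat-model call
        messages.append({"role": "assistant", "content": text})
        drafts[section] = text
    return drafts

# Example run with a stubbed model:
stub = lambda msgs: f"[draft based on {len(msgs)} messages]"
ctx = build_context("keynote on AI attitudes", "my prior writing on posthumanism")
pieces = draft_sections(ctx, ["Introduction", "Humanism vs. posthumanism"], stub)
```

The point of the sketch is the structure, not the code: the context is set once, each section is requested as its own turn, and the writer keeps an independent copy of every drafted piece.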
I think of this as kind of like having my own speech writer. Joe Biden doesn't write his speeches (most likely); he tells other people how to write them. 😆