Tools change their users. Consider the cooper with a hunched back from leaning over his barrel. The guitarist’s fingers toughen and stretch to meet the instrument’s demands. Some tools shape more and others less. The impact on our bodies is often visible sooner than the impact on our minds. When considering technology, we must be alert to how these tools are shaping not only our environment but also ourselves, especially when it concerns the process of education.
In their recent book Are We All Cyborgs Now?, Joshua Pauling and Robin Phillips offer a copious collection of essays covering politics, education, lifestyle, AI, AR/VR, social media, TV, and smartphones. The book is timely: technological development around AI and AR is accelerating, and the default conception of all technology as “just another tool” is inadequate to the moment. Are We All Cyborgs Now? is a thorough blend of the philosophy surrounding technology, from the techno-utopians to the Luddites (a name that should not be used as a slur for the wise), and of practical counsel about its use.

Are We All Cyborgs Now? by Robin Phillips and Joshua Pauling
The underlying concern of Phillips and Pauling is that technology, especially in the digital age, has so encroached on our lives that it crowds out aspects of our humanity—both in how we function and how we relate to each other. The machine threatens to take over our lives unless we consciously resist assimilation. Most helpfully, the book not only contains philosophical consideration of technology vis-à-vis humanity, but also helpful strategies for warding off assimilation to the Borg. Educators, parents, and individuals will find action steps for escaping the digital cage. Create “sanctified space” away from technology–make your school a “phone-free zone.” I would recommend going even further in banishing technology from the regular pace of the classroom. Recover the common arts. The tactile nature of these arts connects us to reality and the way life actually is. Cultivate “focal practices” which bring people together around a shared experience. Slow down and enjoy the task for its own sake, not just the final result. Enjoy the three-hour process of cooking a meal, instead of taking shortcuts to get to the table.
AI promises that it will augment our abilities—more efficiency, production, and success. The danger is that it will lessen our humanity. We become less able to think, feel, or love. Nicholas Carr’s The Shallows describes the neuroplasticity of the brain and its rapid adaptation and conformity to the stimuli it is presented with. “Chatting” with an AI, for instance, is a radically different experience than reading a long, sustained argument in a book or engaging in dialogue with another human being. Our technology always changes us, something even Plato warned us about long ago.1 Approaches that treat tech as a merely neutral tool fail to deal adequately with reality. Before bringing AI into the classroom, therefore, we must consider how interacting with AI gives a particular cast and color to education.
Education and AI — What is Knowledge, and What is Teaching?
In the fifth section of the book, the authors consider AI, which may be the most misunderstood of modern technologies and therefore the most important for classical educators to grasp. How are we to approach teaching in light of the pervasiveness of AI, even in internet searches? Can it be used well for that activity we call “research”? What about writing? Ought we to spend time teaching students how to use AI well? What if they fall behind?
Phillips helpfully begins his chapter, “ChatGPT in the Classroom,” by reminding us what writing is for. Writing is not merely a post-activity assessment to see if students have acquired the content—a merely performative exercise in summary—but an “exercise for facilitating growth.”2 Phillips writes,
Consider that good writing, especially essays organized according to formal structures, can help you become a certain sort of person. This is because writing well can assist towards the traits constitutive to a well-ordered soul, such as critical thinking, epistemic virtue, attentiveness, linear cognition, creativity, and clarity of ideas.
That is, writing shapes us. We don’t always know what we think at the beginning, but by the difficult exercise of getting words to paper, we start to shape our ideas. They acquire density and weight. What may have been a foggy haze emerges from the mist as a distinct entity, features gradually revealed in sharper clarity. Students often forget that we do not actually know what we want to say until we have gone through the work of saying it. This is even before the work of editing and rearranging; this evaluative process further sharpens the mental acumen of the writer. He must discern good from bad, preserving only what is superlative in his writing.
We rob students of skill and ability when writing is outsourced to chatbots. If writing is thinking, and if a generation is coming that sees ChatGPT as no different than Grammarly, then we will soon have a generation that not only cannot write, but cannot think. A recent MIT study found that essay writers who relied on AI fared worse than those who never used it.3 Within just four months of AI use, their capacity to make connections and compose ideas had measurably atrophied. This is more than staying undeveloped or failing to progress; it is moving backward.
“Editing and refining your ideas (perhaps going over the same passages again and again to improve them) is part of the humbling process of a sophisticated mind.”4 Phillips rightly warns that a post-writing movement would bring us into a post-thinking crisis. Again, Carr’s The Shallows is instructive. The printing press did affect our thoughts.5 It forced mental clarity through long passages of reading and writing, and it spread these habits through the populace, beyond a small elite. AI will likewise change our ability to think, reason, and write, but it is much harder to see how this change is an improvement. Rather, it encourages short and shallow reflection, instant recourse to an “all-knowing” chatbot, and an inability to process information deeply.
Beyond the ludicrous speed at which AI can sort and process information, we can move quite easily into the realm of generating criticism, opinion, and judgment, approximating what the classical tradition calls the first canon of rhetoric: invention. Some have lauded this, but I would argue that AI shouldn’t be used for invention either. The practice of examining each common topic as it relates to a thesis, searching, formulating, and crafting the best and most effective arguments, is instrumental to thinking quickly and clearly. We do ourselves and our students a disservice when we encourage AI for the initial research and invention of arguments. Perhaps here is another point in favor of Dr. Chris Schlect’s encouragement to replace the thesis with declamation.
But what about AI for research? Leaving aside the current issues with accuracy, supposing that AI could reliably provide sources and citations, would it be acceptable to use AI in research or invention? To answer yes is to misunderstand the difference between knowledge and wisdom. As Phillips points out, we are drowning in information. Our plight is not to find information or facts but to be able to discern reliability and truthfulness. He references the Soviets, who did not try to suppress information but demoralized their victims so that they could not believe the truth even when confronted with it.
Likewise, the problem for our students is not that they have no access to information but that they have too much of it. Supplying them with an AI to answer every question that pops into their head isn’t developing the virtue of discrimination, forethought, or quiet meditation. It is the same issue with smartphones multiplied one hundredfold. We no longer have a moment of quiet contemplation to be alone with our thoughts. This creates shallow thinkers with shallow lives, only capable of expressing the shallowest of thoughts.
With all that said, one might follow Phillips’s conclusion that we ought to be teaching wise tech habits. This is something that one will likely hear frequently from parents: “We need to be teaching tech and media literacy,” or “They’re going to have to use it anyway, might as well teach them how to use it well.” The list goes on. But Phillips cites a good response in C.S. Lewis: “Good philosophy must exist, if for no other reason, because bad philosophy needs to be answered.” He argues that we should teach students to ask questions about the proper use of AI for information retrieval, evaluation, behavior, and habits. The sentiment is admirable, but I fear even this may be too glib.
If we take the quote by C.S. Lewis seriously, it is not a justification for incorporating AI into the classroom, but rather an argument, in this context, for teaching “good philosophy” that resists assumptions about the technological imperative. To help with our response to this new tech, Pauling attempts to steelman the approaches of the Amish and Luddites to technology. He notes that the Amish approach technology in three ways: “they reject, they accept, they bargain.”6 Rather than a reflexive recoil away from technology as if it struck their patellar tendon, they prioritize community. They consider the effects a technology may have and approach it accordingly. Thus, because owning a car would rend the societal fabric, they reject ownership while permitting the use of a vehicle for specific tasks.7 Likewise, the Luddites rejected technology that “they perceived as negatively impacting themselves, their communities, their livelihoods, and the quality of their products.”8
Following this model, we should not rush too quickly to accept this technology and assume we must teach its proper use before we bargain or refuse to find out its proper role. We likely need to pause and slow our embrace of AI as inevitable. The problem with new technologies is that we often don’t even know what questions to ask surrounding our use of them and their adoption into social norms. This is why a classical Christian understanding is so necessary; it answers from a place of tradition and from the wisdom of the past.
Phillips and Pauling have thought deeply about this. But many fail to understand the nature of technology in general and this technology in particular. Technology is not neutral. There is no such thing as utopian technological progress. The comparisons to the printing press are not apt for our consideration of AI. As Phillips points out later in the book, technology can serve several purposes: it can extend man’s strength, man’s intellectual capabilities, aesthetic potential, or restore man’s health and lessen the impact of death.9 There seem to be applications of AI that could benignly expand man’s abilities—compiling a bibliography from a photograph of books, fixing formatting and citation issues, identifying a part number from a picture—donkey work. But far too often, the atomized, excerpted character of quick AI answers stunts our abilities and growth. Rather than laboring through a text, wrestling with its syntax and ideas, students (and parents alike) are tempted to rush for immediate answers. It’s like calling an Uber the moment a 5k makes you sweat.
AI cheapens knowledge and misses the point of education, reducing it to mere information transfer. From that perspective, AI looks like a superior tutor because it gets the information out faster. Why read a book? Are you a busy headmaster with no time to read anymore? Just have an amiable AI bot summarize it for you. But AI cannot practice virtue, particularly the virtue of prudence.
Most importantly, as Phillips and Pauling make clear in their book, AI cannot love. Any view that takes AI as a tutor will necessarily truncate education to the least important elements—knowledge and techne. This reduces education only to Aristotle’s “material” and “efficient” causes, not to the needed “formal” and “final” causes.
If education is the teacher loving something valuable in front of his students, then AI fails. If education aims to impart wisdom and virtue, AI cannot pass on what it cannot possess. That we are even asking the question reveals how truncated and inhumane our vision for education has become. The modern educational system really is about manipulation and data downloads. And while we in the classical renewal lament that it should not be so, how often do our epistemological assumptions fall into the same modern pattern of thought?
True education will always require flesh-and-blood teachers to embody the virtues before their students and to challenge them in their particular context to grow in their capabilities. Teachers have limited time with students. Rather than spending a good deal of time on wise tech habits, teachers would do better to focus on moral prudence and the judgment of ideas. If students learn these principles well, they will be able to apply them to their own technological practices, perhaps with some explicit help along the way. There is a tipping point where sex education ceases to be a benefit and actually becomes an incitement to vice. We don’t need to bring demons into the classroom to teach exorcism. In the same way, let us take care not to reduce wisdom down to some practical dos and don’ts around technology.
Even if the assignment permits, or if there is no “assignment” at all, using AI in any form to invent, arrange, or style writing is cheating. Consider a simple litmus test: ask, “Would I use Wikipedia for this?” If it is some kind of quick factual lookup—When was Anselm born? What was the Battle of Edington?—an AI search engine is fine. If it is to read a general summary of a topic to find where you need more research, again, this is likely acceptable. But the moment we ask AI to supply arguments, data, or support, we are relying on it too much, surrendering activities that are meant for us to do. The reward for refusing the light or ability we have already been given is more darkness, more blindness. Classical education should be about training the faculties of the mind, not crippling them. “We become like what we worship” is a maxim we would do well to remember.
Transhumanist Roots of AI
One negative impact of AI not often discussed among hoi polloi is its explicit connection to transhumanism. AI is one of the four pillars of transhumanism—the movement that sees humanity not as a fixed or sacred reality but merely as a phase, a transition in the grand evolutionary march towards deification. In the words of Filostrato, humanity is “the ladder we have climbed up by, now we kick her away.”10 Along with genetic manipulation, robotics, and nanotechnology, it reiterates the same ancient promises: you will not die. You will be as gods. Listen to how Sam Altman describes his OpenAI project:
The merge has begun—and a merge is our best scenario. Any version without a merge will have a conflict: we enslave the A.I., or it enslaves us. The full-on-crazy version of the merge is that we get our brains uploaded into the cloud. I’d love that. …
We need to level up humans, because our descendants will either conquer the galaxy or extinguish consciousness in the universe forever. What a time to be alive!11
Enslave the AI, or it enslaves us—level up humans. Merge with AI. You will be as gods. As Ray Kurzweil has said, “We’re merging with these kinds of computers and making ourselves smarter by merging with it.”12 Tristan Harris has reported that a goal of AGI development is the “inevitable replacement of biological life with digital life.”13 Elon Musk aims “to achieve a symbiosis with artificial intelligence.”14 When considering the place and use of AI, we must understand that those developing it are not seeking a tool to aid human flourishing, but an entity to replace humanity. Whether they could actually achieve their vision of ascending into the heavens is irrelevant; AI is a human replacement project.
Whatever benefits AI promises, it is traveling in the wake of the transhumanist ideology. While the average user may not dream of merging with AI, uploading his consciousness, and endlessly replicating in digital or cybernetic form, he still embraces the promise that he can transcend human limitations: to become more efficient, to have access to all knowledge. But AI’s promises ultimately fall flat.
We are humans and are meant to act like it. We have limits of time, space, knowledge, ability, and intellect. If I said that we could bring back voices from the past and ask them questions about the present, some would rightly accuse me of necromancy or witchcraft. But run the works of John Calvin or Thomas Watson through an AI to have a conversation with them, and it’s suddenly fair game. This may sound like a tenuous leap, but there is a connection. A New York Times article notes the surprising popularity of using AI to talk to dead loved ones.15 How is animating old photos or creating videos of dead loved ones not an abomination? Let the dead remain dead. We need not indulge in superstition or magic to try to bring them back.
As Phillips and Pauling suggest, many of the purported blessings of AI are thinly disguised attempts to transcend human limitations. Maybe we weren’t supposed to have access to every book ever created. Maybe knowing three great books extremely well is more beneficial than AI summaries of all human knowledge. We need to reject the shallow thinking of “green line must go up” and embrace the human scale. The four technologies mentioned above are converging to make man something other than man. “Human nature,” Lewis warns, “will be the last part of Nature to surrender to Man.”16 And when the transhumanist dream is complete, “Man’s final conquest has proved to be the abolition of Man.” We are rapidly being presented with the choice to add these technologies to our person (and become something less than human) or try to live in a world in which other cybernetic beings are far outpacing our natural abilities.
It is the same promise to be as gods and escape death. Technology promises to transcend human limits and make us more than human. Cyborgs are supposed to be better in every capacity, but they are not human. Such is the paradox described by Boethius and many others. When man seeks to become more than he is, to live as god, or clings too tightly to things beyond his measure, he lowers himself to the level of a beast. Technology promises liberation, but it enslaves. It promises greater capacity, but lessens our ability to appreciate the good in life. Technology makes commodities abundant and easy, but destroys our ability to enjoy them. Man desires above all to be a friend of God. But unable to ascend to the heavens, he makes a god in his own image, trying to fulfill “an emotional desire to meet and speak to the most intelligent entity they’d ever met.”17
In the end, AI becomes the same as the idols of old. We fashion a god out of metal and silicon, then bow down and worship it. We must heed the biblical instruction that all those who worship idols become like them—blind, deaf, mute, impotent. Are we not already seeing this reduction in capacity by the users of AI? We have more summaries and information than ever before, but are less able to use judgment—the moral art of prudence.
In St. Paul, we have the dual instruction: we know an idol is nothing (1 Cor 8:4), but that we also ought not to eat food offered to idols lest we commune with demons (1 Cor 10:20-21). That is, the physical statue of wood or stone is nothing, and there is no God but the Triune God. Yet even so, demons and spiritual beings can be considered “gods” in a lesser sense and may use the physical object as a vehicle or locus for their presence and influence. These “gods” may masquerade as the gods of the heathen, receiving their worship and sacrifice. And Paul thinks that there is a danger of a real, spiritual participation and communion with them through idols.
Might AI present a similar opportunity for spiritual forces to manifest themselves? Some prophetic voices, such as Paul Kingsnorth, have in recent years made that argument quite persuasively.18 It is much like the Head in That Hideous Strength: the scientists think it operates only on correctly regulated pressures and fluids, but those really in the know march right in without sterilizing or switching on the machinery. The technocrats believe they are creating a super-intelligence. Might they merely be the vehicle for something beyond their own creation? While we can say that AI is nothing but a computer program, might it also serve as a vehicle for the demonic?
In his survey of the Bible’s description of technology, Phillips describes the ancient near east and its use of automatons, which became vehicles for higher intelligence.19 He then cites researcher Nick Hinton, claiming, “Artificial intelligence isn’t actually artificial. Rather, these are invisible intelligences who communicate with us digitally. Technology is just the channel they use for reaching us.”20 Communicating with supernatural powers through technology is not an isolated experience. We must be exceedingly careful in adopting these technologies, lest we partake of the cup of demons.
Phillips and Pauling’s book touches on many areas of interest and importance for those considering technology and its impacts. Even though not every locus may be a topic of direct conversation with students, a broader view of technology ought to inform the school’s policies and stance towards tech at all levels. It seems increasingly clear that “Luddite” ought to be worn as a badge of honor for the classical educator.
Featured image used courtesy of Pavel Danilyuk
Notes
- See the Old Voices section in this issue. ↩
- Robin Phillips and Joshua Pauling, Are We All Cyborgs Now? (Basilian Media Publishing, 2024), 317. ↩
- Nataliya Kosmyna, Eugene Hauptmann, Ye Tong Yuan, Jessica Situ, et al., “Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task,” arXiv, August 12, 2025, https://arxiv.org/pdf/2506.08872v1. ↩
- Cyborgs, 318. ↩
- See Marshall McLuhan’s The Medium is the Message. ↩
- Cyborgs, 49. ↩
- Ibid., 50. ↩
- Ibid., 53. ↩
- Ibid., 418. ↩
- C.S. Lewis, That Hideous Strength (New York: Macmillan Publishing Co., 1946), 177. ↩
- Quoted in Cyborgs, 148. ↩
- Ibid., 149. ↩
- Ibid., 156. ↩
- Ibid. ↩
- Rebecca Carballo, “Using A.I. To Talk to the Dead,” The New York Times, December 11, 2023, https://www.nytimes.com/2023/12/11/technology/ai-chatbots-dead-relatives.html. ↩
- C.S. Lewis, The Abolition of Man (San Francisco: HarperCollins, 1995), 77. ↩
- Josh Schrei, quoted in Cyborgs, 125. ↩
- See Paul Kingsnorth’s feature in the Nov/Dec 2023 issue of Touchstone magazine: https://www.touchstonemag.com/archives/article.php?id=36-06-029-f. ↩
- Cyborgs, 422. ↩
- Ibid., 444. ↩
