In Blade Runner, Ridley Scott’s 1982 adaptation of Philip K. Dick’s Do Androids Dream of Electric Sheep?, people have a hard time deciding whether they’re conversing with robots or other people. In this alternate version of 2019 California, you need a person with specialized training, the eponymous Blade Runner Rick Deckard, himself balanced on the knife’s edge between man and “replicant,” to detect genuine personhood in something that for all intents and purposes looks and acts exactly like a human being. Among other things, Deckard uses an examination called the Voigt-Kampff test, a battery of questions meant to gauge empathic and emotional reactions to different unsettling scenarios. The nature of the Voigt-Kampff test entails that there is no clear-cut way to determine whether you’re listening to a human or an android. A good Blade Runner must rely on their gut just as much as their AI detector.
We Are All Blade Runners Now
That is, of course, the relevance to our own situation as educators in our own real-life AI dystopia. The tools don’t reliably work, most of the population lacks the training to distinguish between a person and a product, and we’re all at the mercy of increasingly powerful corporate entities with a vested interest in preventing us from telling the difference.
We are all Blade Runners now. We teachers find ourselves, just like Harrison Ford’s Deckard, sitting at a desk and searching for a soul, trying to sense whether there was ever a person on the other side of the page we’re reading. Some of us (I seem to be one) have developed a facility for that process. We feel some inchoate sense of absence in what we read, but exactly how and why can often remain a mystery.
AI, it cannot be stressed enough, is unable to think, understand, decide, or imagine, though it can give a powerful illusion that it is doing so. What was once a thought experiment broached in the 70s among philosophers of mind, the philosophical zombie, has now become a real and very pressing issue. A philosophical zombie is a being that has no consciousness or actual self, yet behaves indistinguishably from a being that does possess such qualities. If you tell it a joke, it will laugh; if you pull its hair, it will yelp. If you ask it what mental experiences or emotions it is having, it will give you an answer. But there is nobody behind these responses. This is the situation we face with generative AI.
There are alleged signs of AI-generated writing – the overuse of em-dashes, such as these, and reliance on particular verbs like “delve” – but it is not always possible to pick up on them. And as AI becomes more devious, such markers become simply targets in the war of escalation between chatbots and chatbot detectors. More on this below. I am interested here in asking whether we can refine that gut-level response to soullessness, preserving a qualitative sense of the humanness of writing without reducing it–mechanically–to quantitative flagging of particular attributes.
There is a method to this, but it requires a literary ear honed by listening to real people expressing honest thoughts.
Fortunately, AI currently has a weakness that philosophical zombies don’t. Text-based AIs are large language models (LLMs). They are trained on a particular set of data, analyze it, and then generate new responses by extrapolating the statistical likelihood of word chains from their datasets. Even their makers are not sure how they can do what they do (a terrifying thought), but for our purposes, we can think of programs like ChatGPT as incredibly powerful versions of the predictive text features that you use in your messaging apps. In short, and perhaps at risk of oversimplification, an AI is programmed to spit out the most expected words in the most expected order.
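For readers who want to see the predictive-text analogy concretely, here is a minimal sketch. It is emphatically not an LLM: it is a toy bigram model, built on an invented three-sentence corpus, that always emits “the most expected words in the most expected order.” Every name and the corpus itself are illustrative assumptions, but the principle of chaining statistically likely next words is the same one, scaled down by many orders of magnitude.

```python
# Toy bigram predictor: an invented corpus and illustrative names,
# showing the "most expected next word" principle in miniature.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat ate the fish ."
).split()

# Count which word follows which word in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_expected(word):
    """Return the single most likely next word after `word`."""
    return follows[word].most_common(1)[0][0]

def generate(start, steps):
    """Greedily chain the most expected word, `steps` times."""
    out = [start]
    for _ in range(steps):
        out.append(most_expected(out[-1]))
    return " ".join(out)

print(generate("the", 4))
```

Because the model only ever picks the likeliest continuation, its output is maximally unsurprising: grammatical, smooth, and dull. Real LLMs operate on vastly larger contexts and add controlled randomness, but the underlying move, extrapolating probable word chains from a dataset, is the same.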
This is why reading AI responses feels so straightforward and easy. It will rarely surprise you in its choice of adjectives, its use of striking imagery, or its deployment of a novel metaphor. Instead, we experience a sensation of dullness and smoothness. Our eyes glaze over the words, and our brains switch into a comfortable passivity. Rarely do we find ourselves jostled by AI writing. The real world is spiky and unexpected, and real people’s thoughts are not always clear to others. Sometimes the sentence is awkward, and the adjective is not merely surprising but clunky. Therein, nevertheless, lies its life.
How do real people write? Can we reduce Ruskin’s prose style to a statistical map of his word distributions and sentence lengths? Can we imitate Austen’s dialogue by analyzing and tagging topics and themes? To an extent, perhaps, but not truly. As with most people who have developed a real competency for writing, the best way of training is by reading a whole lot of stuff–creating what we call an ear for language. Human style is an emergent phenomenon, arising from but not reducible to particular rules of grammatical construction and selection. Students have a style too: one which is in the process of development, and with which experienced teachers become quite familiar.
The LLM-zombie is just off, somehow, and merely because we cannot point to a particular feature, that doesn’t mean we can’t tell. Perhaps, after all, when we say that AI writing is soulless, that’s exactly what we’re discerning. There is no spiritual communion with another mind through these words.
That’s hardly going to hold up in an academic dishonesty meeting, I expect. The blade runner’s gut might tell them where to press harder, but without a knowledge of the capacities of current AIs and the ways to test whether a student’s work is their own–like calling suspected cheaters to your office for an impromptu oral examination–it remains insufficient. In our war against machines that their makers prize as being indistinguishable from real people, we need to know our enemy very well indeed.
AI is a Predator
“War” is the right word, I assure you. I need not rehearse the rise and triumph of AI, or the way in which educational institutions and companies have enthusiastically adopted it, fearing being left behind. It is now directly integrated into Canvas, for instance. But you may not have kept abreast of all of its developments, or how specific models now actively prey upon students.
It was once the case, in the ancient past of 2022, that AI could not generate academic citations for the content it produced. Now it can. Some of those citations are hallucinated (i.e., made up), but many are not, especially if you specify which ones to use. It was once the case that a middle-grade student would turn in AI-generated work that was clearly far beyond their own capabilities. Now, the AI can adjust its syntax to fit the student’s age and grade level. It was once the case that teachers could spot an assignment that did not match a student’s unique writing voice. Now, AIs train themselves on a student’s past writing and adjust their vocabularies accordingly.
To a greater or lesser extent, these “advancements” are pawned off as helpful and constructive. Caktus AI’s front page assures students that they can “Craft A+ essays in seconds,” and “Get instant answers to any question.” It is deliberately packaged to play on the anxieties of students and portray itself as a compassionate helper. “Stop stressing over deadlines and complex topics,” it reads. “Caktus AI is your all-in-one platform, trained on millions of academic papers to help you research, write, and learn faster than ever before….Unlock your potential.” Compare The Homework AI’s evident concern with human dignity, sadly wasted by teachers assigning effortful work: “Your time is valuable and we care about it,” they tell us. Never mind that this focus on reducing time also reduces education to the exchange of inputs and outputs, and that whatever metaphysic underlies this sort of potential is clearly an incoherent one.
Every one of these programs is an exercise in hypocritical doublespeak meant to soothe consciences while stoking the impulse toward instant gratification. “Is it safe to use Caktus for schoolwork?” the website’s FAQ asks. The answer? “Yes. Caktus is designed to be a powerful academic assistant. We encourage using it to generate ideas, create outlines, find sources, and understand complex topics. Our Humanizer tool also helps ensure your final work is undetectable by AI checkers.” It is safe, then, not because it is ethically unproblematic, but because they promise you won’t get caught.
StealthGPT assures us that it has no intention of helping you cheat. “Stealth AI is designed as an AI writing assistant to help improve text quality and enhance readability. It is not intended for academic dishonesty or circumventing plagiarism detection systems.” Right above this disclaimer, however, lurks the promise, “Write undetectable content with citations in our robust essay writer.” It’s called StealthGPT, after all. The entire brand is predicated upon deceiving readers into thinking you’ve done the work yourself. “We beat every AI detector,” they boast, along with a long list of anti-plagiarism software that they have bypassed. This is endemic to the industry. Ryne.ai enthuses, “Bypass AI detection easily (yes, Turnitin too!).” It serves over 2 million users.
Again (and note the misplaced comma), “StealthGPT is completely opposed to students using AI writing to cheat, [sic] our service is meant to be used in line with the limits of the school which any of our potential users attend. Plagiarism isn’t a concern as all text is unique and not taken from other sources unless prompted to do so.” Used within the school’s limits, but deliberately designed to evade the school’s detection software. And, good news! This isn’t plagiarism because you aren’t taking your content from other sources, except, obviously, from StealthGPT.
Lest you feel comforted that the grammatical error indicates a real person wrote the website copy, note that this text is most likely also AI-generated, including deliberate typos or inconsistencies to evade detection. Their official blog features such howlers as “enemy’s” instead of “enemies.” Honestly, if I’m wrong about that, then my point about the harms of such programs is even further strengthened.
StealthGPT even includes a feature that allows you to take a photo of your paper worksheet and have the answers generated for you. If your school doesn’t already have a no-phones policy (why in the world not?), then a brief lack of vigilance can mean your students are snapping pictures of their in-class assignments and cheating right under your nose.
Even former academic standby Grammarly has now joined the ranks of the enemy. Haven’t visited Grammarly.com recently? It now reads: “You think big. We’ll take care of the details. Your big ideas get a breakthrough when you write with Grammarly. Work with AI that gets what you want to say and brings your writing from first thought to final dot.” If you’re still recommending Grammarly to your students as a writing aid, know that they hear you recommending the use of generative AI. It can even helpfully adjust its tone to match the expected audience.
Older anti-plagiarism models like Turnitin rely on matching submitted content to pre-existing content present in their database–something which is close to useless for today’s modes of cheating. So, whether your campus software is flagging them or not, your students are using these tools. In every section of every one of your classes, this is happening right now. It occurs among the students being trained for the pastorate at my own university, and in every other program of every university everywhere.
Original Sin and the (Human) Nature of Schoolwork
Now that we’ve painted the situation as sufficiently dire, we may ask what might have prevented it, and what we might do moving forward. The answers to both of these questions lie in a Christian account of human nature.
First, nothing could have prevented it, except a different outcome in a particular garden a long time ago. Human beings are fallen creatures, inclined away from the good since birth and disposed toward sin. That includes our students, even as Christ, their parents, and their teachers all do their best with them. What should we do when creatures slanted toward sin find an easy opportunity to indulge it?
All human beings, Aristotle tells us, desire to know. That seems optimistic nowadays. Some human beings are fleeing from knowledge as fast as they possibly can. What strikes me as more assured is the maxim that all organisms seek the most efficient path to achieve their goal. Now, sometimes the goal is difficult. There are no ‘easy’ paths to the surface of Mars, for example. But some are easier than others, and nobody deliberately chooses the harder path unless difficulty is itself the point. If your goal is to climb the most difficult face of Everest, you will still take the most efficient path along that face.
The Fall does not remove our humanity; it disorders our loves. As such, human beings still desire happiness and seek to reach it by the most efficient path. The problem is that we are so often mistaken or willfully obtuse as to what happiness really means and which path is really best.
What is the goal of most students today, at any level? For most, it is not education but graduation. They want to walk across a stage and be handed a credential that will unlock a desirable job, which is most likely desirable because it is financially lucrative. The process of learning, of being shaped, is an extraneous spandrel, an inconvenience that must be surmounted. What, then, is the most efficient path to that goal, the goal of graduation? Today, it’s AI. A student can easily use AI to achieve their intermediate goal, a good grade, which in turn secures their ultimate goal, completing school.
Humans have an inclination to lawlessness because we are all born with original sin. Sin is lawlessness, 1 John 3:4 tells us. We are all natural consequentialists. An act is only wrong if it is harmful, and it’s only harmful if I get caught. As such, just as in the garden, the low-hanging fruit of an instant, effortless A is a real temptation. This is especially the case when the digitization of every level of education enables AI to work seamlessly with the way students compose and submit their assignments.
We might categorize this as a problem of moral luck. Philosopher Thomas Nagel writes:
What we do is … limited by the opportunities and choices with which we are faced, and these are largely determined by factors beyond our control. Someone who was an officer in a concentration camp might have led a quiet and harmless life if the Nazis had never come to power in Germany. And someone who led a quiet and harmless life in Argentina might have become an officer in a concentration camp if he had not left Germany for business reasons in 1930.1
Our students find themselves thrown into an educational atmosphere in which they are encouraged by authority figures to meet their goals through easy, fast, effective, and seemingly harmless use of artificial intelligence. They are culpable for their choice of academic dishonesty, but they are also following their own most basic tendencies in an environment that almost irresistibly encourages them toward this vice.
We must, it should be stressed, resist the urge to begin to look askance at every assignment, narrowing our eyes in suspicion and wondering what nefarious new way of looking innocent our students have ferreted out. It is very easy, within the same educational atmosphere, to turn the classroom into a battleground between students and teachers–a Hobbesian war of all against all–that plays directly into the technological escalation already running rampant. Nevertheless, we cannot be naïve about the temptations our students face.
What is the solution? Let us engage in a little reader-response exercise based on canto 24 of Dante’s Purgatorio, in which Dante watches souls being cleansed from the sin of gluttony:
Beneath the tree I saw shades lifting hands,
crying I know not what up toward the branches,
like little eager, empty-headed children,
who beg—but he of whom they beg does not
reply, but to provoke their longing, he
holds high, and does not hide, the thing they want.
Then they departed as if disabused;
and we—immediately—reached that great tree,
which turns aside so many prayers and tears.
“Continue on, but don’t draw close to it;
there is a tree above from which Eve ate,
and from that tree above, this plant was raised.”2
I leave aside the comment about empty-headed children, but I note the image of these children (unformed, uninformed, impatient) begging for someone to simply hand them their goal. Here, a luscious green tree bearing succulent fruit confronts the hungry pilgrims, and we are told that it is a sapling of the tree of the knowledge of good and evil, which itself waits, now unrestricted and free to be enjoyed, at the top of the mountain in the garden of Eden. These souls are not yet ready to taste of the tree, because doing so would short-circuit the process of spiritual formation. But, importantly, God has decreed that this goal should be readily visible here and now, though seemingly unattainable.
We ought to imagine ourselves holding our diplomas high above the heads of our small and hopping students, provoking their longing. And just as these pilgrims realize that the only way to fulfill their hunger is to move further up the mountain, we ought to dangle that diploma until the student learns how to build a ladder to reach it. At some point, they should realize that they spend more effort trying to circumvent the process than they would by simply doing the work. At this moment, we begin to work along the grain of human nature, encouraging students toward the thing they love, but also helping them to realize the properly ordered way of achieving it.
We ought to strategically ensure that the most efficient path toward reaching graduation also just so happens to be education. We can’t save a student from all intellectual sin by changing our assignments. But we can prevent some. Keeping sweets out of the house doesn’t change my character, but it does make specific acts of gluttony less likely. We can construct a setting in which academic dishonesty is not so tantalizingly pragmatic. We can also, through the way we structure these assignments, build particular habits of intellectual virtue by making good academic practices the most efficient path.
Make AI the harder choice
The most obvious and therefore most commonly suggested method of ensuring that students turn in genuine work is returning to in-class exams, essays, and assignments. The flipped classroom approach is well suited to these sorts of assignments. In a flipped classroom, the effort to understand happens at home, and demonstrating that understanding through a skill occurs at school. Now, what AI can do very easily is fake a skill. What it fundamentally cannot do is understand. It’s a zombie, remember?
Assignments are either summative evaluations meant to test comprehension or formative practices designed to develop particular skills. By relegating essays, worksheets, or other activities intended to foster independent critical thinking, synthesis, and problem-solving to the home, we are matching the practice in which AI cheating is most detrimental to the environment in which it is most likely to occur. In contrast, by relocating these elements to the classroom, we are doing the opposite. A good assignment requires apprenticeship, and it can be done in class, under the guidance of the teacher, removing most of the possibility of faking it. The most impactful practice is also taking place in the most secure environment.
In turn, passages, lectures, or lessons should be digested at home. This trains the student to understand material independently and to verify that knowledge through assignments, which they return to the teacher for guidance. Yes, AI can allow for dishonesty here, too, but it’s much more easily detectable. AI cannot and never will be able to magically transfer understanding into your mind. It can summarize some assigned readings or lessons, but (a) it’s often wrong, and (b) such fake knowledge will become very evident when the student is asked to demonstrate it on the fly in the classroom, apart from their device. If the knowledge is only surface-level, then by definition, merely scratching the surface will expose this. If the student is working hard enough at cheating with AI on their own to demonstrate sufficient understanding for your in-class tech-free assignments, then at the least they’ve learned something along the way, and at best they may grow tired of it and realize they may as well do the real thing.
There are particular ways of encouraging this in your assignments.
For example, expand your rubrics to measure more elements of your desired outcome explicitly. My essay rubrics include six categories: length, number of sources, mechanics, structure at both the paragraph and essay level, textual support, and tone/expression. In other words, I’ve given students a very clear picture of what exactly will constitute a good paper. It will involve a defensible thesis, with paragraphs and sections that are logically connected in an order that makes the most sense. It will include clear topic sentences, textual evidence demonstrated by specific quotes with page-number citations to the assigned texts, the use of all assigned texts, and more. Measure the particular ways that you want students to engage with the material, focusing on ideas and not just mechanics. A one-paragraph summary from a chatbot will not enable them to meet these criteria, but strong active reading skills will. Again, this does not inerrantly eliminate all academic dishonesty, but it can hopefully curb it.
What about outside of the humanities? In the same manner, mathematics or chemistry problems should demand not only the correct answer but also that age-old standby, showing your work. If students are completing these assignments in class rather than at home, then sneakily copying so much information from a device under the table becomes much more likely to be detected, and therefore not worth it. Besides, they are better able to ask for your help when they encounter difficulty, and you should encourage such questions. Show them that you are pleased when they struggle for understanding, not when they breezily produce the correct answer. This resists the instrumentalization of knowledge. That is, after all, the point, and it brings us to perhaps the most important bit of recalibration we need to undertake.
Changing loves
I’ve argued here that most students will attempt to game the system to exert the least amount of effort to achieve their goal. Efficient work is not sinful; it’s a part of creaturehood. But a goal can be futile, and certain forms of efficiency can be immoral. Conversely, a goal can be eminently admirable, its efficient attainment a marvel of discipline and diligence.
Again, if the goal is graduation–getting this seemingly pointless school thing out of the way–then AI will seem very attractive, and it will be our job to try to problematize that path and hopefully encourage those students to fall in love with real learning along the way. An ounce of prevention is still worth a pound of cure, though.
Our most efficient path, and a healthier one as well, is to focus our efforts on changing our students’ loves and, therefore, changing their goals. Do we discuss with our students why simply securing a well-paying job will not satisfy them? Do we explain how education helps us build our own excellence as human beings and cultivate a delight in the world? Do we detail how the work we ask them to complete is not given to ease our own deep yearning for assignments to grade, but to provide them with training in particular skills? Do they know why those skills are valuable? To return to Dante, we ought to be painting a picture of the paradise of virtue at the top of the mountain, the true fruit that they don’t even know they’ve been longing for all along, where we can say with the teacher Virgil:
Await no further word or sign from me:
your will is free, erect, and whole—to act
against that will would be to err: therefore
I crown and miter you over yourself.3
It can hardly be put any plainer than this: the true goal of life is the freedom to be holy, and it’s hard to achieve that freedom if you’re beholden to a machine to think.
