Apple Pie and Rocket Ships: A Hopeful Vision of Life with Artificial Technology

January 7, 2026

This article is an adaptation of the final chapter of my forthcoming book, Saint Antony’s Guide to Surviving the AI Apocalypse. What follows was written with parents in mind, but I think it applies to educators—particularly classical educators. I hope you find it useful.

Not so long ago, families lived in the future.

Perhaps the “campiest” show on television in the 1960s was Lost in Space. Okay, maybe not the campiest—Gilligan’s Island wins the prize. But Lost in Space had castaways too, only they were marooned on a deserted planet instead of a deserted island. And it had a spaceship (the Jupiter 2), a robot, and lots of cool gadgets. It even had a dad, a mom, and kids. I watched it mainly for the gadgets, not the family-friendly storylines. But if I were to watch it today, it would be for the family fun, camp and all.

It wasn’t the only future-family show back in the day. There was The Jetsons—a cartoon on primetime that book-ended The Flintstones, a cartoon about a Stone Age family that lived in the town of Bedrock. But what these future families had in common was that they were relatively traditional, even with the space-age tech. Family-centered shows set in the future, past, and even the present disappeared somewhere, and I’m not entirely sure why. My guess is it had to do with the drift of the culture and changing tastes.

Since then, science fiction has grown steadily grim and despairing. Grim isn’t entirely new—it’s been central to science fiction going all the way back to Frankenstein, maybe even the myth of Icarus. But despair is new. In the old stories, a baseline of everyday goodness wins in the end, every time.

Today, science fiction serves up a heaping helping of dread. Perhaps it’s because we suspect we can’t beat technology and that in the end it will either delete humanity or swallow us whole. The early 70s were a turning point. When it came to films, apes took over the planet, or a virus wiped out the human race, or the human population grew unchecked, and ecological collapse followed. And in each film, the last man standing (and screaming) was Charlton Heston.

And the nightmares might be coming true. Today, the best technologists offer is a merger with the machine, a post-human life that does more to degrade us than enhance us. And it all gets dressed up in the drag of inevitability. The future will be something like Star Trek’s Borg or Blade Runner or, heaven forbid, The Matrix.

But is it inevitable?

Of course not. But it will only be prevented by something more powerful than economic forces and the relentless march of technology.

Christian Humanism

I came across a neologism recently: “fundamentalist humanist.” It was in Ray Kurzweil’s book, The Singularity Is Nearer. I chuckled when I read it. I’m old enough to remember A Secular Humanist Declaration, a cranky declamation signed by the likes of Isaac Asimov and B.F. Skinner in 1980. Were they “fundamentalists” too, seeing as they believed in something called “humanism”?

I doubt it. Secular humanists didn’t really believe in humanity; they just disbelieved in revealed religion, especially Christianity. They were more against that than they were for humanity.

I, on the other hand, believe in both humanity and Christianity. My guess is Mr. Kurzweil has people like me in mind with his aspersion.

The book of Daniel predicted that knowledge would increase. And it is, exponentially. At least on that matter, Scripture and Kurzweil agree.

But there’s knowledge and then there’s the other knowledge—things we have a right to know, and things we don’t. When Daniel says knowledge will increase (Daniel 12:4), I think he means both kinds. We don’t make distinctions like this much anymore, but there was a time when that distinction made the difference between life and death.

Today, curiosity is a virtue—even an unqualified good. But we should remind ourselves that there was once a saying about cats. And we had stories about boxes that shouldn’t be opened, and fruit that shouldn’t be eaten—because sometimes knowledge can kill you.

The worst-named movement in human history made those stories seem benighted, but it was the Enlightenment that was in the dark. Some things can’t be known, and other things shouldn’t be. Getting back to the dead cat, what killed it was curiosity.

James tells us that wisdom has two sources—one from above, which is pure and spiritual, and another from below, which is carnal and demonic.1 It’s the second kind that will kill you.

This distinction is even evident outside Scripture. Plato said there are things we shouldn’t know, not because knowledge is evil, but because we are. The story he tells to make his point is about the “Ring of Gyges” in The Republic. In it, a shepherd finds a magic ring that makes the wearer invisible (sound familiar?), and according to Plato, the man who found the ring was corrupted by it. Actually, it would be fairer to say that he was already bad, but the ring made it possible for him to get away with bad things. In the end, he killed a man, stole his possessions, took his wife, and made himself king. He used the ring to learn secret things, and then he used what he learned to his advantage. His knowledge was asymmetric—he knew without being known. (This reminds me of the titans of the “information economy” who know all about us while remaining hidden.)

But it is the story of Pandora that is the most apropos for our moment. In that story, Pandora knows that many evil and terrible things are locked in the infamous box given to her husband by the gods. But curiosity was the itch that she just had to scratch. And like her, we know that there will be a host of evils that leap out once we fully open the AI box, but it doesn’t matter. If we don’t, the Chinese Communist Party will.

Even so, someone needs to speak up for limits. I suppose it will have to be me.

Limits That Liberate, Freedom That Enslaves

While it is true that limits can be annoying and even stupid, the Libertarian fantasy of a world without limits would inevitably produce something far worse than unnecessary laws; it would lead to chaos and, ironically, slavery.

Booker T. Washington, in his book, Up from Slavery, had something surprising to say about slavery.

Among some African Americans, he’s controversial because he made a case for bottom-up development, beginning with trade skills and manual competence, and only then moving on to professions with social prestige. W. E. B. Du Bois famously promoted a more ambitious approach, aspiring to the heights of social status right from the start. As you might tell from my tone, I favor Washington’s approach, not just for ethnic minorities, but for everyone. And the reason is something Washington observed.

Washington noted that over time, slave owners grew incompetent when it came to managing their own affairs, becoming slavishly dependent on their slaves. While it is true that slavery degrades people, according to Washington, it does that to everyone associated with the institution—even slave owners. In the end, they couldn’t do the simplest things—cook food, mend clothes, or paint fences. They’d lost the ability to care for themselves.

Will we fare any better when we rely on omnicompetent machines to do everything for us? Will it even occur to us that something is wrong, that we are actually doing violence to human nature by depending on machines to do things we ought to do for ourselves?

Saul Alinsky, in his Rules for Radicals, called it the “Iron Rule”: “Never, ever do something for someone that he should do for himself.” Alinsky—for all his faults—had this one thing right: doing things for people that they should do for themselves undermines self-reliance and makes them dependent. There is no shortcut to competence. You learn to write by writing, and you learn to think by thinking—not by having people or machines do those things for you. It’s as old as Aristotle because we’re talking about human nature—and there’s no hack for getting around it. We’re on the cusp of Idiocracy, a generation of pasty slugs who can’t “adult,” let alone command respect, because we’re about to give over our agency to machines.

The studies are confirming it: AI makes you dumber. One published in the social science journal Societies, titled “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking”2 by Michael Gerlich, documents an enervation of critical thinking when people “offload” difficult mental work to artificial intelligence. The old saying, “Use it or lose it,” applies to everything, even intelligence.

While it is true that ever since the calamity in the garden, there has been no end to the making of laws, let’s remember that the reason there was a calamity in the first place is because people were not content to live with just one simple law. Adam and Eve believed it kept them from being “everything they could be.” But the original law kept them human, and that was a good thing.

To remain human, we will have to establish limits for ourselves and for our machines. And those limits must be as inviolable as the command not to eat of the tree. They should read something like: “Humanity is the image of God. You shall not replace him, nor do anything for him that he should do for himself.”

It won’t be easy. Applying limits to Artificial Intelligence will call for intelligence, imagination, and discipline. And don’t expect Silicon Valley or Washington, DC to do it for you.

Apple Pie and Rocket Ships

How should we apply limits to our machines (and ourselves!) if the prophets of artificial intelligence have it right? Here’s my recipe. I call it “Apple Pie and Rocket Ships.” There are a couple of ways to cook it up. I call the first “Neo-Amish World with an AI Twist,” and the other “Butlerian Jihad-Lite.”3 Consider the short openings of two unfinished stories, written in the spirit of Ray Bradbury, the author who I believe best exemplifies my ideal of Apple Pie and Rocket Ships.

Here’s how the first story begins.

– – –

Neo-Amish World with an AI Twist

“I think it’s time to tell Johnny,” Dad said.

“I don’t know, I’m not sure he’s ready. Remember what happened to Sammy when the Millers told him too soon,” Mom said.

“Or when the Gills waited too long. Their boy is lost to them.”

“I wish we didn’t have to tell children anything. Why can’t we just live the old way?”

“You know,” he said, looking out the kitchen window.

It was a beautiful morning. A field of wheat stretched behind the house to the horizon. Mist rose as the last of the summer sun burned away the dew. Their son, Johnny, was in the fenced area between the house and the wheat, bringing the eggs in from the coop. Their black Lab, Maestro, walked beside him. They seemed to be in conversation. Johnny looked to be about 9 years old.

“I’ll tell him today,” Dad said.

Mom wasn’t pleased, but she didn’t say anything.

The conversation with Dad changed Johnny’s life.

He learned he’d be going to school in the coming year, and he’d meet some new kids, and he was a little apprehensive about that. Until his talk with Dad, he’d been homeschooled, mostly by Mom, though Dad did take him out to the shop to help with things, mainly sharpening plows or oiling the old tools and machines. There were other things he did with Dad, like math. He liked it and had a knack for it. Mom taught him reading and penmanship. She’d have him make long, loopy letters, being careful to stay between the lines. It was important for him to practice. “Fine motor skills develop in childhood,” she’d said. That’s also why he practiced piano every day, even though he wasn’t very good at it. She wanted him to read music and to learn to sing. He did enjoy singing, especially with the kids at church. They were even learning parts.

But by far Johnny’s favorite subject was science, and again, mostly Dad helped with that. And the conversation had been about that—that, and the back of the shop, the part that stayed locked.

Labor Day finally arrived, and baseball season was in its final stretch. They’d gone to see the Mud Hens the night before as a special treat. And on Tuesday, after his chores were done, he had breakfast with Dad, Mom, and his two kid brothers. Mom made waffles, but she looked sad, and everyone could tell something wasn’t right. His brothers usually fought over the last waffle, but they let him have it this time without any fuss. Mom and Dad didn’t look at each other much. Finally, Dad glanced at the clock; it was getting close to 9:00. He got up and said, “Let’s get going.” Johnny looked at his mom, but she was looking down, and he thought he heard her sniffle. Then she got up and left the kitchen.

He wanted to follow her, but Dad was waiting. His face was set, and Johnny knew the look. He’d better not ask.

As they walked to the shop, they went around the barn, which was mostly empty since he’d let the animals out earlier to graze. Things were so quiet that the crunch of their boots on the gravel path seemed louder than it should.

When Dad flipped the switch in the shop, fluorescent tubes buzzed on, spilling blue light over everything: old benches and old tools strewn everywhere. It smelled of wood and greased metal. They wound their way to the back and came to a door.

Johnny had never been on the other side of it. When he saw light coming from under it, he knew Dad was working on something behind it, but he’d never actually seen him go in or out. That’s why the electronic lock surprised him a little. It lit up when Dad put his thumb on it. Then the latch clicked, and the door swung inward. Light came on, not like the fluorescent lights; the room itself seemed to radiate. Then Dad stepped through, and Johnny followed. And they were greeted by voices Johnny never dreamed of hearing. What he saw when his eyes adjusted made him catch his breath.

– – –

Notice the buildup. What does Johnny see, and where does the story go from here? I’ll let your imagination work on that.

But before you do, let’s see how I’ve front-loaded the story.

This is a story about the future, even though it seems to be set in the past. It reflects my belief that in significant ways, we’re going to have to preserve the past if we expect to remain human in a future filled with artificial intelligence. More about that in a minute.

Does Johnny’s family live this way because they’re weird? There are hints that they’re not alone. Other families are referred to who presumably face the same choices Johnny’s family does. They belong to a church and they attend a baseball game. So, in some sense, there must be a larger community that shares their way of life. They’re not off-grid, utterly on their own.

So, why do they live this way?

Is it merely because they like living in the past, or is it based on an informed choice? Evidence seems to indicate that a relatively low-tech way of life makes for the best outcomes when it comes to raising children. That’s because human beings were made for the analog world, not a digital one. Smartphones, for instance, have proven harmful to the psychological and social well-being of young people.4 This is why analog is the Apple Pie we must keep making for ourselves if the future is going to remain human.

Still, AI looms in the background of the story. And sacrifices must be made to it. The brightest and the best are tithed to serve as gatekeepers and watchdogs so that Apple Pie, the symbol of a human world, remains undisturbed. But why keep AI at all? I can imagine many reasons: healthcare for one, or military defense, or banking, or scientific research. AI serves as a sort of envelope helping to preserve an intentionally analog way of life—but with modern, ever-improving digital technology in the background, ready to intervene when necessary. But the biggest reason AI is still around is that they can’t turn it off even if they wanted to. It won’t let them. In some sense, there is an uneasy apartheid in this imagined future.

I admit this is far-fetched, but it’s intended to be a thought experiment based on two convictions I’ve already pointed out: first, analog lifestyles are healthier than their digital alternatives, and second, artificial intelligence is here to stay, but we will need to limit it somehow if we’re going to live with it and not be co-opted by it.

So, my first story separates AI from daily life, but is there another way to approach it? Yes, but it might seem even more far-fetched because it calls for discipline—discipline by people, and discipline by artificial intelligence—and discipline even by the people behind it.

Let’s return to Johnny and his family. Here’s another way the story could go.

– – –

Butlerian Jihad-Lite

It was just before the time he’d have to get up. He was in an old-fashioned bed, not the kind that suspended you in water like amniotic fluid (Cybers used those). That’s why he’d never seen one. “It’d make you as soft as a Cyber’s head,” Dad said when he asked about them.

Suddenly, the lights came on—not like a slow sunrise, as in some homes. Instead, one moment darkness, the next, light, like the first Day of Creation. He squeezed his eyes tight. He knew what would come next.

There were padded plastic footsteps on the carpet.

“Time to get up, Master Johnny,” Teddy said.

He pulled his sheet down and glared at the plastic and metal man. It was his robot, but he wasn’t supposed to call him Teddy. His proper name was TDX-5, not that he ever used it when Dad and Mom weren’t around. They were careful about such things. No emotional connection with machines was allowed in their home.

“I don’t want to get up,” Johnny said.

“But you must, Master Johnny. Your mother said so.”

Johnny rolled away from the robot; he didn’t expect what came next. Suddenly, his covers were gone, and a blast of cool air swept over him.

“Hey!” he said, sitting up quickly.

His covers were in one of the robot’s mechanical hands.

“Your mother says I’ve been too easy on you. She’s given me permission to be more proactive in your discipline.”

“Okay! Okay! You slave driver, I’m up! Cybers make friends with their robots, but you’ll never be my friend!”

“That’s the idea, Master Johnny. Now, out of bed. Breakfast is in 13 minutes and 17 seconds. You have just enough time to be there when the waffles are served.”

That got Johnny moving.

When he got to the kitchen table, his kid brothers were working on their penmanship. It was cursive today—they made long, loopy letters, being careful to stay within the lines that ran like railroad tracks across the real paper pages. No tablet and stylus for them—old-fashioned pencils and old-fashioned paper. Mom insisted.

The smell of waffles filled the air. Mom was at the counter with the waffle iron and a big bowl of batter.

“Alright, clear that away and set the table,” she said.

The boys’ personal robots sat still against the wall. There’d be no help. They knew to leave this sort of thing to the boys. Paper and pencils were quickly replaced with plates and utensils. Johnny made sure to get the maple syrup—the real stuff, not goop from a food printer. That was the way it was in their house; they raised some of their own food, and what they didn’t raise, they got from the church’s farm.

Dad walked in.

“Waffles? What’s the occasion?” he said.

“Johnny’s piano recital this afternoon,” Mom said.

Johnny felt his stomach turn. He’d forgotten. He looked at Dad for sympathy.

“Okay,” Dad said. “And after that, we’ll go see the Mud Hens. I hear they’ll be in the postseason if they win tonight.”

Johnny’s mood brightened, and he shoved a sticky wad of waffle into his mouth.

Just then, the lights dimmed, and the robots all hummed louder than usual.

“AI attack,” Dad said, mostly to himself as he looked at the light over the table.

“It’s a pretty strong one,” Mom said.

Then the lights brightened, and the robots went back to their regular hum.

“I think we’re okay,” Dad said.

Mom’s robot gave a happy beep. Dad’s bot was in the shop, and beeps were all Mom let her robot say.

“Good,” Dad said. “I’m glad I upgraded the AI.”

“What did the bad AI want?” Timmy, one of Johnny’s brothers, asked.

“Who knows, there’s always something.”

– – –

Okay, story over. That’s enough to work with. Let’s look at what I’ve packed into the beginning of this story.

Here, society has split into two ways of living with AI—full adopters, those who have themselves become “Cybers”; and partial adopters, those who remain Apple Pie humanists, as we see with Johnny’s family. My guess is that even the most reluctant people will be forced to adapt to life with AI, if for no other reason than making a living will require it. AI will be everywhere, as ubiquitous as smartphones are today. I suspect the days of the smartphone are numbered—they’ll be replaced by personal robots, as seen in the story, or with mobile gadgets that keep people physically plugged in nonstop.

I think smart families, like Johnny’s, will discipline themselves, and they’ll even use robots to help with that. A nonnegotiable tenet of Apple Pie humanism will be a clear line between humans and machines. The boundary will be based on some form of law, but a law that isn’t necessarily enforced by civil authorities.5 In my story, it’s Mom who is the most concerned with obeying that law and keeping the machines out of family life as much as possible.

Will it happen in the way described above? Who knows? But I am certain that something along this line will be called for if we’re going to preserve apple pie humanism in a world filled with intrusive machines.

Rocket Ships

We’re already enjoying the benefits of the technology, and we have been for a while. If you’ve got a smartphone, you’re using AI whether you realize it or not.

Here are just a few things that we’re already seeing artificial intelligence help us with that would be impossible without it:

1. Analyzing and summarizing vast reams of data. When 85,000 pages of documentation about JFK’s assassination were finally made public, within hours, it was summarized, and some of the juicier parts came to light. But how? Artificial Intelligence, of course. It is a particularly useful technology when it comes to finding needles in haystacks. But not only is it helpful when you know the needle you’re looking for, AI can help you find needles you never thought to look for. That’s because it sees patterns in things, and it can, in a manner of speaking, “guess” what you’d find interesting to know.

2. Sticking with its ability to envision patterns—even patterns of things that don’t exist, or at least that we have no direct experience with—AI can imagine new possibilities. Take the AI known as AlphaFold. It accomplished the herculean task of predicting the structures of some 200 million proteins, and it did it in a year, not decades. This is important for the life sciences and biotechnology generally, and who knows what may come from this, what drugs and what therapies might be developed as a result? Your life might be saved someday because of AlphaFold.

3. The material sciences are on the verge of a revolution that will change the very fabric of the things we live with and use every day because of artificial intelligence. New materials are being developed that have properties that will improve everything from air travel to underwear. For example, what if you could make a material that is as light as foam yet stronger than steel? You don’t need to because AI already has. And it didn’t find it in nature. Instead, it was made—built at the molecular level. And who knows what else is in store, perhaps self-mending clothing, or even automobiles—AI is already working on them.

Up to this point, I’ve described things that either already exist or might exist very soon, and while they’re impressive, we are still in the realm of the knowable. We have not yet touched on the mind-blowing sci-fi stuff that AI might make possible—time to move on to a couple of those.

4. Robots! AI is helping to make Robby the Robot a reality, and it’s even given new life to Isaac Asimov’s rules for robots.6 In my last story, I focused on the use of robots in the home and in the education of children, but the real significance of robots will be industrial in nature. We’ve had single-application industrial robots for a while, but they’re in the same category as the mechanical loom. They have narrow ranges of application. What we’re about to see is humanoid robots like “Teddy” in my story. And they’ll be everywhere. People will lose jobs to them, of course, but the best use of the robots will be for jobs no human could possibly perform. Since childhood, I’ve dreamed about space travel. And like Elon Musk, I’ve envisioned a colony on Mars. As a practical matter, that is a nearly impossible task for human beings to pull off. Living on Mars while the shelters go up seems suicidal and prohibitively expensive. Now, imagine robots of every sort, from single-function robots that use Martian soil to fabricate domed shelters, to humanoid robots that can communicate with and even repair and maintain them, and the impossible becomes feasible. Essentially, mechanical men would go where no man has gone before to prepare the way for us, whether we’re talking about outer space, the ocean’s depths, or even inner space, with nanobots moving through our bloodstreams. When it comes to nanobots, doctors could operate on patients remotely through those tiny machines—or perhaps AI would perform those operations autonomously, seeing that such a fine level of precision is simply impossible for humans to manage. Imagine swarms of nanobots targeting individual cancer cells, unlike chemotherapy, which poisons healthy cells along with the malignant ones. Just put them in a syringe and shoot them into the bloodstream.

And this is just the tech made possible by AI—we’re also on the verge of breakthroughs in energy production and biotechnology.7 Combine AI with these developments, and we might see a world of hyper-abundance and rapid change. But such a change would require a lot of wisdom and self-discipline for Apple Pie humanists to embrace.

Now for a final bit of good news and bad news. I need to end with this note of realism.

5. AI is transforming military hardware and force projection in surprising ways. Imagine drone swarms coordinated by it. If a thousand tiny drones attacked an aircraft carrier, how would it possibly survive? Scale things up just a little, and hundreds of militarized AI-coordinated drones could wipe out entire armies. Naturally, militaries will adapt, but they will all have to use AI to do so. In such a world, the cost of fielding a fighting force might fall dramatically. Force projection is changing, and small states will be able to compete with large ones like never before. In such a world, what advantage do large states retain? Perhaps a new fragmentation of the world will follow, and a world of city-states is the result—each dedicated to its own philosophy of the good life and existing alongside cities with diametrically opposed visions.

This little foray of the imagination has had the modest goal of showing how a general-purpose technology that simulates intelligence could occasion a transformation of the world in many unexpected and (perhaps) delightful ways. But the most significant developments lie beyond the reach of any human imagination because the ability to envision them will depend on developments that have yet to be realized.

The End

Eschatology is a fascinating subject, which explains why so many people are obsessed with it. And in Protestantism, particularly conservative evangelicalism, people tend to belong to particular camps, or interpretations of “last things.” As you’ve read this article, you may have wondered about me and which camp I’m in.

Before I say, allow me to state that people tend to squeeze the daily news or social trends into their preferred eschatological rubric, whether there’s a good fit or not. Even Jonathan Edwards did that. He had an optimistic eschatology, which meant he thought things were improving for humanity due to the spread of Christianity (particularly Protestantism) and the sovereign superintendency of God over everything from the weather to wars.

And this is why he was optimistic about science and its application to relieve human suffering and toil. He was even something of an amateur scientist who loved the natural world, particularly spiders, believe it or not. Sadly, it was his faith in the scientific enterprise that contributed to his early death.

Not long after he’d arrived in New Jersey to serve as the president of the college that we now call Princeton, Edwards volunteered to be inoculated against smallpox because of an outbreak of the deadly disease. The science of inoculation was still in its infancy, and sadly, the great theologian contracted smallpox and died.

With this in mind, I’m ready to tell you which school of eschatology I subscribe to.

I am a post-millennialist.

But I am a post-millennialist who believes that bad things can even happen to people with an optimistic eschatology. I believe science and technology can harm us, even though, generally speaking, they’ve made the world a better place to live.

When I’m in a sour mood, usually after I’ve heard some Pollyanna wax on about how the world is getting better and better in every way, I wonder if there were any post-millennialists in St. Petersburg in 1917, and whether they had any intimation of the 70-year nightmare that was about to begin.

Just because you’re a post-millennialist doesn’t mean progress is steady or predictable. History doesn’t flow; it jerks, and it is full of sad endings, sudden reversals, and apocalyptic moments, as when something unexpected appears. And that’s what an apocalypse is, by the way: a sudden appearing, a revelation.

The story of the world will end happily because in the end, its creator and redeemer will appear. It will be a revelation, an apocalypse. And everything will change, all of a sudden, and unexpectedly.8 “And we will be like him, because we will see him as he is.”9 And there will be a new heaven and a new earth, “…for the first heaven and the first earth (will pass) away,…”10

In the meantime, bad things continue to happen to optimists, even to those who believe in the sovereignty of God. And that’s why it’s good to prepare for anything: bad, good, or indifferent. William Cowper, the brilliant poet who believed in the sovereignty of God, often struggled with mental illness and despair. He even spent time in an asylum and considered suicide. This has something to do with the depth and power of his famous hymn:

God moves in a mysterious way his wonders to perform; he plants his footsteps in the sea, and rides upon the storm.

Deep in unfathomable mines of never-failing skill, he treasures up his bright designs and works his sovereign will.

Ye fearful saints, fresh courage take; the clouds ye so much dread are big with mercy, and shall break in blessings on your head.

Judge not the Lord by feeble sense, but trust him for his grace; behind a frowning providence, he hides a smiling face.

His purposes will ripen fast, unfolding every hour; the bud may have a bitter taste, but sweet will be the flow’r.

Blind unbelief is sure to err, and scan his work in vain; God is his own interpreter, and he will make it plain.

Classical educators, in the AI future we’re entering, we will need people who still know how to bake up apple pie humanism. Your work will only grow more essential with time. You shouldn’t fear obsolescence. We will need your help to keep us human.

Notes


  1. James 3:13-17 
  2. Michael Gerlich, “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking,” Societies 15, no. 1 (January 3, 2025): 6, https://doi.org/10.3390/soc15010006
  3. This is a tongue-in-cheek reference to Frank Herbert’s novel Dune and its backstory about a war between humanity and artificial intelligence known as the Butlerian Jihad. 
  4. Clare Morell, The Tech Exit: A Practical Guide to Freeing Kids and Teens from Smartphones (2025), and Jonathan Haidt, The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness (2024). 
  5. I’m not opposed to laws of this kind; but it’s wise to prepare for a world in which we get very little help from governing authorities. 
  6. Asimov’s three rules: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. 
  7. The very word speaks of a merger of man and machine.  
  8. 1 Thessalonians 5:2 
  9. 1 John 3:2 
  10. Revelation 21:1 

C.R. Wiley

C. R. Wiley is a Presbyterian minister living in the Pacific Northwest. He has written for Touchstone Magazine, Modern Reformation, Sacred Architecture, The Imaginative Conservative, Front Porch Republic, National Review Online, and First Things, among others. His short fiction has appeared in The Mythic Circle, and he has published young adult fiction. He is also the author of In the House of Tom Bombadil, The Household and the War for the Cosmos: Recovering a Christian Vision for the Family, and Man of the House: A Handbook for Building a Shelter That Will Last in a World That Is Falling Apart. He is one of the hosts of The Theology Pugcast, and he has been a commercial real estate investor and a building contractor. He also taught philosophy to undergraduates for a time. He is the Vice President of the Academy of Philosophy and Letters, and a board member of New Saint Andrews College.