There's been a lot of talk about how AI will replace writers, and the recent explosion of ChatGPT has only turned up the volume. As a writer and former writing teacher who has studied language theory and written two books on the craft (forthcoming), I'm in a reasonable position to address this—not definitively, but at least informatively. Am I biased? Sure, but so is everyone else speaking on the topic. We always speak from a vantage point, a perspective. This is mine. What I share in this article is deeply rooted in my belief in the Trinity, in what it means to be a person, and in how technology is always a better servant than a master. My short answer to the question—Can AI completely replace human writers?—is no. Never.
My argument rests on two pillars: what writing is and what writing requires. Toward the end of the article, I'll suggest potential dangers that could result from assuming that AI does exactly the same thing as human writers.
What Writing Is
Some years ago, I wrote an article for Themelios that set out my definition of writing. I've also written a shorter article on it over at The Laymen's Lounge. Here's my understanding: writing is a trinitarian, image-bearing behavior by which we mark the world with our presence. For non-specialists, I'm just saying that when we write, we are mimicking something God does. In creation, the Father marked the world with his presence by speaking the Son in the power of the Spirit. He made things by speaking. His sound led to substance. By doing so, he revealed who he is (Ps. 19:1-4; Rom. 1:20). In redemption, once again the Father spoke (sent) the Word (the Son) in the power of the Spirit. In doing this, he marked us for himself, sealing us in his name (John 17:11). In both creation and redemption, we are written. God used words rooted in his own will to bring about things outside of himself.
Writing is a trinitarian, image-bearing behavior by which we mark the world with our presence.
We mirror this as human writers. In writing, we use words rooted in our will to bring about things outside of us. We mimic God and mark the world with our presence. Writing, in its simplest form, says, "I'm here, and this is who I am!" That's why writing is deeply personal on several levels. Buried at the base of every written expression is a freely willing person (or group of people). But then, of course, I'm assuming something about what a person is.
Persons and Communion
Without going too far afield, I believe that persons are communion creatures. I follow the Dutch theologian Geerhardus Vos on this point, and you can read more about that in "What Does It Mean to Bear God's Image?" Communion creatures seek to mimic God in every possible way and only find peace, security, and meaning in a deepening relationship with him. We are, to put it differently, God-leaning. All that we think, say, and do depends on our use of gifts that God has given us, gifts that reflect the nature of God himself.
For writers, that means everything we produce is, in some way, an attempt to commune with God and others. We might be writing to solve a problem, create a world, address an injustice, offer encouragement, or teach from experience. But all types of writing assume: I exist, I have something on the inside worth bringing to the outside, and I believe this is meaningful. Since existence, communication, and meaning are all rooted in God, all instances of writing are tied to communion with him: either an attempt to deepen that communion or an attempt to flee from it.
Since existence, communication, and meaning are all rooted in God, all instances of writing are tied to communion with him: either an attempt to deepen that communion or an attempt to flee from it.
This is all that lies behind the words we read each day. But there are some particulars that set human writers apart from AI, and that has to do with what writing—an image-bearing behavior by which we mark the world with our presence—requires.
What Writing Requires: Choice, Responsibility, and Trust
There are three basic elements to every personal act of expression: choice, responsibility, and trust. The first two are internal, and the last is external (i.e., getting readers to trust you). Each of these elements presents trouble for certain uses of AI.
The linguist I studied, Kenneth Pike, wrote about how choice is basic to language (his classic text is Linguistic Concepts). We are always choosing—how we see the world, how we express ourselves in relation to it, what we focus on within it. Personal choice, bound to the concept of free will, is an important part of being human. And writers are making choices with every letter they type, every word they select, every paragraph they construct. That network of choices reflects the writer's personality. Among the many things that make up a personality are convictions, beliefs, and experiences.
And let's be crystal clear on this: it is impossible for a machine or AI to have these things. ChatGPT doesn't have a personality; it's a construct (from numerous minds). Nor does it have convictions, beliefs, or experiences. Essentially, ChatGPT is a pattern-constructor built on a massive corpus of language (here's a helpful, in-depth article on what ChatGPT is and how it works). It makes a series of word selections based on algorithms designed to predict the most statistically likely word sequences, drawing from a massive collection of texts on the web. Its creators have also added "randomizing" features to make its messages sound more natural. But the same thing is happening throughout: algorithms construct patterns based on real human usage.
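For readers curious about what that pattern-construction looks like in practice, here is a deliberately tiny sketch in Python. This is my own illustration, not ChatGPT's actual architecture: it counts which words follow which in a small sample of text, then generates new text by repeatedly choosing a statistically likely next word, with a "temperature" knob standing in for the randomizing features mentioned above.

```python
import random
from collections import defaultdict

# A toy corpus standing in for the "massive collection of texts on the web."
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which words follow which: a simple frequency table of human usage.
follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word, temperature=1.0):
    """Pick a likely next word; higher temperature adds more randomness."""
    candidates = follows[word]
    if not candidates:
        return None
    words = list(candidates)
    # At temperature 1.0 the draw is proportional to observed frequency;
    # raising the temperature flattens the odds, adding "authentic" variety.
    weights = [count ** (1.0 / temperature) for count in candidates.values()]
    return random.choices(words, weights=weights)[0]

# Generate a short string of text, one weighted draw at a time.
random.seed(0)
sentence = ["the"]
for _ in range(6):
    word = next_word(sentence[-1])
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))
```

Real language models use billions of learned parameters rather than a frequency table, but the principle is the same: every "choice" is a weighted draw from patterns of prior human usage, not a conviction or an experience.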
AI dehumanizes your message because it takes that message out of the hand of a particular person: you.
That's why I have a problem when people get excited about writing their blog posts with AI. I know ChatGPT can sound natural (and it's amazingly accurate!), and the messaging may look the same as your own. The problem is simple: in crafting the message, you're not the one making choices anymore. And this removes the "human" element from the expression. It's not just a matter of being more efficient; what actually happens is that AI dehumanizes your message because it takes that message out of the hand of a particular person: you. It also creates problems for responsibility and trust.
As we make choices, we take responsibility for them. We become accountable, for better or worse. Praise and blame need someone to land on. If a writer is keen on using AI to produce blog posts or something similar, the least I'd be comfortable with is that writer publicly acknowledging that AI is being used. That way, at least the responsibility for using the AI rests on the author.
But responsibility goes deeper than just being "on the hook" for your message. We're responsible for what we express because that expression reflects our personal development and perspective on something. As I noted above, behind a piece of writing lies an author's perspective, but it's very difficult to engage with that perspective or treat it fairly if it's just an amalgamation of mechanical choices based on patterns of use. There's not a clear who behind the what. And that makes communication extremely difficult.
Communication is an attempt at mutual understanding, but we do this by sharing what Kenneth Pike called our "image of the world" (Rhetoric: Discovery and Change, p. 25). In other words, we offer others our perspective, our vantage point. And then they compare their own vantage point with ours. The end goal is to see someone else's image of the world. But how can a reader do this with code? ChatGPT can present ideas to others (pulling from textual patterns in its database), but it cannot present an image of the world. Images of the world only belong to image bearers of God.
ChatGPT can present ideas to others (pulling from textual patterns in its database), but it cannot present an image of the world. Images of the world only belong to image bearers of God.
In our polarizing, tribal context in the West, we have enough trouble communicating as it is without removing persons from the direct process of communication. We are responsible for sharing our image of the world, our perspective on a topic, and doing so will allow for more authentic communication than we could ever achieve with algorithms . . . because algorithms aren't people with convictions and experiences and passions.
Lastly, trust is an often-overlooked component of communication, especially in a digital era. People trust other persons, and any "success" an author has with a particular message is the result of readers trusting that author—regardless of whether trust is really warranted in a given case.
Trust is a personal investment in the character of another.
Why is trust important, though? And couldn't people "trust" the messaging of an AI? Of course, people can trust an AI generated message, and have likely done so already, many times. But trust is a personal investment in the character of another. That character is shrouded in mystery if the messaging is constructed through AI. We can't decide on the character of the one constructing the message because . . . there isn't one (in fact, there are thousands, each drawn from the corpus database). Character includes values, convictions, passions. Again, these are things an AI cannot have. True, a person using the AI has these things, but then the AI puts a veil over those things by choosing word sequences that don't originate from the particular mind of the writer. And because of that, readers don't really know whom they're trusting. They also don't know who is really responsible, and who is making the important choices. It's all a cloud of algorithms. The writer is hidden. And I would say you can't trust writers who hide themselves.
AI vs. Human Writers
This may make me sound like a curmudgeon. Or maybe like I'm trying to fight against a piece of technology that's trying to steal my job. But I'm not. I use technology just as much as the next person. But I also have a window (an image of the world) into what writing is and what it requires. And what I've seen through that window gives me serious reservations about using something like ChatGPT for the construction of messages as coming from particular persons.
We have an embarrassing history of turning tools into tyrants. And when tools become tyrants, bad things will follow.
Having said that, ChatGPT has many uses for research and other tasks that make it valuable in our world. I know of friends who have used similar software for coding or image manipulation. And it's not hard to see how it might help in gathering resources for research or synthesizing data. AI is not an evil. It's a tool. But we have an embarrassing history of turning tools into tyrants. And when tools become tyrants, bad things will follow.
As a friend wrote recently, we need to keep in mind that AI is not, strictly speaking, a creator. It's a creation of the created. And because people are more than patterns, we need to resist the urge to let something we've created do things that only we, as personal image bearers of God, can do.
Only real human writers image God by making personal choices, accepting responsibility, and calling for trust. That's why AI can never completely replace human writers.
Check out Wordsmiths for the Word
Note: This post contains affiliate links.