Can Artificial Intelligence Replace Writers? On AI, Meaning, and Emergence (And Marbles)
Is meaning an emergent property?
When we talk about artificial intelligence and writing, and how far AI can go to take over the work of writers, this is what we are talking about: the meaning that is conveyed through writing, and where this meaning comes from.
The capabilities of AI have moved fast and are still moving. The question of whether AI might replace writers (among others who work with information or knowledge) is in the air. And rightfully so—AI can certainly write. But whether AI can stand in the place of the writer is a different matter. Addressing this question entails examining what we expect the writer to do.
In the piece to follow, I will go on a journey to explore this question (scroll down and you’ll see this is a longish post). But “meaning” is the one word capturing the heart of where I am headed. A piece of writing must mean something for the reader. It must deliver meaning; it fails if this is not so. Further, that meaning affects the reader—this much follows as well. Aims of writing vary, and what we mean by “meaning” varies according to the intent of the piece. But any given piece of writing offers the chance that the reader might be brought to a new understanding, a new appreciation, or a new perspective or opinion on the topic at hand by means of what the writer has written. Each of these changes in the reader is a response to the piece’s meaning. Indeed, the reader likely undertook to read the piece in the hope of this very experience.
So here is the question: Does this outcome—that meaning is conveyed such that the reader is affected by this meaning—come out of the writing by itself?
That is, does meaning “emerge” out of sufficient complexity and consistency of the composition? Or is there another explanation: Does the meaning instead have a separate origin apart from the writing?
In other words, is meaning created through writing, or is it found?
Here comes the journey. We start with what AI can do, which leads quickly to a consideration of what we have long waited for it to do.
AI Can Write
There is little disputing that AI can write. As anyone knows who has played with the AI platform ChatGPT, this tool produces credible composition in response to user prompts and does so almost instantaneously. What now seems apparent is that, if there is any formula to a category of writing, no matter how subtle that formula is, AI can find and follow it. The rules of grammar and the expectations of sentence rhythm and paragraph structure are followed. Even more, the formal and informal rules of different styles and formats of writing are followed as well, such that ChatGPT can write in the form of a haiku, an essay, a technical paper, a campaign speech, and more. That the writing it generates is “credible” does not mean the composition is accurate or faithful to the spirit of the initial prompt, but even its performance along these measures will improve.
That is what AI can do, and that is the track along which it will continue to advance. The matter we are considering explores what that track might connect to, and where it can lead. Is there a straight line from algorithmic composition based on modeling from extant resources (what ChatGPT is doing) to the discovery and articulation of new insights or new frameworks for understanding?
You might have caught that the word “discovery” appears in that sentence. I might have used a synonym, but synonym or not, there is no other way to describe this aspect of what we hope a writer will do. The act of discovery implies a discoverer. To see this, to pose the question in this way, is to reveal how the question of where meaning comes from lands close to a broader and headier question: Where does the consciousness of the seeker come from?
And this latter question is the one science fiction writers and technology futurists have wrestled with in their own earlier imaginings of AI.
Self-Aware AI and the Singularity
“Artificial intelligence” used to have a different meaning. This is worth recognizing. In the way science fiction and futurist writers employed the term until very recently, “artificial intelligence” connoted computer intelligence approaching sentience or self-awareness. Think HAL 9000, Lt. Commander Data, Skynet—famous fictional examples. This is not how we think about “artificial intelligence” today, when the term refers to a computation-driven tool widely recognized to be exactly that. The same phrase thus describes two different things. The reason this is worth recognizing is that we need to examine whether these two things have any connection. Sentience is almost universally regarded as fundamentally human and personal: a defining capability of a human person. Can a machine also attain sentience? How would it do so? The question of whether and how a computer could obtain such consciousness raises the question of how we ourselves obtain it. Does our human consciousness “emerge,” naturally and automatically, from a brain that is sufficiently complex in its information processing? Or does human consciousness have an origin separate from this complexity?
The “singularity” is a related concept I will mention here. The term, used by futurist writers of the last decade or two, is valuable to raise because it bridges the AI of science fiction with what we refer to as AI today. The “singularity” refers to the expectation that, at some point in the future, machine intelligence will come to exceed human intelligence in both complexity and speed, to such an extent that it will be able to improve its own complexity and speed in ways we humans do not understand. To reach and cross such a threshold would be irreversible—it would truly be a historic singularity, or so the thinking goes—because beyond that point, human beings would never again be able to outwit or even comprehend the machine intelligence continuing to evolve farther and farther ahead of them.
The problem, the missing piece with that idea, is this: The notion of the singularity presumes a change in nature that the idea itself never explains. And this is too big a gap to ignore.
In the singularity, self-guiding intelligence appears. Mind is presumed to emerge from a mechanism.
Yet there is no apparent means or method by which this transformation occurs.
Are Chutes Aware? Or Showers?
I am still writing about writing; this will become apparent at the end. For now, let’s consider a little longer the leap that the “singularity” assumes: the expectation that mind will emerge from machine intelligence. To show how the “emergence” of a self-guiding intelligence requires an unexplained and unaccounted-for transformation, here is that emergence broken down into smaller questions identifying specific elements of the advance:
1. What is machine intelligence?
Answer: It is an algorithm. It is a set of programmed pathways for generating outputs to solve for a desired result, with that desired result specified within the problem input to the algorithm.
2. What is getting more complex as machine intelligence advances?
Answer: The algorithm is getting more complex. In AI, the branches and routings of the programmed pathways are changed as part of the operation of the algorithm itself to better solve the inputted problem.
3. What is getting faster as machine intelligence advances?
Answer: Processing speed is getting faster. The speed of arriving at each potential solution for a given problem is now fast enough to allow for rapid successive computational iteration through the programmed pathways.
Note how, in each of these answers, the input is present. Unavoidably so. The algorithm needs an input to produce its result. If the algorithm is seen as a mechanism, then in the absence of an input, the mechanism is idle. The question is whether the mechanism can make its own input.
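To make those three answers concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the toy “problem,” the function names); it stands in for no real AI system. It shows an algorithm rewiring its own pathways, through rapid iteration, to better solve an inputted problem, and it shows that the desired result exists nowhere except in the input.

```python
import random

def error(output, target):
    """Squared distance between the mechanism's output and the desired result."""
    return sum((o - t) ** 2 for o, t in zip(output, target))

def machine_intelligence(problem, iterations=10_000):
    """A toy 'algorithm' in the sense of the three answers above:
    programmed pathways (here, a list of numbers) are adjusted by the
    algorithm's own operation to better solve the inputted problem."""
    target = problem["target"]      # the desired result arrives only as part of the input
    pathways = [0.0] * len(target)  # the initial 'wiring'
    for _ in range(iterations):     # speed = how many iterations we can afford
        i = random.randrange(len(pathways))
        candidate = list(pathways)
        candidate[i] += random.uniform(-1.0, 1.0)  # rewire one branch
        # keep the rewiring only if it better matches the inputted target
        if error(candidate, target) < error(pathways, target):
            pathways = candidate
    return pathways

# The mechanism is idle until a user supplies an input:
print(machine_intelligence({"target": [3.0, -1.5, 2.0]}))
```

Delete that final line and the mechanism simply sits there: all the complexity and iteration speed in the world, waiting for an input.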
We can visualize this. Imagine the mechanism of computation running on mechanical hardware rather than microprocessor hardware. That is, imagine a computer that is one trillion times slower and one trillion times bigger than the ones we know, because in place of deploying electrons for processing, this computer sends physical marbles rolling down plastic chutes. For this hypothetical, impossibly cumbersome computer, the problem under evaluation is defined by the user through the right arrangement of marbles (rather than the right arrangement of electrons). A sufficiently complex and well-constructed system of chutes would then output results expressed in different arrangements of marbles. These results could even take the form of instructions to the army of workers attending this computer: additional chutes to be fabricated and attached to the system so that the outputs of the next iteration of marble computation better match the input. But even this latter result would arise automatically from the paths of marbles produced by their initial configuration.
This computer would probably fill a solar system. It would probably need two lifetimes of the cosmos to execute a program, but still: This imagined computer offers a fair picture. Machine intelligence is a mechanism of a sort. That mechanism sends signals (electrons, marbles) down paths. Input is needed to start these signals moving. There is no conceivable means by which marbles rolling down chutes—no matter how many marbles, no matter how complex the chutes—could spontaneously fabricate new marbles, or spontaneously alter the momenta of marbles in ways different from where the chutes are taking them, or spontaneously lift marbles out of the chutes and place them at the start in a different arrangement that initiates a different input. None of these outcomes could arise simply from sending the marbles down the chutes. Our own real-life, digital machine intelligence is faster and more elegant than marbles and chutes, but the problem is the same: Neither complexity nor processing speed offers a path by which these characteristics could transform themselves into purpose or will.
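For readers who think in code, here is the marble computer reduced to a sketch (my own hypothetical construction; the chute mappings and the “fabricate a chute” convention are invented for illustration). The point it makes concrete: every step is a fixed mapping, and nothing in the mapping can supply its own starting arrangement.

```python
# Each chute is a fixed mapping from marble position to marble position.
IDENTITY_CHUTE = {0: 0, 1: 1, 2: 2}

def run_marbles(chutes, marbles):
    """Send a starting arrangement of marbles down a fixed series of chutes.
    Purely deterministic: same chutes, same arrangement, same result."""
    for chute in chutes:
        marbles = tuple(chute[m] for m in marbles)
    return marbles

def maybe_add_chute(chutes, output):
    """Even 'fabricate a new chute' is only an output the attendants agreed
    in advance to read that way; the marbles themselves decide nothing."""
    if output[0] == 2:  # a convention fixed by the builders, not the marbles
        return chutes + [IDENTITY_CHUTE]
    return chutes

# Nothing moves until someone outside the system arranges the marbles:
chutes = [{0: 1, 1: 2, 2: 0}, {0: 0, 1: 2, 2: 1}]
output = run_marbles(chutes, (0, 1, 2))
print(output)  # (2, 1, 0)
chutes = maybe_add_chute(chutes, output)
```

Even a feedback loop that feeds an output back in as the next input would be one more convention the builders wired in from outside.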
I once read another analogy that is helpful for thinking about this. (I don’t remember where I read it; write me if you know.) In a rain shower, the path any water molecule takes is complex, like the path of a signal in a computer. Each water molecule’s shift from vapor to liquid, its aggregation with enough other water molecules to be able to fall, what that molecule encounters on the way down, how it might be deflected—all these developments represent if-then conditions defining the molecule’s path, and this progression is multiplied by however many billions of molecules make up the rain shower. In short, if we believe that algorithmic complexity is sufficient on its own to allow sentience to emerge, then we must believe that thunderstorms become self-aware.
We Are Souls
This challenge to the idea that our consciousness is “emergent” is relatively easy for me to offer and to face. Some would say the notion that our sentient intelligence emerged naturally from the complexity of our brains simply must be true, because there is nowhere else for it to come from. I have an alternate proposal. I believe I see a different origin for consciousness, a more logical possibility for where human intelligence and awareness come from. We are souls.
We are souls. Obviously, I say that as a subscriber to a particular system of belief. But this faith does not invalidate the logical merit of the view within this discussion. The statement captures what might seem to be a vital point, held to be self-evident by many, and I phrased the statement in a particular way. The verb is important.
That is, we do not “have” souls. By “soul,” I mean the self, the divinely created selfhood, which might find a relationship with its creator. The existence of the soul resolves every problem raised by the fact that physical complexity alone cannot lead to human consciousness. If we are souls, physical complexity does not have to deliver all of this—and it cannot. The soul, the self, cannot emerge as an extension of the physical brain, because this reverses the nature of things. The physical brain is instead the accessory of the soul. In my view, and perhaps in yours as well, we could lose these physical bodies, including their physical brains, while our personal selfhood continues. Eternal life proposes this very thing. We could not lose our souls and have personal selfhood continue—that would be a contradiction in terms—but we could lose the rest. The soul is the core of selfhood. The self is the soul.
We are souls.
To see the matter this way is to possess a secure footing from which to recognize that our bodies and brains did not spawn self-awareness out of physical complexity. Mechanism does not turn into mind. Complexity does not turn into consciousness. We are souls, and souls are joined to bodies during the passages they make through this world.
And to write, sometimes, is to speak soul to soul.
Now let’s come back to writing, back to the original question of whether artificial intelligence can take over for the fullness of what writers do.
It can take over some of what writers do. But if meaning is no more emergent than consciousness is, then this does not bode well for AI replacing writers.
Finding Meaning
Again, AI can write. The nature and purposes of writing vary widely. Some writing is simple reporting, without interpretation. For this writing, automatic aggregation of the information might work. If the nature of the reporting can be defined and the needed input data are all digitally available, then AI could do a job that formerly took a human mind to do.
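For that narrow kind of writing, the machinery can be almost trivially simple. Here is a hypothetical sketch in Python (the field names and the report format are my own invention) of interpretation-free reporting from structured data:

```python
# Purely mechanical 'reporting': structured data poured into a template,
# with no interpretation involved anywhere.
def weather_report(data):
    return (
        f"{data['city']} recorded a high of {data['high_f']}°F "
        f"and a low of {data['low_f']}°F on {data['date']}, "
        f"with {data['precip_in']} inches of precipitation."
    )

print(weather_report({
    "city": "Dayton", "date": "March 3",
    "high_f": 48, "low_f": 31, "precip_in": 0.2,
}))
```

Every fact in that output was already present in the input; the program supplied arrangement, not meaning.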
Yet there is certainly more to writing than this. And here is the thing: There is more to experiencing as well, and more to being. With regard to writing, with regard to being, in addition to the rote work of aggregation and composition, there is also the core work, the human work, the fundamental work that all of us are put here to do. We are to live.
We are to live, and we are to understand something about life through the living. We are to learn—about ourselves, about the natures of things, about God. We are to aspire by love through the individual hearts God gave us as he made each one of us unique. To come back to the term with which I began this essay, we are to experience meaning.
Meaning is inherently and necessarily the experience of a soul.
Writers, when they are doing their work well, not only experience meaning but also give words to it. They learn something fundamental about some facet or detail of creation that is worth seeing or considering, and they give words and voice to it so that someone—a stranger, another soul—can have and know this meaning also. Writing is this service.
And not just writers. Artists, performers, inventors, scientists—and many others—are in their own ways doing the same.
Composition can be knitted together via algorithms, and in some cases can be the work of AI. But meaning can never be created this way. Because meaning is not something we create.
Looking squarely at the question of whether consciousness can “emerge” provides an answer in passing to the question asked early in this piece. We are already running with that answer, but let’s now look at it directly.
Question: Is meaning created through the writing?
Answer: Meaning is not emergent in this way. Meaning is instead there to be found.
And a mechanism cannot find it. The mechanism computes from knowns. It cannot see what was previously unknown. Finding and expressing meaning is the work of writers. Further, it is the work we specifically ask and expect of writers, whether we recognize this or not.
So, AI can write. And writing will be aided by AI. We can already see how composition will be helped by this tool.
But as for the writing in total, in its fullness, this involves seeing. It involves finding meaning. And because meaning is not emergent and not created by complexity, because meaning has a different nature and a separate origin from what this computation is generating, the work of writing—the best part of the work—will always be by, and for, human souls.
Photo: “Using ChatGPT” by FocalFoto