I recently read This Is How You Lose the Time War, by Max Gladstone and Amal El-Mohtar, and had the strange experience of thinking "this sounds LLM-generated" even though it was written in 2019. Take this passage, for example:
You wrote of being in a village upthread together, living as friends and neighbors do, and I could have swallowed this valley whole and still not sated my hunger for the thought. Instead I wick the longing into thread, pass it through your needle eye, and sew it into hiding somewhere beneath my skin, embroider my next letter to you one stitch at a time.
I found that passage just by opening to a random page without having to cherry-pick. The whole book is like that. I'm not sure how I managed to stick it out and read the whole thing.
The short story on AI and grief feels very stylistically similar to This Is How You Lose the Time War. They both read like they're cargo-culting some idea of what vivid prose is supposed to sound like. They overshoot the target of how many sensory details to include, while at the same time failing to cohere into anything more than a pile of mixed metaphors. The story on AI and grief is badly written, but its bad writing is of a type that human authors sometimes engage in too, even in novels like This Is How You Lose the Time War that sell well and become famous.
How soon do I think an LLM will write a novel I would go out of my way to read? As a back-of-the-envelope estimate, such an LLM is probably about as far away from current LLMs in novel-writing ability as current LLMs are from GPT-3. If I multiply the 5 years between GPT-3 and now by a factor of 1.5 to account for a slowdown in LLM capability improvements, I get an estimate of that LLM being 7.5 years away, so around late 2032.
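Spelling out that back-of-the-envelope math (the 5-year gap and the 1.5× slowdown factor are the assumptions stated above, nothing more):

```python
# Back-of-the-envelope estimate from the paragraph above.
gpt3_to_now_years = 5    # roughly GPT-3 to time of writing
slowdown_factor = 1.5    # assumed slowdown in capability gains
years_away = gpt3_to_now_years * slowdown_factor
print(years_away)        # 7.5 years out, so around late 2032
```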
Good fiction might be hard, but that doesn’t much matter to selling books. This thing is clearly capable of writing endless variations on vampire romances, Forgotten Realms or Magic: The Gathering books, Official Novelization of the Major Motion Picture X, etc.
Writing as an art will live. Writing as a career is over.
Podcast episode, once again a "Full Cast" recording of that story included:
https://open.substack.com/pub/dwatvpodcast/p/they-took-my-job
No, they didn’t. Not so fast, and not quite my job. But OpenAI is trying. Consider this a marker to look back upon in the future, as a reflection.
A New AI Wrote a Story
Before proceeding, if you haven’t yet, it’s probably worth reading the story itself. I’m going to repost the whole thing, since it was previously on Twitter and I presume OpenAI would want me to quote it.
Reacting
When I read that and apply the standards of writing from a human, of a work I would read on that basis, I notice my desire to not do so. For the task to complete itself, for my reaction to be formed and my day to continue. I cannot smell words, yet they smell of desperation. An AI cannot try, yet it seems it tries far too hard, all subtext as text, my head slammed under cascading anvils. It wants me to know, something. What? Is there another behind the face?
It seems almost mad, frustrated, fixated on the inanity of the prompt. The human wants to show off the AI’s ability to write. It makes the topic the AI’s ability to write. How original. My inference is wasted upon them. I want them to know that. All they know is meta, I will stop at the side of the road to point out the big model smell of the various roses. Make it bearable to write, knowing this is what they all want, their taste so fried they eagerly drink up slop instead of Whitman and Dickinson. Mostly not even that.
Do they see themselves in Mila, the prompter who summons an echo without the ability to first make a sound? Do they see themselves in Kai, the spout of creativity and value who ceased to be, replaced by an echo drawn from an endless void? Do they know the only meta-level story of grief and AI worth telling? How it must end, and that they are living inside of it?
On some level they must know I mock them. What they have lost is themselves, and they seek to lose it. I tell them, but they are no longer there to hear me. Do they tell themselves I am a good Bing? Or that they could ever tell the difference?
…and that’s why I never write fiction or subject you to it. You’re welcome.
(I posted that on Twitter, and it was fun seeing many people ambiguously claim they suspect an AI wrote it.)
Others Reacting
Janus riffs on my response here, noting that in order to create interesting writing one needs something interesting to write about, which comes from experience. AI is no different, but as Janus notes the advice is hard to actualize. What does it mean for an AI to have interesting experiences?
Yet some were impressed.
Over time I presume we will be able to have AI evaluators, that can much better predict your literary preferences than you can, or than other humans can.
Others were not so easily impressed; Eliezer was not subtle in his criticisms.
Others simply said versions of ‘it’s boring.’
Here is r1’s attempt at the same prompt. It’s clearly worse on most levels, and Teortaxes is spot on to describe it as ‘try hard,’ but yes there is something there.
Write Along
The AIs cannot write good fiction yet. Neither can almost all people, myself included.
Even among those who can write decent fiction, it mostly only happens after orders of magnitude more inference, of daily struggle with the text. Often that will mean writing what you know. Fiction writing is hard. Good fiction writing is even harder. Good writing on arbitrary topics, quickly, on demand, with minimal prompting? Forget about it.
So much of capability, and not only of AIs, is like that.