Dispatches, thoughts, and miscellanea from writer Jon Konrath

AI, writing books automatically

I have complicated and conflicting thoughts about AI. On one hand, everyone in my industry won’t shut up about it. On the other hand, I think it means I have between three and seven years before my profession is completely obsolete. And running the numbers, I think I need about ten solid years at my current salary to retire.

A few years ago, I read about how MLB was using some special software that could be fed a box score and a machine-readable database of statistics and spit out a human-readable narrative article summarizing the game. It would pepper the article with various factoids, slap in a stock photo, and you’d have a ready-for-web article to pull in traffic and SEO. When I saw this, I realized software documentation was not far behind. We’ve used tools like JavaDoc for decades to pull comments out of code and extrude a set of generated documentation. The next step is to spit out more conversational blog posts about a software product. That is either already happening or about to happen. But that’s work, and I don’t want to talk about work.
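For what it’s worth, the box-score trick is presumably something like structured stats run through templates. Here’s a toy sketch of that idea in Python; the team names, stat fields, and canned sentences are all made up for illustration and have nothing to do with the real products:

# toy_recap.py: a toy sketch of the box-score-to-recap idea.
# All team names and stat fields are invented for illustration.

game = {
    "home": "Oakland",
    "away": "Seattle",
    "home_runs": 5,
    "away_runs": 3,
    "winning_pitcher": "J. Smith",
    "top_hitter": "R. Jones",
    "top_hitter_rbi": 3,
}

def recap(g):
    # Pick the winner and loser from the score, then fill a canned sentence template.
    if g["home_runs"] > g["away_runs"]:
        winner, loser = g["home"], g["away"]
    else:
        winner, loser = g["away"], g["home"]
    high = max(g["home_runs"], g["away_runs"])
    low = min(g["home_runs"], g["away_runs"])
    return (
        f"{winner} beat {loser} {high}-{low} behind a strong outing from "
        f"{g['winning_pitcher']}. {g['top_hitter']} drove in {g['top_hitter_rbi']} "
        f"runs to lead the offense."
    )

print(recap(game))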

As far as self-publishing is concerned, it seems the current arguments are all about AI artwork for covers. I’ve only vaguely messed with DALL-E, and nothing I’ve gotten out of it has looked book-cover quality, or maybe I’m too picky. There are questions about where the source material comes from, etc etc. I have no real opinion on this, except I wouldn’t use it because it looks like garbage. Then again, I wouldn’t pay someone a ton of money for a cover and hope it would magically boost my book sales, either. Any time I’ve paid more than zero for a book cover, I haven’t recouped my costs. But my books are horrible/I’m not a good marketer/whatever. Anyway.

* * *

Three takeaways on AI, at least for the moment. Bear with me.

First, I think AI can be a useful tool for monotonous tasks, organization, and other busywork. I’ve forever wished that Scrivener had hooks to run AppleScript, or maybe a REST API, so I could eventually write some useful tools for dumb stuff that could easily be done with AI. For example, I’d like to feed all of my writing into an LLM and then be able to write a paragraph and ask the AI “did I already write this?” because it happens constantly that I come up with some great idea and it turns out it was in a short story I published in 2006. It would also be great for things more advanced than a spell check, like looking at a story I just switched from first to third person and finding all the stuff that’s wrong. And it would be useful for things like feeding a thousand blog posts into it and having it add keywords to all of them.
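Here’s a minimal sketch of the “did I already write this?” idea, assuming Python and the OpenAI embeddings endpoint; the model name, the stories folder, and the 0.8 cutoff are placeholders, not a tool I’ve actually built:

# did_i_already_write_this.py: a hypothetical sketch, not a real tool.
# Embeds every story in a folder once, then compares a new paragraph
# against all of them by cosine similarity.

import os
import math
from openai import OpenAI  # assumes the openai Python package, v1 or later

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "text-embedding-3-small"  # placeholder model choice

def embed(text):
    # One embedding vector per chunk of text.
    return client.embeddings.create(model=MODEL, input=text).data[0].embedding

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Build the index: one embedding per plain-text story in a "stories" folder.
index = {}
for name in os.listdir("stories"):
    with open(os.path.join("stories", name), encoding="utf-8") as f:
        index[name] = embed(f.read()[:8000])  # crude truncation to stay under token limits

def did_i_already_write_this(paragraph, threshold=0.8):
    # Return the stories most similar to the new paragraph, best match first.
    vec = embed(paragraph)
    scored = sorted(((cosine(vec, v), name) for name, v in index.items()), reverse=True)
    return [(name, round(score, 2)) for score, name in scored if score >= threshold]

The keyword-tagging idea would be the same shape: loop over a folder of posts, hand each one to the model with a short prompt, and write the tags back out.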

All of these are good tools, but I have no time to think through how to write them. And my second point is that there are a lot of tools springing up, but we’re in the gold rush era where a lot of fly-by-night businesses are popping up with the word AI in their product, and every existing product is working on how they’re going to duct-tape AI onto the side. And all the usual complications come up here: tools are half-baked, and don’t exactly do what you want, and won’t output text in the right way, and steal your data, and have some scammy freemium model, and think asking for fifty bucks a month is totally fine even though two dozen other tools also think it’s totally fine to charge you fifty bucks, etc etc etc. Most of these companies will vanish in six months, maybe along with your data. And the big problem with this is the same problem I have with mind map software or outline software or note-taking software: I could spend all day every day in decision paralysis with these tools that do maybe 47% of what I want but not everything I want. Or I could just write.

* * *

The third thing, which is a big one, is that there’s this debate about whether these things can write fiction, and people feed prompts into ChatGPT and ask it to write a poem about Barbie in the style of Emily Dickinson or whatever. Some of these are marginally impressive, but I’m not convinced. It seems like more of a parlor trick, and not really effective for creative writing.

I think writing will end up seeing a reaction similar to what happened when photography was invented. Portrait painting was essentially democratized and the art form basically destroyed, because anyone could sit down for a photograph. Romantic and Neoclassical painting, styles that depended on highly realistic rendering, fell out of favor.

What happened next was Impressionism. Painting traded fine detail for the representation of movement, color, and light in a scene. The idea of realistic detail was gone; instead of capturing a fixed image the way any camera could, painters gave up precision and explored different ways of rendering color and light. I’m not an art historian and I’m leaving a lot out of this description, but the gist is that one thing the Impressionists were thinking about was what they could do on canvas that a camera couldn’t do on a silver plate.

The next step beyond that was modern art, which set out to question what art was and what rules applied. There’s way too much history to capture in a paragraph or two, but the general idea is that artists started turning inward, attempting to express feelings and emotions in a medium that was not photorealistic at all. Instead of using a picture to reproduce the actual details it captured, they used it to evoke a mood or an idea.

The thought here is that artists at the turn of the 20th century saw that portraiture was obsolete, so they moved on to something cameras couldn’t do. Is that what’s next for writers?

* * *

So I’ve been screwing around with ChatGPT, using it to write summaries of books. It’s very good at this when it knows about the book. There’s a danger here: right now people write unique summaries to optimize SEO and store rankings, and pretty soon everyone’s going to be generating absolutely identical book summaries, which is totally going to break SEO. This will become more and more weaponized, and the race to the bottom will continue.

Anyway, I asked ChatGPT to write a summary of my last book, The Failure Cascade. It wrote a well-formed and very convincing summary, but not of the book I actually wrote. The plot it described was very interesting, though, at least for a conventional linear “straight” book.

This made me wonder: where was it getting this plot? Did it have some cookie-cutter set of twenty book plots, with a few mad-libbed details swapped in to make each one look unique? Or was I reading a lightly retouched plot from a 1987 J.G. Ballard book that I’d forgotten about or never read? My first thought was to steal this plot wholesale and start beating out an outline and writing chunks. But I quickly decided nope, bad idea.

Fast-forward to this weekend, and I see an ad for this writing tool that offers to use AI to punch up outlines and brainstorm and make it super easy to “write” a book. (I won’t mention it by name, because I don’t want to get into it with them.)

So, I got a trial account. It started with a brain dump box, where I would enter my loose ideas. I pasted in the aforementioned book summary. It then wanted me to enter a genre, so I put “post-modern apocalyptic, bleak.”

It then wanted to determine a writing style, so I pasted in about a thousand words of the unpublished Atmospheres 2 manuscript. It chewed away for a moment, and it spit this out:

First-person, past tense. Sarcastic and cynical tone with informal vocabulary. Vivid and descriptive language, slow pacing, grotesque imagery. Reflects poverty, desperation, and decay in contemporary America.

OK, fair enough. I then had it generate a synopsis, a character list, and a chapter-by-chapter summary. The synopsis was a seven-paragraph thing and looked okay. The character list: okay, I guess.

The summary was five acts, nineteen chapters. It was radically formulaic: each chapter was labeled something like “Rising Tension” or “All is Lost” or whatever Save the Cat-ism was needed at that point. The outline was not horrible, but it reminded me of a Marvel movie written entirely by committee.

I then told it to get started on the first chapter. It first broke the chapter’s three-sentence summary out into nineteen beats. I clicked the next button, it churned away, and it wrote the 3,500 words of the first chapter.

The writing, the actual prose, was absolutely horrific. I didn’t expect it to be like my writing, but it was amazingly bad. Stuff like:

“Jim,” Laura’s voice cracked as she stumbled over a charred piece of wood. “Do you think it’s always going to be like this?”


“Like what?” I replied, sarcasm dripping from my words like acid. “Hell on earth? Probably.”

The thing is, the writing was horrible, on like an eighth-grade level of wooden. But I think like 90% of the self-published Kindle genre page-turners I’ve read for five minutes and then deleted read about like this.

I went to generate chapter two, and I was out of coins or stars or tokens or whatever magic beans it used, and it wanted a credit card. Nope, the experiment ended for me.

I think my takeaway is that this tool might be good for generating a formulaic outline. And it would be great for dumping out extremely predictable Kindle dreck. Start with a good series bible, get a decent cover artist, and crank out your next dozen detective murder mystery books with ease. I guess this is great if you like that kind of book. Maybe not so great if you spend a lot of time and make all your money writing that kind of book.

This type of tool is currently useless for writing anything expressionist or experimental. And as I look for a direction to take my own writing, it’s evident that I should be heading in the exact opposite direction of genre fiction, doing things that absolutely couldn’t be done by an AI. Right?


Comments

One response to “AI, writing books automatically”

  1. Joey Junger

    Hubert Dreyfus—a philosophy professor at MIT—wrote a book on AI called “Mind over Machine,” that still feels like the last word on the subject to me. He broke learning down into five stages, and the fifth—the intuitive—was the most important, and also the one machines never come close to achieving. This fifth stage is where what you’ve learned from experience has become a kind of muscle memory. It’s why advice from great writers or great guitarists is so vague and unhelpful (usually.) There’s nothing they can tell you with words because the skill got synergized somewhere in mind and body while they worked all those years.

    Steven Pinker showed how much is between the lines in human consciousness—and thus unable to be read by computers—with this example:

    Woman: I want a divorce.
    Man: Who is he?

    Imagine a machine that diagrams sentences trying to correlate what the woman said with the man’s response. How does the direct object “divorce” become the predicate nominative “he” in the man’s response? We know automatically. Give a machine forever and it will probably never know.
