ChatGPT for Fiction Writers: What Works, What Doesn’t

I’ve experimented with ChatGPT as an aid for my fiction writing. The results have been mixed: it excels at some small tasks and is pretty good for editing and reviewing, but it’s less useful for the more creative aspects.

When I first started, version 3.5 was the free option. Here’s what I remember trying:

  • Critiquing my work: I asked ChatGPT to critique my pieces and compared its feedback to what my critique group had said. The results were similar to those from some of the stronger members of my group. However, there were also a few strange misreadings, as if the AI was responding to an entirely different story. These mistakes were so obvious that I wondered if they were intentional—possibly designed to discourage misuse or direct quoting.

  • Finding a scene from Sons and Lovers: Since the novel is in the public domain, I thought it would be fine to use ChatGPT to find a scene. ChatGPT claimed to show it to me, but when I searched the Gutenberg version, I couldn’t find it. Instead, ChatGPT had generated a scene based on the details I remembered, written in the style of D.H. Lawrence, and presented it as if it were a direct quote. This seemed intentional, perhaps to prevent misuse. The current version (4.0) now searches the web for answers and explicitly tells me it won’t provide actual prose from public domain works. *** See warning below from a later investigation.

  • Speeding up research: I used GPT to expedite my research. While any AI can pull up facts faster than Google, I also used it for more subjective material, like personal experiences. For instance, I asked GPT for three different versions of a scene where someone attends a heavy metal concert, then used those versions to build my own scene. That’s 750 words out of an 80,000-word novel, and by using GPT I avoided hours of scouring Reddit and forums for personal accounts. GPT has limitations, though: it won’t write about topics like suicide, rape, or violence (and possibly others), even though these are things my characters have experienced.

In March 2024, ChatGPT made version 4.0 available for free, so if you read reviews or articles written before then, keep in mind that the free version has improved. Since then, I’ve:

  • Evaluated chapters: I’ve asked it to assess chapters from my novel, especially when my critique group raised issues I couldn’t defend. For example, I have scenes with almost no internal narrative or reactions to dialogue. GPT helped me articulate the reasoning behind this choice: without asides, the dialogue focuses mainly on what the other character is saying, reflecting that my main character is guarded, revealing little to others and even less to herself during conversations.

  • Brainstormed plot ideas: I’ve given GPT summaries of my stories and asked it to brainstorm plot ideas. The results are predictable and run-of-the-mill, and when pressed for more unusual or creative ideas, GPT veers into ridiculous options that have little connection to the existing characters or themes. Sometimes, though, I can build on them. After many attempts, I haven’t been able to get a truly great or useful idea directly from GPT. It seems that what connects a story to its “perfect next step” might require a creative connection or logic that ChatGPT lacks—though, ironically, it does a good job analyzing why certain elements of a story work after the fact.

  • Explored themes and character arcs: I’ve asked GPT for analysis of themes and character arcs in published flash fiction, after I’ve attempted my own analysis. Essentially, I’ve used GPT as a teaching tool to help me identify things I can’t see. I took a workshop years ago, but I haven’t found many classes or resources focused on the themes and arcs of such short stories.

  • Named fictional entities: I’ve used GPT to help come up with names for things like ice cream shops, neighborhoods, or social media handles. It’s still necessary to mix and match parts of the options, but the end results are often great. GPT excels in this area.

  • Grammar and clarity checks: GPT is as good as any other software I’ve used for grammar and clarity. However, paid versions of some tools offer more detailed explanations.

  • Thesaurus for phrases: I’ve used GPT as a thesaurus for phrases. Alternatives like WordHippo only accept single words or, sometimes, two-word phrases.

  • Help with promotional writing: GPT generates multiple options, which I can then refine to suit my needs. I’m not sure how good these are on their own, but they’re definitely better than what I could come up with independently.

  • Office memos and non-fiction writing: I’ve also found GPT useful for drafting office memos, and from what I’ve heard, AI can be helpful for writing grant applications as well.

Where I’ve found ChatGPT less useful:

  • Titling pieces: Even when I give GPT a summary or the full text of a short story, it struggles to generate useful titles. It’s possible that I haven’t figured out how to guide it effectively for this task, or maybe I’m just too picky. Either way, we both seem to lack the intuitive creativity necessary for brainstorming titles, much like GPT struggles to generate useful plot ideas.

  • Writing actual fiction: The prose generated by GPT tends to be simplistic—grammatically correct, overly clear, and plain. It often overwrites, describing what’s already clear through character actions, and uses excessive “expressive” dialogue tags (e.g., “mourned,” “shouted angrily,” etc.) rather than sticking to “said” or “asked.” While GPT excels at suggesting alternative wordings or phrases, it’s not great at writing actual fiction.

One important lesson I’ve learned is that I usually need to adjust my queries to get the best responses. This may be why I struggle to get useful results for titling, writing, and brainstorming plot ideas. For example, when I want feedback on something, I start by asking GPT for an analysis of the themes, character arcs, or the style/voice of the piece. This seems to help it approach the writing in the way I want.


And, be warned:

Further to my earlier issues with finding a scene from Sons and Lovers in 3.5, I recently asked ChatGPT 4.0 for examples of quick POV shifts. GPT suggested the opening of Anna Karenina.

However, I couldn’t find any POV shifts in the first chapter of the free Gutenberg version.

  1. When I confronted GPT about this, it claimed I should consult the Pevear/Volokhonsky translation and provided quotes.
  2. Still, the situation seemed off. In the novel, Oblonsky is recalling a memory of his wife’s reaction, and yet GPT suggested that Tolstoy shifts into Dolly’s point of view during this memory.
  3. After being pressed, GPT admitted it had made an error in suggesting this POV shift, acknowledging, “I made an error in suggesting that Tolstoy explicitly shifts into Dolly’s point of view in the first chapter or in Oblonsky’s memory of the confrontation with her. This was incorrect. I appreciate your keen observation!”
  4. When confronted with the ‘quote’ that it gave me earlier, it replied: “The quote I gave in my earlier message was not accurate to what actually appears in the Pevear/Volokhonsky translation.”

Not accurate. You mean, a bald-faced lie.

So, even in version 4.0, ChatGPT will still lie like a teenager who thinks they won’t get caught.
