
How Good is Sonnet 4.5 for Writing Fiction?

3 min read
Written by Kate

Whenever a new AI model is released, there’s always that little buzz of excitement. Is this the one? Will it finally get my character’s voice right? This week, I spent an afternoon testing the new Claude Sonnet 4.5 on-stream. My goal was to create a standardized set of tests that I could run every time a new model comes out, so that in future posts we can compare back to Sonnet 4.5.

Tests

Show, Don’t Tell

I asked Sonnet 4.5 to write a scene about a character receiving devastating news, but with a catch: no emotional words allowed. With simple ‘show, don’t tell’ instructions, the model produced decent prose, but the reactions felt a little melodramatic. As soon as I provided examples in my prompt, the output became more nuanced, conveying shock through subtle physical reactions rather than emotional exposition. Sonnet 4.5 rewards specificity: the more guidance you provide, the better the results, and providing examples of ‘good’ and ‘bad’ versions works even better.
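For readers who drive models through scripts rather than a chat window, here's a minimal sketch of how those ‘good’ and ‘bad’ examples might be folded into a prompt. The helper name, labels, and example lines are all hypothetical, not Novelcrafter's actual prompt format:

```python
# Hypothetical helper: assemble a 'show, don't tell' prompt that pairs the
# instruction with labelled good/bad examples, as described above.

def build_show_dont_tell_prompt(instruction, good_examples, bad_examples):
    """Return a single prompt string combining instruction and examples."""
    lines = [instruction, ""]
    for text in good_examples:
        lines.append(f"GOOD (show, don't tell): {text}")
    for text in bad_examples:
        lines.append(f"BAD (emotional exposition): {text}")
    lines.append("")
    lines.append("Write the scene. Do not name any emotions directly.")
    return "\n".join(lines)

prompt = build_show_dont_tell_prompt(
    "Write a scene in which a character receives devastating news.",
    good_examples=["Her keys missed the bowl twice before she set them down."],
    bad_examples=["She was overwhelmed with grief and shock."],
)
print(prompt)
```

The resulting string can then be sent to whichever model you're testing; the point is simply that the examples live in the prompt itself, right next to the instruction they illustrate.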

Dialogue Without Tags

I asked Sonnet 4.5 to write a conversation between three characters, but with dialogue only. No prose, no dialogue tags. Just the conversation itself. Sonnet 4.5 handled this test very well. The scene was clear, easy to follow, and each speaker felt distinct through their speech patterns alone. There was minimal confusion about who was talking.

Genre-Hopping

I asked the model to write about the same character, Ashley, in four different genres: hardboiled noir, romance, literary fiction, and thriller. Each piece felt genuinely distinct, with the hardboiled noir standing out as particularly sharp and concise. The thriller was the weakest, however, and would need more specific prompting to create a strong sense of suspense.

Voice Mimicry

This was the test I cared about most. Could Sonnet 4.5 continue my own story in a way that felt seamless? I gave it a 400-word excerpt with a very specific, fragmented style and asked it to continue.

To better understand its capabilities, I ran the same experiment with GPT-5 and Opus 4.1, the two models that had previously performed best on this test. Here’s what I found:

  1. Sonnet 4.5 produced a competent continuation. It understood the setting and the plot, but it smoothed out too many of the rough edges. My original prose was punchy and urgent; Sonnet’s version was more polished and flowing, losing some of that raw energy. It’s a solid starting point that follows instructions but would need editing to restore the original voice.

  2. Opus 4.1 struggled more with the style, producing prose that felt too literary and gentle for the source material. Interestingly, while the voice was off, it captured the feel of the world better than Sonnet. It felt more authentic to the setting, even if the words weren’t right. For five times the price, however, a better ‘world-feel’ isn’t enough to justify a full rewrite of the prose.

  3. GPT-5 was the clear winner in terms of pure style mimicry. It nailed the fragmented rhythm and dry humour perfectly. It sounded exactly like my writing. But it had a major flaw: it tried to take over the story. Instead of just continuing the scene, it veered off-plot and tried to write its own conclusion. As a writer, I want a collaborator, not a replacement. GPT-5 would need much closer guidance and more explicit instruction to use on a day-to-day basis.

In the end, it comes down to a practical choice. Sonnet 4.5 strikes the best balance: it’s not a perfect mimic, but it’s a reliable collaborator that follows instructions and gives me a solid, controllable foundation to work from.

I will give the caveat that these results are for one particular style of writing. If you write romance, or prefer a more descriptive prose style, Sonnet or Opus may score more highly for you.


The Quirks

No AI model is perfect, and Sonnet 4.5 has its share of quirks:

  • Wordiness: This is the biggest issue. When asking it to rewrite a scene, I sometimes ended up with double the word count, so I needed to edit for conciseness.
  • AI tropes: The model leans on certain patterns. During testing, I saw multiple appearances of a character named “Marcus” (an old friend of ours!) and several other familiar stock names and phrases.
  • Em dash enthusiasm: Like many AI models, it loves its em dashes. That said, the usage was generally grammatically appropriate (for interruptions or pauses), so it’s more of a stylistic preference issue than an AI red flag.

Finally, I’m not sure if this is good or bad, but the model followed my instructions a little too closely at times, reusing the examples I gave and repeating patterns from my prose. This is likely why I like it so much for style mimicry, so you may have a different experience.

My Final Take

Claude Sonnet 4.5 is an excellent, versatile model for everyday tasks, and a definite improvement over Sonnet 4. It responds well to clear direction and shines brightest when it’s given a specific style to follow. I’ll certainly be keeping it in my toolkit. If you’re willing to invest time in crafting good prompts by providing examples, context, and specific direction, Sonnet 4.5 will reward that effort with output that feels increasingly like your own work.

The more I test these models, the clearer it becomes that using AI effectively means finding the right balance between quality, cost, and personal workflow. Sonnet 4.5 nails that balance. For now.


Based in the UK, Kate has been writing since she was young, driven by a burning need to get the vivid tales in her head down on paper… or the computer screen.