Discussion about this post

mikolysz:

If we cloned the best human writer a billion times, they'd very quickly stop being the best writer. We would all get used to (and tired of) their stylistic quirks and patterns. This is basically how LLMs work, and I feel like that's an underappreciated reason for why people hate LLM writing so much.

I really loved the writing style (particularly in educational / explanatory contexts) of Gemini 2.5 Pro. For a week. After that week, "imagine a football field" really got old.

There's a serious tradeoff here. If you train your LLM to have its own distinct personality, voice and style, people will quickly start to recognize those characteristics and get tired of them. If you make the LLM bland... well, you have a bland LLM.

This problem also comes up in other contexts. If you ask your LLM for a fantasy character, you'll get a gnome named Pip. If you ask it for a sci-fi story, you'll get the same two or three character names (and they even repeat across providers!). LLMs just aren't that good at randomness, and they tend to repeat the same patterns over and over again unless specifically prevented from doing so. This can be fixed by injecting randomness into the LLM's context.

We don't just need an LLM that is good at writing, we need an entire writing agent: something that would identify the aspects of the prompt that need to be randomized, generate the options for them (LLMs will give you much more of a long-tail distribution if you ask for 150 examples of X and then randomly pick one), and then inject the choices into the writing prompt.
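A minimal sketch of the injection step described above, assuming the option pools have already been generated (in the real agent they would come from asking the model for ~150 examples per axis); the pool contents and axis names here are hypothetical placeholders:

```python
import random

def build_writing_prompt(task: str,
                         option_pools: dict[str, list[str]],
                         rng: random.Random) -> str:
    """Pick one option per randomized axis and inject the choices
    as explicit constraints appended to the writing prompt."""
    choices = {axis: rng.choice(options) for axis, options in option_pools.items()}
    constraints = "\n".join(f"- {axis}: {value}" for axis, value in sorted(choices.items()))
    return f"{task}\n\nUse these randomized choices:\n{constraints}"

# Placeholder pools; a real agent would sample these from a long list
# of model-generated examples rather than hard-coding them.
pools = {
    "protagonist name": ["Maret", "Odhran", "Sefa", "Tilio"],
    "setting": ["tidal mining city", "archive monastery", "orbital greenhouse"],
}
prompt = build_writing_prompt("Write a short sci-fi story.", pools, random.Random())
print(prompt)
```

The key point is that the sampling happens in ordinary code, where the distribution is actually uniform, instead of relying on the model's own (heavily biased) sense of randomness.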

John Dvorak:

AI writing is often sterile. I compare it to someone who has mastered the "rules" of freshman composition (never split an infinitive, never begin a sentence with a conjunction, never use a sentence fragment). But great writers know when to break these rules to great effect, and that is precisely how they communicate the distinct voice you're describing.
