
Developing a Writing Style with Claude (Update)


Update

As I've continued refining my approach to creating writing styles, I have created a skill that can be used within Claude. This means you don't have to do any prompting. Simply download the skill here, open claude.ai, then click Customise -> Skills -> "+" (Create a new skill) -> Upload a skill, and choose the .skill file you downloaded.

You can then invoke the skill in a new conversation and Claude will guide you through the entire process (I recommend you use Claude Opus 4.5 - as opposed to Opus 4.6 or a Sonnet model).

Getting consistent, tolerable writing from an AI requires more precision than most people realise. I developed a method that produces substantially better results than the usual approach of describing your preferences or uploading samples. The core idea is to have Claude interview you with structured multiple-choice questions, rather than trying to articulate what you want from scratch.

This blog post is specific to Claude. Other LLMs like ChatGPT or Gemini aren't as good at writing, or at following less prescriptive but more refined guides. Generating natural prose and interpreting nuanced stylistic rules are distinct capabilities, and both require a sensitivity to language that (much like in people) varies significantly between models.

The Blank Page Problem

Writing preferences are high-dimensional. They span sentence length, transition patterns, word choice, how to introduce technical features, how to handle limitations, what phrases feel natural, and dozens of other micro-decisions. Describing all of this in an instruction like "write in a professional but conversational tone" covers maybe 5% of the decisions that actually determine how the output reads.

The other difficulty is that the strongest preferences tend to be aversions. I didn't know the phrase "game changer" bothered me until Claude used it. These negative constraints are often the most important rules in a style guide, and they only surface when you encounter violations.

The standard approach (paste examples, describe the tone, iterate) works poorly because it relies on the user to identify and communicate preferences they haven't fully formed yet. The structured questioning method addresses this by having Claude surface the decisions for you.

Structured Multi-Round Questioning

Rather than describing what I wanted, I asked Claude to generate detailed multiple-choice questions about my preferences. I answered each with a letter and a confidence level (for example, "C, 80%").
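The answer format is simple enough to handle mechanically. As a minimal sketch (the format here is my own convention for answers like "C, 80%" or "B (75%)", not anything Claude requires), parsing it might look like:

```python
import re

def parse_answer(raw: str) -> tuple[str, int]:
    """Parse an interview answer such as 'C, 80%' or 'B (75%)'
    into a (choice, confidence_percent) pair."""
    match = re.match(r"\s*([A-Z])\s*[,(]?\s*(\d{1,3})\s*%\)?", raw.strip())
    if not match:
        raise ValueError(f"Unrecognised answer format: {raw!r}")
    return match.group(1), int(match.group(2))
```

Keeping the choice and the confidence separate matters later: the letter picks the rule, and the percentage decides how binding that rule should be.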

The process ran across four rounds, each progressively more specific.

Round 1 covered the basics: tone, voice, structure, technical depth. Should the writing be matter-of-fact, personal, or system-focused? How direct should criticism of previous approaches be? Confidence levels mattered because some answers were clear (British English spelling, 100%) while others were ambiguous (how much implementation detail to include, 55%).

Round 2 went into sentence structure, word choice, paragraph length, and tone calibration. Claude generated questions I wouldn't have thought to ask. One example: "When something is genuinely exciting or valuable, how do you express that?" with options ranging from "just state it clearly, enthusiasm comes from the value itself" to "avoid evaluative language entirely." My answer (measured positive language, 80% confidence) was a preference I'd never articulated but quickly recognised as correct.

Round 3 was specifically about hunting down irritating patterns. I asked Claude to add a large section on specific wording and phrasing traps. This round covered tolerance ratings for transitions like "What's interesting is...", "The key difference is...", and "Worth noting:..." It also asked me to flag phrases I absolutely hate from a provided list. From 13 candidates, only four turned out to be genuine no-nos: "game changer," "powerful" (as in "powerful new feature"), "deep dive," and "level up." The rest were dislikes of varying intensity.

Round 4 was the most granular. Over 60 questions about specific decisions: self-reference style ("I added" versus "I built"), how to handle parenthetical clarifications, how to close sections, whether "this" and "these" are acceptable sentence openers. Each answered with a letter and confidence percentage.

What Structured Questions Surface

The value of this method is coverage, because Claude generates questions spanning dimensions of preference that don't naturally come to mind. Some examples of preferences I discovered through the questioning:

I have a strong preference for problem-solution framing over capability language. "Previously X was limited. The new approach addresses this..." rather than "This enables X." When presented with the options, the preference was obvious to me. I would never have articulated this as a rule unprompted.

I dislike "I built..." (too casual) but "I added..." feels fine. Subtle distinctions in self-reference that are invisible until someone asks you to choose between concrete alternatives.

After four rounds of questions, Claude produced a draft style guide. I found problems with it: it had added rules I never mentioned and excluded rules I specifically required. Expect to manually edit and verify the guide before relying on it.

Beyond Writing

The same method works for any domain where preferences are high-dimensional and partially tacit.

I used it for product design decisions when developing a design language for a desktop application. Multiple rounds of questions covered visual preferences: warm versus cool off-whites, elevation and depth, icon style, motion philosophy. The process surfaced preferences like 150-200ms animations over 300-400ms, the kind of decision that's difficult to specify upfront but obvious when presented as a concrete choice.

I also used it for naming. When evaluating product names, Claude structured the evaluation criteria: negative connotations in specific contexts, conflicts with existing products, cultural associations. The structured approach caught concerns that a casual brainstorm would have missed.

The underlying principle is the same. Preferences exist before you can articulate them. Structured questions force you to confront specific decisions. Your answers (especially confidence levels) reveal the preference landscape. The AI synthesises this into something actionable, and you correct the synthesis where it goes wrong. To paraphrase Steve Jobs and Henry Ford: "People don't know what they want!"

Why Confidence Levels Matter

The confidence percentages carry valuable information. An answer at 55% confidence means the preference is weak. The guide should treat it as a soft default. An answer at 100% becomes non-negotiable.

Confidence levels also help when preferences are in tension. If problem-solution framing scores 75% confidence but varying structure naturally scores 80%, the framing preference is a tendency rather than a template. The higher-confidence rule takes priority in cases of conflict.
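This arbitration can be sketched directly. In the toy example below, the rule names and the thresholds mapping percentages to bindingness are my own illustration, not part of the guide Claude produces:

```python
def pick_rule(rules: dict[str, int]) -> str:
    """Given conflicting rules mapped to confidence percentages,
    return the one to apply: the highest confidence wins."""
    return max(rules, key=rules.get)

def strength(confidence: int) -> str:
    """Translate a confidence percentage into how binding a rule is."""
    if confidence >= 90:
        return "non-negotiable"
    if confidence >= 70:
        return "strong preference"
    return "soft default"
```

So `pick_rule({"problem-solution framing": 75, "vary structure naturally": 80})` resolves the conflict in favour of varying structure, while `strength(75)` still keeps the framing preference as a strong tendency rather than discarding it.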

Low-confidence answers from early rounds became refinement targets in later rounds, because Claude used them to decide where to probe further.

Getting Started

A few things that made this method work well:

  • Ask Claude to generate many more questions than feels necessary. The precision in my guide came from answering over 60 specific questions across four rounds.

  • Answer with confidence levels. "B (75%)" carries significantly more information than just "B."

  • Commit to multiple rounds. The early rounds establish broad preferences while the later rounds target edge cases and irritants. You should plan for at least three rounds.

  • Use real output as a test. Apply the guide and look for violations. Each violation is a refinement opportunity.

  • Be precise about aversions. The style guide's most valuable section is the "phrases to avoid" list, because those rules prevent the most jarring output.

  • Correct specific misinterpretations. When the draft guide contains rules you never stated, or drops ones you did, point out the exact discrepancy rather than regenerating the whole guide from scratch.

What's Next

I'm developing additional style guides for different contexts (an entertaining or narrative-oriented guide would require the same process with completely different answers). I'm also exploring whether this structured elicitation approach could work as a product feature. AI tools let you upload samples and describe preferences, but systematically helping users discover what they want is an underserved problem.