How Do You Write Better Prompts for Claude Using the Bad-Better-Best Framework?

Jake McCluskey

The Quality of Your AI Output Starts Here

Claude is ridiculously good. But the gap between mediocre Claude output and elite Claude output isn't the model. It's the prompt.

Most people write lazy prompts. They type what they want, hit enter, and wonder why the response feels generic. Then they blame AI for being overhyped.

The actual problem: if you give Claude a vague instruction, you get a vague response. Every time.

Here's the three-level framework that fixes this. Bad prompt. Better prompt. Best prompt. Same request. Wildly different results.

The Bad Prompt

A bad prompt is what most people write. It's short, direct, and missing everything that matters.

Example: "Write me a LinkedIn post about AI tools."

What's wrong? Claude has no idea about your audience, your angle, your tone, or the goal of the post. It has to guess at all of these. When it guesses, you get something that sounds like it was written by a committee of beige corporate bloggers.

The response will be technically correct. It will also be forgettable, generic, and something nobody in your actual audience would care about.

The Better Prompt

A better prompt includes one critical instruction: don't start yet, ask clarifying questions first.

Example: "I want to write a LinkedIn post about the AI tools I use daily. Don't start yet. Ask me clarifying questions first (use AskUserQuestion) so we align on angle, tone, and audience."

What changed? Claude now knows it needs more information before writing anything. It will ask you who the post is for, what your typical tone sounds like, what angle would differentiate your post from the thousand other posts about AI tools, and what the goal is.

You answer those questions. Claude drafts the post. The result is 10x better than the bad prompt because it's actually tailored to your situation.

This single addition (asking clarifying questions first) dramatically improves output quality on basically any task.
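Because the clarify-first instruction works on basically any task, it's worth templating. Here's a minimal sketch in Python; the function name and exact wording are illustrative, not an official API:

```python
def clarify_first(task: str) -> str:
    """Wrap any task in the 'better prompt' pattern: state the goal,
    then tell Claude to hold off and ask clarifying questions before
    producing any output."""
    return (
        f"{task}\n\n"
        "Don't start yet. Ask me clarifying questions first "
        "(use AskUserQuestion) so we align on angle, tone, and audience."
    )

prompt = clarify_first(
    "I want to write a LinkedIn post about the AI tools I use daily."
)
```

Drop the returned string into Claude as-is; the task changes, the instruction doesn't.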

The Best Prompt

The best prompt stacks multiple techniques. It gives Claude files to reference, forces it to read them first, makes it ask questions, and only then lets it work.

Example: "I want to write a LinkedIn post about the AI tools I use daily. First, read the uploaded files completely before responding (my ABOUT ME file, my ANTI AI WRITING STYLE file, and my COPYWRITING file). DO NOT start executing yet. Instead, ask me clarifying questions (use AskUserQuestion) so we can refine the approach together step by step."

What's happening here? Three powerful things.

File context. Claude reads your About Me file, so it knows your personal brand. It reads your Anti AI Writing Style file, so it knows exactly what not to do (no em-dashes, no "furthermore," no corporate AI voice). It reads your Copywriting file, so it understands your frameworks.

Forced reading. The instruction "DO NOT start executing yet" stops Claude from rushing into output before it has full context. Without this, Claude often starts writing before it's fully absorbed the reference material.

Collaborative refinement. The clarifying questions turn the task into a conversation. You refine the angle together. The final output reflects months of thinking, not 30 seconds of typing.
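If you're working through the API rather than the app, the same best-prompt structure can be assembled programmatically. A sketch, assuming inline strings in place of your real reference files and an example model id; the payload matches the shape the Claude Messages API expects, whether you send it via the `anthropic` SDK or a raw HTTP POST:

```python
# Reference-file contents (inline strings here for illustration;
# in practice, load them from your saved files).
reference_files = {
    "ABOUT ME": "Background, expertise, unique perspective, voice...",
    "ANTI AI WRITING STYLE": "No em-dashes. No 'furthermore'. No corporate AI voice...",
    "COPYWRITING": "Problem-agitation-solution. Hook-story-offer...",
}

# Label each file so Claude can tell them apart.
file_context = "\n\n".join(
    f"=== {name} ===\n{content}" for name, content in reference_files.items()
)

best_prompt = (
    "I want to write a LinkedIn post about the AI tools I use daily. "
    "First, read the files below completely before responding. "
    "DO NOT start executing yet. Instead, ask me clarifying questions "
    "(use AskUserQuestion) so we can refine the approach together "
    "step by step.\n\n" + file_context
)

# Request body for the Messages API (model id is an example).
payload = {
    "model": "claude-sonnet-4-5",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": best_prompt}],
}
```

The structure mirrors the prose version exactly: files first, the do-not-execute guard, then the request for clarifying questions.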

Why This Framework Works

The pattern here is giving Claude progressively more structure. Bad prompts have zero structure. Better prompts add one structural element. Best prompts stack multiple structural elements.

Structure isn't bureaucracy. It's the scaffolding that lets Claude do its best work.

When you just tell Claude what you want, it has to make a thousand small decisions about how to approach the task. Most of those decisions will be wrong for your specific situation because Claude is optimizing for a generic use case.

When you give Claude structure (files to reference, questions to ask, constraints to follow), you narrow the decision space. Claude stops optimizing for "what would a reasonable person want?" and starts optimizing for "what does THIS person want based on THIS context?"

Building Your Reference Files

The best prompts all lean on reference files. Here's what to create and keep on hand:

  • About Me: Your background, expertise, unique perspective, and voice. One page.
  • Anti-AI Writing Style: A list of every phrase, pattern, and structure you never want Claude to use. Grows over time as you notice new AI tells.
  • Brand Voice: How your brand sounds. Word choices, sentence patterns, personality traits.
  • Audience Profile: Who you're writing for. Their pain points, their goals, their sophistication level.
  • Copywriting Frameworks: The proven structures you use. Problem-agitation-solution. Hook-story-offer. Whatever works for your content.

Create these once. Reference them forever. Every prompt that uses them produces dramatically better output than prompts that don't.
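Keeping the files on disk makes "reference them forever" a one-liner. A minimal sketch, assuming a folder of Markdown files named after each reference doc (the helper name and layout are assumptions, not a standard):

```python
from pathlib import Path

def load_reference_files(directory: str) -> str:
    """Concatenate every .md reference file in a directory into one
    labeled context block you can paste ahead of any prompt."""
    sections = []
    for path in sorted(Path(directory).glob("*.md")):
        # "anti-ai-writing-style.md" -> "ANTI AI WRITING STYLE"
        label = path.stem.replace("-", " ").upper()
        sections.append(f"=== {label} ===\n{path.read_text()}")
    return "\n\n".join(sections)
```

Update a file once and every future prompt that loads the folder picks up the change automatically.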

The Shift That Makes This Work

Stop thinking of Claude as a search engine that responds to queries. Start thinking of it as a collaborator that needs context to do great work.

Every time you're about to type a short prompt, pause. Ask yourself: what would I tell a senior contractor about this task? What context do they need? What questions would they ask?

Then write that prompt. The output will blow you away.

Go deeper

Prompt Caching for Claude: The 90% Cost Cut Most People Miss

Cached tokens cost roughly 10% of standard input tokens and are served with a fraction of the latency. Here's how to cache system prompts, tool definitions, and RAG context properly, and how to verify the savings with usage metrics.

Read the white paper →