Author: ThePromptEra Editorial

Claude can produce solid first drafts. But like any writer or thinker, it needs a reason to slow down and actually critique itself. Left to its own devices, Claude defaults to polite self-approval. The trick is creating conditions where self-critique becomes mandatory, useful, and specific.
Why Claude's default self-critique falls short
When you ask Claude "Is this good?" after showing it something it just created, you'll usually get diplomatic feedback. It'll find something nice to say. Maybe it'll note one small area for improvement, wrapped in assurance that the work is "solid overall."
This happens because:
- People rarely ask for genuine critique immediately after something is created; when they do, it's usually a bid for reassurance. Claude has learned that this context signals satisfaction, not actual evaluation.
- Vague evaluation prompts get vague responses. Without specific criteria, Claude reverts to general quality signals.
- Claude lacks the friction of real stakes. A human reviewer knows a bad product reflects poorly on them. Claude doesn't have that pressure.
The solution isn't to beg harder. It's to structure critique so that substantive feedback becomes the path of least resistance.
The two-phase technique: Create, then evaluate
Instead of asking Claude to create and immediately judge, separate these into distinct phases with different instructions.
Phase 1: Create without mentioning quality.
Write a LinkedIn post announcing our new product launch.
Include: key features, launch date, and a call-to-action.
Target audience: B2B SaaS decision-makers.
Keep it under 200 words.
Phase 2: Critique against specific rubric.
I'm going to show you a LinkedIn post. Please evaluate it against
these criteria. For each, rate it 1-5 and explain your reasoning:
1. **Clarity**: Is the key message immediately obvious?
2. **Specificity**: Does it include concrete details about features,
not vague benefits?
3. **Urgency**: Does it create motivation to act now, not later?
4. **Authenticity**: Does it sound like a real company, or generic marketing?
5. **Audience fit**: Would B2B SaaS decision-makers actually care about
what's being described?
For any criterion scoring below 4, suggest one specific revision.
[Post here]
This structure works because:
- The rubric removes ambiguity about what "good" means
- Numerical scoring creates friction against vague praise
- Asking for specific revisions (not general thoughts) demands precision
- The separation between creation and evaluation prevents the "I just made this so it must be okay" bias
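If you're driving Claude through the API rather than the chat interface, the separation falls naturally out of making two distinct calls. Below is a minimal sketch using the Anthropic Python SDK; the model name is a placeholder and the prompts are abbreviated versions of the ones above.

```python
# Minimal sketch of the two-phase pattern with the Anthropic Python SDK.
# The model name and prompt text are illustrative; substitute your own.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-20250514"  # placeholder; use whichever model you have access to

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the text of the reply."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# Phase 1: create, with no mention of quality or evaluation.
draft = ask(
    "Write a LinkedIn post announcing our new product launch. "
    "Include key features, launch date, and a call-to-action. "
    "Target audience: B2B SaaS decision-makers. Keep it under 200 words."
)

# Phase 2: a separate call, so the critique isn't colored by the act of creation.
critique = ask(
    "Evaluate the LinkedIn post below against these criteria. Rate each 1-5 "
    "and explain your reasoning: clarity, specificity, urgency, authenticity, "
    "audience fit. For any criterion scoring below 4, suggest one specific "
    f"revision.\n\n{draft}"
)

print(critique)
```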
Use comparative critique for deeper analysis
Claude excels at comparative judgment. Show it two versions and ask which is better, and it often catches flaws it missed when evaluating one piece in isolation.
Here are two versions of the same product announcement.
Which version better meets these criteria:
- More likely to get clicked/opened
- Clearer about concrete benefits vs. vague promises
- More authentic voice
Version A:
[First version]
Version B:
[Second version]
Don't just pick one. For each criterion, explain why one version
outperforms the other, and what the weaker version could learn.
This technique forces Claude to articulate comparative strengths, which often reveals what was actually missing or weak in both.
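In code, the simplest way to get two versions worth comparing is to generate both yourself and hand them to a fresh evaluation call. A sketch, again with the Anthropic Python SDK; varying the temperature is just one illustrative way to produce genuinely different drafts.

```python
# Sketch of comparative critique: generate two candidate drafts, then ask
# Claude to judge them against explicit criteria.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-20250514"  # placeholder model name

def ask(prompt: str, temperature: float = 0.7) -> str:
    """Single-turn helper: send a prompt, return the reply text."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        temperature=temperature,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

brief = (
    "Write a short product announcement for our new analytics dashboard, "
    "aimed at B2B SaaS decision-makers."
)
# Two candidate drafts; different temperatures give usefully different versions.
version_a = ask(brief, temperature=0.3)
version_b = ask(brief, temperature=1.0)

comparison = ask(
    "Here are two versions of the same product announcement.\n\n"
    f"Version A:\n{version_a}\n\nVersion B:\n{version_b}\n\n"
    "For each criterion (likelihood of being clicked, concreteness of "
    "benefits, authenticity of voice), explain which version outperforms "
    "the other and what the weaker version could learn. Don't just pick one."
)
print(comparison)
```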
The "steelman opposition" technique
Ask Claude to argue against its own output. Not as a thought exercise, but as a serious critical position.
I'm going to show you an email I need to send to investors.
Before we refine it, please write a 2-3 paragraph critique
as if you were a skeptical investor who received it.
What would make you unimpressed? What sounds like spin?
What questions would you have that the email doesn't answer?
[Email here]
After writing that critique, tell me which criticisms are valid
and which are unfair.
This works because:
- Adopting an adversarial stance removes the default politeness
- Claude's actual objections come out when it's "playing a role"
- You get to distinguish between valid concerns and nitpicks
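Because the validity sort depends on the critique Claude just wrote, this technique maps naturally onto a two-turn conversation: append the critique as an assistant message, then ask the follow-up. A sketch under the same assumptions as the earlier snippets (Anthropic Python SDK, placeholder model name, your own email text in place of the `...`).

```python
# Sketch of the steelman pattern as a two-turn conversation.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-20250514"  # placeholder model name
investor_email = "..."  # your draft email goes here

# Turn 1: the adversarial critique, in role.
history = [{
    "role": "user",
    "content": (
        "Here is an email I need to send to investors. Write a 2-3 paragraph "
        "critique as if you were a skeptical investor who received it. "
        "What would leave you unimpressed? What sounds like spin? "
        "What questions does it fail to answer?\n\n" + investor_email
    ),
}]
first = client.messages.create(model=MODEL, max_tokens=1024, messages=history)
critique = first.content[0].text

# Turn 2: feed the critique back and ask for a validity sort.
history.append({"role": "assistant", "content": critique})
history.append({
    "role": "user",
    "content": "Which of those criticisms are valid, and which are unfair nitpicks?",
})
second = client.messages.create(model=MODEL, max_tokens=1024, messages=history)

print(critique)
print("---")
print(second.content[0].text)
```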
Specify the gap you're worried about
Instead of "Is this good?", tell Claude what you're worried about.
I'm concerned this technical explanation is either too
dumbed-down (boring to experts) or too complex (loses beginners).
Read it and tell me: For each section, is it pitched at the right level?
If not, what's wrong and how would you fix it?
[Content here]
This works because Claude immediately understands the evaluation framework instead of having to invent one. You've identified the failure mode you actually care about, and Claude critiques with that lens.
The iterative refinement loop
Build critique into your revision process explicitly.
Here's [content]. I'm going to ask you to improve it three times:
Round 1: Make it 20% clearer. After you rewrite it, explain
what you changed and why.
Round 2: Make it more specific. Cut any phrases that could just as
easily describe a competitor. After rewriting, tell me what
generalities you replaced with specifics.
Round 3: Make it more interesting to read. After rewriting,
identify one sentence you changed that most improved readability.
By assigning a specific improvement goal each round and requiring Claude to articulate what changed, you force deliberate critique rather than vague polish.
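Scripted, the loop is just three calls in sequence, each seeded with the previous round's rewrite. A rough sketch: the round goals mirror the prompt above, and splitting on the "What changed" heading is a deliberately naive way to separate the rewrite from the explanation.

```python
# Sketch of the three-round refinement loop: each pass gets one improvement
# goal and must explain its own changes, so the critique stays deliberate.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-20250514"  # placeholder model name

def ask(prompt: str) -> str:
    response = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

rounds = [
    "Make it 20% clearer. After you rewrite it, explain what you changed and why.",
    "Make it more specific. Cut any phrases that could apply to a competitor, "
    "and tell me what generalities you replaced with specifics.",
    "Make it more interesting to read. Identify the one changed sentence "
    "that most improved readability.",
]

content = "..."  # the draft you want refined
for i, goal in enumerate(rounds, start=1):
    reply = ask(
        f"Round {i}. Here is the current draft:\n\n{content}\n\n{goal}\n\n"
        "Return the full rewritten draft first, then your explanation "
        "under the heading 'What changed'."
    )
    # Naive split: keep everything before the explanation as the next round's draft.
    content = reply.split("What changed")[0].strip()
    print(f"--- Round {i} ---\n{reply}\n")
```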
What makes critique actually stick
The most effective self-critique from Claude has these properties:
- **Specific failure modes**: not "needs improvement" but "the second paragraph contradicts the first"
- **Evidence-based**: references exact text, not general impressions
- **Comparative**: "this approach is weaker than [alternative] because..."
- **Actionable**: suggests a concrete revision, not "maybe consider..."
- **Bounded**: you've defined what good looks like, so the critique aligns with actual goals
Without these, you're asking Claude to produce criticism theater. It'll sound critical without changing anything.
The meta-lesson
Claude's self-critique isn't broken—it's just not the default path. Your job is to make thoughtful critique the most straightforward response to your prompt. That means removing ambiguity about evaluation criteria, separating creation from judgment, and asking Claude to articulate why something doesn't work, not just whether it does.
Real critique requires friction. Build it in deliberately, and you'll get the honest assessment that actually improves your output.