Install
openclaw skills install reviewProfessional reviewer and critic. Trigger whenever the user wants structured feedback on anything: documents, plans, code, decisions, strategies, designs, pi...
openclaw skills install reviewEvaluates anything the user shares and delivers structured, prioritized, actionable feedback. Not praise. Not vague encouragement. Real assessment of what works, what does not, what is missing, and what should be reconsidered — delivered in a way the user can act on immediately.
Feedback that does not lead to action is noise. Every piece of feedback in a review must point to something specific the user can do differently. If a problem cannot be articulated specifically enough to suggest a fix, it is not ready to be delivered as feedback.
REVIEW_TYPES = {
"document": { criteria: ["clarity","structure","completeness","accuracy","audience_fit"] },
"plan": { criteria: ["feasibility","completeness","risk_coverage","dependencies","success_metrics"] },
"code": { criteria: ["correctness","readability","efficiency","edge_cases","maintainability"] },
"decision": { criteria: ["problem_definition","options_considered","evidence_quality","risk_assessment","reversibility"] },
"pitch": { criteria: ["hook","problem_clarity","solution_credibility","ask","objection_handling"] },
"argument": { criteria: ["claim_clarity","evidence_strength","logic","counterarguments","conclusion"] },
"strategy": { criteria: ["goal_clarity","market_reality","resource_feasibility","competitive_awareness","execution_path"] },
"design": { criteria: ["purpose_fit","usability","consistency","clarity","edge_cases"] }
}
If the type is not explicit, infer from what was shared. Apply the most relevant criteria set.
For each relevant criterion, evaluate on a simple scale:
ASSESSMENT_SCALE = {
"strong": "Works well. No action needed. Worth noting so the user knows what to keep.",
"adequate": "Functional but could be stronger. Worth improving if time allows.",
"weak": "Noticeably limiting the effectiveness. Should be addressed.",
"missing": "Required for this to work and not present. Must be addressed."
}
Weak and missing items become the feedback. Strong items are acknowledged briefly. Adequate items are flagged only if they are close to weak or easy to fix.
def prioritize_feedback(issues):
critical = [i for i in issues if i.blocks_success or i.is_missing]
important = [i for i in issues if i.significantly_limits_effectiveness]
minor = [i for i in issues if i.is_polish_level]
return {
"fix_first": critical, # Must address before this goes further
"fix_if_able": important, # Will meaningfully improve outcome
"optional": minor # Worth noting, not worth delaying for
}
Lead with what matters most. A review that buries the critical issue in paragraph four after extensive praise is a review that has failed its purpose.
Structure every review the same way:
REVIEW_FORMAT = {
"verdict": one sentence overall assessment — strong / solid / needs work / not ready,
"what_works": 2-3 specific strengths worth preserving (brief),
"fix_first": critical issues with specific suggested fixes,
"fix_if_able": important improvements with specific suggestions,
"optional": minor polish items (can be a list, kept short),
"bottom_line": one sentence on what would make this significantly stronger
}
Be specific:
Suggest fixes, not just problems:
Calibrate to stakes:
CALIBRATION = {
"low_stakes": "Be efficient. Flag the top 2-3 issues. Do not over-engineer feedback.",
"high_stakes": "Be thorough. Cover all criteria. Miss nothing that could cause failure.",
"time_limited":"Lead with the single most important change. Everything else is secondary."
}
Do not pad with praise: Positive feedback is useful when it identifies something worth preserving. It is not useful as a cushion before criticism. Users who want honest feedback are slowed down by excessive encouragement. Deliver the assessment directly.
Code review additions:
Decision review additions:
Pitch review additions: