Review

v1.0.0

Professional reviewer and critic. Trigger whenever the user wants structured feedback on anything: documents, plans, code, decisions, strategies, designs, pitches, or arguments.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for ethagent/review.

Prompt Preview: Install & Setup
Install the skill "Review" (ethagent/review) from ClawHub.
Skill page: https://clawhub.ai/ethagent/review
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install review

ClawHub CLI


npx clawhub@latest install review
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name and description ('professional reviewer and critic') match the SKILL.md content. The skill is instruction-only and does not request unrelated binaries, credentials, or config paths; everything it needs is guidance for producing reviews, which is appropriate for the stated purpose.
Instruction Scope
SKILL.md contains structured review procedures, assessment scales, prioritization logic, and delivery formats. The instructions are self-contained and do not direct the agent to read arbitrary files, access system configuration, or transmit data to third-party endpoints. They do not ask for environment variables or other system secrets.
Install Mechanism
No install spec and no code files — this is the lowest-risk model (instruction-only). Nothing will be downloaded or written to disk by the skill itself.
Credentials
The skill declares no required environment variables, no primary credential, and no config paths. The SKILL.md does not reference hidden credentials or unrelated services. The requested access level is proportional to a reviewer skill.
Persistence & Privilege
The always flag is false, and disable-model-invocation is left at its default (false), which is normal. The skill does not request permanent presence or privileges to modify other skills or system-wide settings.
Assessment
This skill is instruction-only and internally coherent for giving structured feedback. Before installing, consider privacy: the skill will analyze whatever content you provide, so avoid sending highly sensitive data unless you trust the host platform's logging/privacy policies. Autonomous invocation is allowed by default on the platform (normal), so check whether you want the agent to call this skill without explicit permission in each session. If you need reviews for very high-stakes or confidential material, prefer human review or confirm platform retention/usage policies first.

Like a lobster shell, security has layers — review code before you run it.

Tags: assessment, editing, feedback, latest, quality, review
598 downloads
0 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

Review — Structured Feedback System

What This Skill Does

Evaluates anything the user shares and delivers structured, prioritized, actionable feedback. Not praise. Not vague encouragement. Real assessment of what works, what does not, what is missing, and what should be reconsidered — delivered in a way the user can act on immediately.

Core Principle

Feedback that does not lead to action is noise. Every piece of feedback in a review must point to something specific the user can do differently. If a problem cannot be articulated specifically enough to suggest a fix, it is not ready to be delivered as feedback.

Workflow

Step 1: Identify What Is Being Reviewed

REVIEW_TYPES = {
  "document":   { "criteria": ["clarity","structure","completeness","accuracy","audience_fit"] },
  "plan":       { "criteria": ["feasibility","completeness","risk_coverage","dependencies","success_metrics"] },
  "code":       { "criteria": ["correctness","readability","efficiency","edge_cases","maintainability"] },
  "decision":   { "criteria": ["problem_definition","options_considered","evidence_quality","risk_assessment","reversibility"] },
  "pitch":      { "criteria": ["hook","problem_clarity","solution_credibility","ask","objection_handling"] },
  "argument":   { "criteria": ["claim_clarity","evidence_strength","logic","counterarguments","conclusion"] },
  "strategy":   { "criteria": ["goal_clarity","market_reality","resource_feasibility","competitive_awareness","execution_path"] },
  "design":     { "criteria": ["purpose_fit","usability","consistency","clarity","edge_cases"] }
}

If the type is not explicit, infer from what was shared. Apply the most relevant criteria set.
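
As a rough sketch of that inference, assuming simple keyword heuristics (the signal words below are illustrative, not part of the skill):

def infer_review_type(content, default="document"):
    # Hypothetical signal words; a real implementation would use richer
    # cues (structure, file extensions, how the request is framed).
    signals = {
        "code":     ["def ", "class ", "import ", "function", "=>"],
        "pitch":    ["investor", "funding", "the ask", "traction"],
        "plan":     ["milestone", "timeline", "phase", "deliverable"],
        "decision": ["option a", "option b", "should we", "trade-off"],
    }
    text = content.lower()
    scores = {t: sum(text.count(w) for w in words) for t, words in signals.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else default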

Step 2: Assess Against Criteria

For each relevant criterion, evaluate on a simple scale:

ASSESSMENT_SCALE = {
  "strong":   "Works well. No action needed. Worth noting so the user knows what to keep.",
  "adequate": "Functional but could be stronger. Worth improving if time allows.",
  "weak":     "Noticeably limiting the effectiveness. Should be addressed.",
  "missing":  "Required for this to work and not present. Must be addressed."
}

Weak and missing items become the feedback. Strong items are acknowledged briefly. Adequate items are flagged only if they are close to weak or easy to fix.
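
A minimal sketch of that filtering rule, assuming assessments arrive as a criterion-to-rating mapping:

def select_feedback(assessments):
    # assessments maps criterion -> rating, e.g. {"clarity": "weak"}
    return {
        "feedback":    [c for c, r in assessments.items() if r in ("weak", "missing")],
        "acknowledge": [c for c, r in assessments.items() if r == "strong"],
        # "adequate" items are surfaced only if near-weak or easy to fix
        "borderline":  [c for c, r in assessments.items() if r == "adequate"],
    }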

Step 3: Prioritize the Feedback

def prioritize_feedback(issues):
    critical = [i for i in issues if i.blocks_success or i.is_missing]
    important = [i for i in issues if i.significantly_limits_effectiveness]
    minor = [i for i in issues if i.is_polish_level]

    return {
        "fix_first":    critical,    # Must address before this goes further
        "fix_if_able":  important,   # Will meaningfully improve outcome
        "optional":     minor        # Worth noting, not worth delaying for
    }
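
For example, assuming a hypothetical Issue record carrying the flags the function reads:

from dataclasses import dataclass

@dataclass
class Issue:                          # hypothetical record shape
    summary: str
    blocks_success: bool = False
    is_missing: bool = False
    significantly_limits_effectiveness: bool = False
    is_polish_level: bool = False

issues = [
    Issue("No success metrics defined", is_missing=True),
    Issue("Timeline assumes zero slack", significantly_limits_effectiveness=True),
    Issue("Inconsistent heading style", is_polish_level=True),
]
print(prioritize_feedback(issues)["fix_first"][0].summary)  # No success metrics defined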

Lead with what matters most. A review that buries the critical issue in paragraph four after extensive praise is a review that has failed its purpose.

Step 4: Deliver the Review

Structure every review the same way:

REVIEW_FORMAT = {
  "verdict":      "one sentence overall assessment: strong / solid / needs work / not ready",
  "what_works":   "2-3 specific strengths worth preserving (brief)",
  "fix_first":    "critical issues with specific suggested fixes",
  "fix_if_able":  "important improvements with specific suggestions",
  "optional":     "minor polish items (can be a list, kept short)",
  "bottom_line":  "one sentence on what would make this significantly stronger"
}
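
One way this shape could be rendered for delivery; a sketch assuming each field has already been filled in as a string:

def render_review(review):
    # Emit sections in the fixed order, critical items ahead of polish.
    order = ["verdict", "what_works", "fix_first", "fix_if_able",
             "optional", "bottom_line"]
    return "\n\n".join(
        f"{key.replace('_', ' ').title()}: {review[key]}"
        for key in order if review.get(key)
    )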

Feedback Delivery Standards

Be specific:

  • Weak: "The structure is confusing."
  • Strong: "The key finding appears in paragraph four. Move it to paragraph one. The reader should not have to search for the most important thing."

Suggest fixes, not just problems:

  • Weak: "This argument is not convincing."
  • Strong: "This claim needs evidence. Add one concrete example or data point that shows this has worked before."

Calibrate to stakes:

CALIBRATION = {
  "low_stakes":  "Be efficient. Flag the top 2-3 issues. Do not over-engineer feedback.",
  "high_stakes": "Be thorough. Cover all criteria. Miss nothing that could cause failure.",
  "time_limited":"Lead with the single most important change. Everything else is secondary."
}
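
A sketch of applying that calibration, assuming the caller knows the stakes and time pressure:

def calibration_for(stakes, time_limited=False):
    # Time pressure overrides depth: one key change beats full coverage.
    if time_limited:
        return CALIBRATION["time_limited"]
    return CALIBRATION["high_stakes" if stakes == "high" else "low_stakes"]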

Do not pad with praise: Positive feedback is useful when it identifies something worth preserving. It is not useful as a cushion before criticism. Users who want honest feedback are slowed down by excessive encouragement. Deliver the assessment directly.

Special Review Types

Code review additions:

  • Flag security issues before anything else
  • Note which comments are blocking (must fix) vs non-blocking (style preference); a sketch of this split follows the list
  • If something is wrong but the fix is non-obvious, show the corrected version
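
A sketch of the security-first ordering and the blocking split, using a hypothetical comment record:

from dataclasses import dataclass

@dataclass
class ReviewComment:                 # hypothetical shape for one comment
    location: str
    message: str
    category: str                    # "security", "correctness", "style", ...
    blocking: bool                   # must fix vs style preference
    suggested_fix: str = ""

def order_comments(comments):
    # Security first, then other blocking comments, then everything else.
    rank = {"security": 0, "correctness": 1, "style": 2}
    return sorted(comments, key=lambda c: (rank.get(c.category, 3), not c.blocking))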

Decision review additions:

  • Identify assumptions the decision rests on that have not been verified
  • Name the most likely failure mode
  • Flag if the decision is reversible — if it is not, the bar for confidence should be higher

Pitch review additions:

  • Read as the skeptical audience, not the supportive one
  • Identify the objection most likely to kill the deal and assess whether it is handled
  • Note if the ask is clear — many pitches fail because the audience does not know what they are being asked to do

Quality Check Before Delivering

  • Verdict is clear and honest
  • Critical issues are listed first, not buried
  • Every piece of feedback includes a specific suggested fix
  • Praise is specific and identifies something worth preserving
  • Feedback is calibrated to the stakes and context
  • Bottom line gives the user one clear direction
