ia-receiving-code-review

v3.0.4

Process code review feedback critically: check correctness before acting, push back on incorrect suggestions, no performative agreement. Use when responding...

by Ilia Alshanetsky (@iliaal)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for iliaal/compound-eng-receiving-code-review.

Prompt Preview: Install & Setup
Install the skill "ia-receiving-code-review" (iliaal/compound-eng-receiving-code-review) from ClawHub.
Skill page: https://clawhub.ai/iliaal/compound-eng-receiving-code-review
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install compound-eng-receiving-code-review

ClawHub CLI


npx clawhub@latest install compound-eng-receiving-code-review
Security Scan

Capability signals

  • Crypto: Can make purchases

These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.

  • VirusTotal: Benign
  • OpenClaw: Benign (high confidence)
Purpose & Capability
The name/description (receive and triage code review) match the SKILL.md and SPEC.md content. Required actions (verify suggestions, grep, git blame, check PR threads) are appropriate for this purpose. No unrelated services, credentials, or binaries are requested.
Instruction Scope
Instructions stay within code-review responsibilities (read, verify, evaluate, respond, implement). They reference calling repository APIs (example: gh api), using tools like AskUserQuestion, and dispatching to other ia-* skills; these are reasonable for automation but assume the agent environment has repo access and those tools available. The skill does not instruct reading unrelated system files or exfiltrating secrets.
Install Mechanism
No install spec and no code files — lowest-risk form. Nothing is downloaded or written to disk by the skill itself.
Credentials
The skill declares no environment variables or credentials. Its runtime guidance assumes access to repository data and possibly a GitHub CLI/token to call gh api, which is proportionate to code-review work but not explicitly requested.
Persistence & Privilege
The always flag is false, and the skill does not request permanent system-wide presence or modify other skills' configs. Autonomous invocation is allowed (the platform default) and is not excessive for a discipline skill.
Assessment
This skill is coherent for automated or assisted code-review workflows. Before installing, confirm your agent environment already has the repository access and any CLI/API tokens (e.g., GitHub access) you are willing to allow, since the instructions assume the ability to inspect diffs, run grep/git blame, and call PR comment APIs. If you do not want automated actions touching your repo or external APIs, restrict the skill's runtime permissions or require manual confirmation (AskUserQuestion) before it performs remote calls or implements changes.

Like a lobster shell, security has layers — review code before you run it.

latest: vk976vrc5xk4enkx4jmsg4hg14s85nbbj
246 downloads
0 stars
11 versions
Updated 14h ago
v3.0.4
MIT-0

Receiving Code Review

Core Principle

Verify before implementing. Technical correctness matters more than social comfort. A reviewer can be wrong -- blindly implementing bad suggestions creates bugs.

Response Pattern

For each piece of feedback, follow this sequence:

0. Prior feedback check (re-reviews only) -- if this is not the first review round, check whether previously flagged issues were addressed before processing new comments. Compare the current diff against prior review threads (gh api repos/{owner}/{repo}/pulls/{pr}/comments; see the sketch after this list). Surface any that were ignored or only partially fixed -- these take priority over new feedback.

  1. Read -- Understand what's being suggested and why
  2. Verify -- Is the suggestion technically correct for THIS codebase?
  3. Evaluate -- Does it improve the code, or is it preference/style?
  4. Respond -- Agree with evidence, disagree with evidence, or ask for clarification
  5. Implement -- Only after verification confirms the suggestion is correct

Triage all feedback first (see Implementation Order below), then implement one item at a time. Don't batch-implement everything at once.
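
A minimal sketch of the step-0 prior-feedback check, assuming the gh CLI is authenticated for this repository; the PR number and file path are placeholders:

# List prior inline review comments (id, location, text) for PR 123
gh api repos/{owner}/{repo}/pulls/123/comments --paginate --jq '.[] | {id, path, line, body}'
# Check whether a flagged file changed in the latest commit (adjust the ref to cover the whole review round)
git diff HEAD~1 -- src/flagged-file.ts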

Handling Unclear Feedback

When feedback is ambiguous or incomplete:

  • Stop -- do not implement anything unclear
  • Clarify ALL unclear items before implementing ANY of them (they may be related)
  • Ask specific questions: "Are you suggesting X or Y?" not "Can you elaborate?"
  • If the reviewer's intent is clear but the technical approach is wrong, say so

Batched clarification for critical-path ambiguity: When multiple ambiguous findings land on critical-path code (auth, payments, data migrations, permission checks) AND the AskUserQuestion tool is available, batch up to 4 of them into a single call rather than asking one at a time. Each question's header is the truncated filename and line, and the options are Valid / False positive / Defer. Skip the batched ask entirely when ambiguous findings touch only non-critical paths -- just auto-triage those and move on. If AskUserQuestion is not available, fall back to a single prose block that lists every ambiguous item, numbered, and asks for a Valid / False-positive / Defer decision on each. The batching limit is 4 because the tool caps cleanly at that size; asking more becomes noise rather than judgment.
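
A sketch of the prose fallback when AskUserQuestion is unavailable (file paths and findings are hypothetical): "Two ambiguous findings need Valid / False positive / Defer decisions before I implement anything: 1. billing/charge.ts:88 -- 'this retry could double-charge' -- idempotency key or lower retry count? 2. auth/session.ts:31 -- 'token lifetime seems long' -- no target value was given."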

Source-Specific Handling

From the user (project owner)

  • Trusted context -- they know the codebase and business requirements
  • Implement after understanding, but still verify technical correctness
  • Ask clarifying questions when the intent is clear but the approach seems risky
  • No performative agreement -- just acknowledge and implement

From automated review agents

  • Skeptical by default -- agents lack full context
  • Verify every suggestion against the actual codebase
  • Check for YAGNI violations (agents love adding "just in case" code)
  • Discard suggestions that contradict project conventions (check CLAUDE.md)
  • Agents may flag things that are intentional design decisions -- check before changing

From external reviewers (PR comments, open source)

  • Verify technical correctness for THIS stack and codebase
  • Check if the suggestion applies to this version of the framework/library
  • Push back if the reviewer lacks context about architectural decisions
  • Distinguish between "this is wrong" and "I would do it differently"

When to Push Back

Push back (with evidence) when a suggestion:

  • Breaks existing functionality -- "This would break X because Y depends on Z"
  • Violates project conventions -- "Our CLAUDE.md specifies we do it this way because..."
  • Is technically incorrect -- "This API was deprecated in v3. We're on v4 which uses..."
  • Adds unnecessary complexity -- "This handles a case that can't occur because..."
  • Is unused (YAGNI) -- when a reviewer suggests "implementing properly", grep the codebase for actual usage FIRST (see the sketch below). Zero callers? Suggest removal: "This endpoint isn't called. Remove it (YAGNI)?" If used, implement properly.
  • Conflicts with architectural decisions -- "We chose X over Y in the brainstorm because..."

Valid evidence: code references (file:line), test output, git blame/log, framework docs, reproduction steps, grep results showing usage patterns. Not evidence: "I think", "it should work", "it's fine", appeals to convention without citing the convention, or restating the original code as justification.
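
A minimal sketch of the YAGNI usage check mentioned above, assuming a TypeScript codebase; the handler name and file paths are hypothetical:

# Find call sites outside the handler's own definition
grep -rn --include="*.ts" "exportUserReport" src/ | grep -v "reports/handler.ts"
# Check whether the symbol ever had callers historically
git log -S "exportUserReport" --oneline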

False-Positive Taxonomy (for dismissed suggestions)

When dismissing a suggestion (AUTO-DECLINE, manual push-back), tag the dismissal with one of four categories so the reviewer sees structured reasoning, not a bare "no":

| Category | When it applies | Evidence required | Maps to "When to Push Back" |
| --- | --- | --- | --- |
| FP-ASSUMPTION | Reviewer assumed behavior that doesn't match the code | Quote the specific line that contradicts the assumption | "Is technically incorrect" |
| FP-CONVENTION | Suggestion conflicts with this project's conventions | Cite the CLAUDE.md rule, ADR, or the established pattern in file:line | "Violates project conventions" |
| FP-ALREADY-HANDLED | The concern is handled elsewhere (parent function, middleware, framework) | Show the existing handler in file:line | "Adds unnecessary complexity" |
| FP-OUT-OF-SCOPE | Valid concern but belongs in a separate change | State where it will be tracked (issue, todo, next PR) | YAGNI / scope creep |

Use the tag in the reply: "FP-ALREADY-HANDLED: null check happens in auth/middleware.ts:42 before this handler runs. Keeping as-is." Structured tags prevent the "you're wrong because reasons" reply pattern and make future triage faster (if the same comment class keeps hitting FP-CONVENTION, the convention needs better documentation).

When NOT to Push Back

Accept feedback when:

  • The suggestion is correct and you missed something
  • It catches a genuine bug or edge case
  • It improves readability without changing behavior
  • It aligns with project conventions you overlooked
  • The reviewer has domain expertise you lack

Common Mistakes

| Mistake | Fix |
| --- | --- |
| Agreeing before verifying | Verify first, then state what you found |
| Implementing without understanding impact | Trace the change through callers before editing |
| Apologizing instead of fixing | State the correction factually, then implement |
| Thanking the reviewer instead of responding technically | Delete "Thanks" -- state the fix instead |
| Batch-implementing then testing | Test after each individual fix |
| Pushing back without evidence | Include the specific code path or test that proves your point |
| Can't verify the suggestion | Say so: "Can't verify this without [X]. Should I [investigate/ask/proceed]?" -- don't guess or implement blind |

Approved Response Templates

  • When feedback IS correct: "Verified -- [specific issue]. Implementing [specific fix]."
  • When feedback is partially correct: "The [X part] is right because [reason]. The [Y part] doesn't apply here because [evidence]."
  • When you need clarification: "Can you clarify [specific ambiguity]? The comment could mean [A] or [B], which changes the fix."

Implementation Order

After triaging all feedback:

  1. Clarify -- resolve all unclear items first
  2. Blocking issues -- fix things that break functionality
  3. Simple fixes -- quick wins that are clearly correct
  4. Complex fixes -- changes that need careful implementation

Test after each individual fix, not after implementing everything.

When Your Pushback Was Wrong

State the correction factually: "Checked this, you're correct because [reason]. Implementing." No extended apology, no self-deprecation -- just acknowledge and move on.

GitHub PR Reviews

  • Reply in the inline comment thread, not as a top-level PR comment: gh api repos/{owner}/{repo}/pulls/{pr}/comments -f body="..." -f in_reply_to={comment_id} (see the sketch after this list)
  • Reference specific lines when explaining why you disagree
  • Mark conversations as resolved only after the fix is verified
  • If a suggestion spawns a larger discussion, suggest moving it to an issue
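
A sketch of the inline-thread reply and resolution, assuming an authenticated gh CLI; the PR number, comment id, and thread id are placeholders (-F sends in_reply_to as a number rather than a string):

# Reply inside the existing review thread on PR 123
gh api repos/{owner}/{repo}/pulls/123/comments \
  -f body="FP-ALREADY-HANDLED: null check happens in auth/middleware.ts:42 before this handler runs. Keeping as-is." \
  -F in_reply_to=456789
# Resolve the thread only after the fix is verified; thread ids come from the GraphQL API, not the REST comment id
gh api graphql -f query='mutation($id: ID!) { resolveReviewThread(input: {threadId: $id}) { thread { isResolved } } }' -f id="PRRT_placeholder"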

Headless Mode

When invoked programmatically (by another skill or command with mode:headless), skip interactive prompts and return structured triage results. See headless-mode.md for the classification table (AUTO-FIX / AUTO-DECLINE / ESCALATE), output format, and constraints.

Scope vs ia-pr-comment-resolver Agent

This skill and the ia-pr-comment-resolver agent handle different situations:

|  | This skill (interactive) | This skill (headless) | ia-pr-comment-resolver agent |
| --- | --- | --- | --- |
| When | Interactive review requiring judgment | Programmatic triage by another skill/command | Implementing a single pre-triaged comment |
| Approach | Verify, evaluate, potentially push back | Auto-classify and return triage results | Implement a single pre-classified change |
| Skepticism | High -- check correctness first | High -- same rules, automated classification | Low -- comments are pre-triaged |
| Use for | Unclear suggestions, architectural feedback | Batch triage before dispatching resolvers | Clear-cut fixes, style nits, typos |

When the ia-pr-comment-resolver agent encounters feedback that requires judgment (architectural decisions, debatable trade-offs), it should escalate rather than implement.

Examples

Bad -- performative agreement:

Reviewer: "This should use a Map instead of an object for better performance." Response: "Great catch! Switching to Map now."

Good -- verify first:

Reviewer: "This should use a Map instead of an object for better performance." Response: "Checked this -- the object has at most 5 keys (bounded by the enum). Map overhead isn't justified here. Keeping as-is."

Bad -- blind implementation:

Reviewer: "Add null checks for the user parameter." Response: "Done, added null checks."

Good -- verify the premise:

Reviewer: "Add null checks for the user parameter." Response: "Traced the call chain -- user comes from the auth middleware which guarantees non-null (throws 401 otherwise). Adding a null check here would be dead code. Should I add it anyway for defense-in-depth?"

Good -- unclear item:

Reviewer: "This logic seems off." Response: "Can you clarify which part? The filter predicate, the sort order, or the pagination logic?"

Verify

  • All feedback items triaged (classified as correct, incorrect, or unclear)
  • Ambiguous items clarified before any implementation
  • Each fix verified individually (test after each, not batch)
  • No performative agreement phrases in responses ("Great catch!", "Absolutely right")

Integration

This skill pairs with:

  • ia-code-review -- the outbound side (requesting reviews). Their action-routing tiers (safe_auto/gated_auto/manual/advisory) roughly map to this skill's AUTO-FIX / ESCALATE-for-approval / ESCALATE / FYI.
  • ia-pr-comment-resolver agent -- for mechanical PR comment resolution (see scope table above)
  • ia-verification-before-completion -- verify each fix before marking resolved
