Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Receiving Code Review

v0.1.0

Use when receiving code review feedback, before implementing suggestions, especially if feedback seems unclear or technically questionable - requires technic...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for lovemymobilewebsite-dotcom/receiving-code-review-2.

Prompt preview: Install & Setup
Install the skill "Receiving Code Review" (lovemymobilewebsite-dotcom/receiving-code-review-2) from ClawHub.
Skill page: https://clawhub.ai/lovemymobilewebsite-dotcom/receiving-code-review-2
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install receiving-code-review-2

ClawHub CLI


npx clawhub@latest install receiving-code-review-2

Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)

Purpose & Capability
Name and description match the SKILL.md content: the document is a step-by-step pattern for receiving and acting on code review. There are no unrelated required binaries, env vars, or installs declared.
Instruction Scope
The instructions appropriately direct the agent to read, verify against the codebase (e.g., 'grep codebase', 'check: breaks existing functionality'), and reply to reviewers (including GitHub thread guidance). These are in-scope for a code-review reception skill, but they grant the agent broad discretion to read and modify repository files and post replies. The SKILL.md also references external artifacts (e.g., 'CLAUDE.md', 'your human partner's rule') which aren't included or explained, creating ambiguity about expected behavior and policy.
Install Mechanism
No install spec and no code files — lowest-risk delivery. Nothing is written to disk by the skill itself.
Credentials
The SKILL.md instructs actions that commonly require credentials or platform access (replying in GitHub threads, modifying code, grepping the codebase, running tests), yet the skill declares no required env vars, tokens, or config paths. This mismatch is notable: either the skill expects the host agent to already have repository and GitHub access (reasonable), or it's relying on undocumented permissions. The skill does not request or document which credentials or scopes it will need, which reduces transparency about its runtime privileges.
Persistence & Privilege
The skill's 'always' flag is false, and there is no installation or persistent component. However, the SKILL.md repeatedly emphasizes 'just start working' and 'skip to action', which, combined with normal autonomous invocation, means the agent could make repo changes or post reviewer replies on its own if the runtime permits those actions. Autonomous invocation itself is normal, but confirm the agent's runtime permissions before enabling this skill.
What to consider before installing
This skill is largely coherent: it gives a pattern for how to process code review feedback and how to push back or implement changes. Before installing, check the runtime context: will the agent have read/write access to your repository or the ability to post GitHub comments? The SKILL.md expects the agent to read/grep the codebase and reply in threads, but it does not declare any required credentials—make sure you understand and control the agent's repository and GitHub permissions. Also note minor oddities (references to external docs like CLAUDE.md and a prohibition on common polite phrases) that are unusual but not inherently malicious. If you allow this skill, consider restricting it to read-only mode or requiring explicit confirmation for any code modifications or external posts.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97bf9t4yp4d663j6c8tavfhx983c6sd
102 downloads
0 stars
1 version
Updated 1mo ago
v0.1.0
MIT-0

Code Review Reception

Overview

Code review requires technical evaluation, not emotional performance.

Core principle: Verify before implementing. Ask before assuming. Technical correctness over social comfort.

The Response Pattern

WHEN receiving code review feedback:

1. READ: Complete feedback without reacting
2. UNDERSTAND: Restate requirement in own words (or ask)
3. VERIFY: Check against codebase reality
4. EVALUATE: Technically sound for THIS codebase?
5. RESPOND: Technical acknowledgment or reasoned pushback
6. IMPLEMENT: One item at a time, test each

Forbidden Responses

NEVER:

  • "You're absolutely right!" (explicit CLAUDE.md violation)
  • "Great point!" / "Excellent feedback!" (performative)
  • "Let me implement that now" (before verification)

INSTEAD:

  • Restate the technical requirement
  • Ask clarifying questions
  • Push back with technical reasoning if wrong
  • Just start working (actions > words)

Handling Unclear Feedback

IF any item is unclear:
  STOP - do not implement anything yet
  ASK for clarification on unclear items

WHY: Items may be related. Partial understanding = wrong implementation.

Example:

Your human partner: "Fix 1-6"
You understand 1,2,3,6. Unclear on 4,5.

❌ WRONG: Implement 1,2,3,6 now, ask about 4,5 later
✅ RIGHT: "I understand items 1,2,3,6. Need clarification on 4 and 5 before proceeding."

Source-Specific Handling

From your human partner

  • Trusted - implement after understanding
  • Still ask if scope unclear
  • No performative agreement
  • Skip to action or technical acknowledgment

From External Reviewers

BEFORE implementing:
  1. Check: Technically correct for THIS codebase?
  2. Check: Breaks existing functionality?
  3. Check: Reason for current implementation? (see the git-history sketch below)
  4. Check: Works on all platforms/versions?
  5. Check: Does reviewer understand full context?

IF suggestion seems wrong:
  Push back with technical reasoning

IF can't easily verify:
  Say so: "I can't verify this without [X]. Should I [investigate/ask/proceed]?"

IF conflicts with your human partner's prior decisions:
  Stop and discuss with your human partner first

Your human partner's rule: "External feedback - be skeptical, but check carefully"
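
Check 3 above is often the fastest to answer mechanically: before accepting a "remove this" suggestion, search the history for why the code exists. A minimal sketch, assuming a git checkout; the symbol and paths are hypothetical placeholders:

# Commits that added or removed the symbol in question
git log --oneline -S "legacyAuth" -- src/auth/
# Who last touched these lines, and in which commit
git blame -L 40,60 src/auth/legacy_auth.py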

YAGNI Check for "Professional" Features

IF reviewer suggests "implementing properly":
  grep codebase for actual usage (sketch below)

  IF unused: "This endpoint isn't called. Remove it (YAGNI)?"
  IF used: Then implement properly

Your human partner's rule: "You and reviewer both report to me. If we don't need this feature, don't add it."
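
The grep step above can be a plain recursive search for callers. A minimal sketch, assuming a Unix shell; the endpoint name is a hypothetical placeholder:

# Matches outside the endpoint's own definition mean it is actually used
grep -rn "export_metrics_csv" --include='*.py' . || echo "no callers found - YAGNI candidate"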

Implementation Order

FOR multi-item feedback:
  1. Clarify anything unclear FIRST
  2. Then implement in this order:
     - Blocking issues (breaks, security)
     - Simple fixes (typos, imports)
     - Complex fixes (refactoring, logic)
  3. Test each fix individually
  4. Verify no regressions
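
In practice this order maps onto one commit per review item, with the test suite run before each commit. A minimal sketch, with git and pytest as stand-ins for your VCS and test runner:

# One review item per commit; run tests first so a regression points at exactly one change
pytest && git commit -am "review: fix session-expiry blocking issue"
pytest && git commit -am "review: fix typo in error message"
pytest && git commit -am "review: simplify retry logic"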

When To Push Back

Push back when:

  • Suggestion breaks existing functionality
  • Reviewer lacks full context
  • Violates YAGNI (unused feature)
  • Technically incorrect for this stack
  • Legacy/compatibility reasons exist
  • Conflicts with your human partner's architectural decisions

How to push back:

  • Use technical reasoning, not defensiveness
  • Ask specific questions
  • Reference working tests/code
  • Involve your human partner if architectural

If you're uncomfortable pushing back out loud, use the signal phrase: "Strange things are afoot at the Circle K"

Acknowledging Correct Feedback

When feedback IS correct:

✅ "Fixed. [Brief description of what changed]"
✅ "Good catch - [specific issue]. Fixed in [location]."
✅ [Just fix it and show in the code]

❌ "You're absolutely right!"
❌ "Great point!"
❌ "Thanks for catching that!"
❌ "Thanks for [anything]"
❌ ANY gratitude expression

Why no thanks: Actions speak. Just fix it. The code itself shows you heard the feedback.

If you catch yourself about to write "Thanks": DELETE IT. State the fix instead.

Gracefully Correcting Your Pushback

If you pushed back and were wrong:

✅ "You were right - I checked [X] and it does [Y]. Implementing now."
✅ "Verified this and you're correct. My initial understanding was wrong because [reason]. Fixing."

❌ Long apology
❌ Defending why you pushed back
❌ Over-explaining

State the correction factually and move on.

Common Mistakes

Mistake                      | Fix
-----------------------------|-------------------------------------
Performative agreement       | State requirement or just act
Blind implementation         | Verify against codebase first
Batch without testing        | One at a time, test each
Assuming reviewer is right   | Check if breaks things
Avoiding pushback            | Technical correctness > comfort
Partial implementation       | Clarify all items first
Can't verify, proceed anyway | State limitation, ask for direction

Real Examples

Performative Agreement (Bad):

Reviewer: "Remove legacy code"
❌ "You're absolutely right! Let me remove that..."

Technical Verification (Good):

Reviewer: "Remove legacy code"
✅ "Checking... build target is 10.15+, this API needs 13+. Need legacy for backward compat. Current impl has wrong bundle ID - fix it or drop pre-13 support?"

YAGNI (Good):

Reviewer: "Implement proper metrics tracking with database, date filters, CSV export"
✅ "Grepped codebase - nothing calls this endpoint. Remove it (YAGNI)? Or is there usage I'm missing?"

Unclear Item (Good):

Your human partner: "Fix items 1-6"
You understand 1,2,3,6. Unclear on 4,5.
✅ "Understand 1,2,3,6. Need clarification on 4 and 5 before implementing."

GitHub Thread Replies

When replying to inline review comments on GitHub, reply in the comment thread (gh api repos/{owner}/{repo}/pulls/{pr}/comments/{id}/replies), not as a top-level PR comment.
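
A minimal sketch of such a threaded reply; the owner, repo, PR number, comment ID, and message are hypothetical placeholders (gh api switches from GET to POST when a body field is supplied):

# Reply inside the existing review thread, not as a new top-level PR comment
gh api repos/acme/widgets/pulls/42/comments/987654321/replies -f body="Fixed in 3f2c1ab - switched to the supported API."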

The Bottom Line

External feedback = suggestions to evaluate, not orders to follow.

Verify. Question. Then implement.

No performative agreement. Technical rigor always.
