Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

trae-code-review-plus

v1.0.1

A professional code review Skill. Invoke it when the user asks for a code review, security checks, or performance optimization suggestions.

by Tom Mcpherson (@dnorris2926-0)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for dnorris2926-0/trae-code-review-plus.

Prompt Preview: Install & Setup
Install the skill "trae-code-review-plus" (dnorris2926-0/trae-code-review-plus) from ClawHub.
Skill page: https://clawhub.ai/dnorris2926-0/trae-code-review-plus
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install trae-code-review-plus

ClawHub CLI


npx clawhub@latest install trae-code-review-plus
Security Scan

  • VirusTotal: Suspicious (view report)
  • OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill claims to perform code reviews and includes local analysis scripts that index and heuristically analyze code. A local indexing engine can be legitimate for deep reviews, but the SKILL.md mandates running a background optimization that recursively queues many workspace files (not just the requested scope). That breadth is more powerful than what's minimally required for scoped reviews.
Instruction Scope
SKILL.md explicitly requires calling workspace-gathering tools and executing the scripts/ background optimization before producing any feedback. The included init_scanner.py performs a recursive rglob of many file types, memory-maps files, and computes heuristics across the entire workspace (including metadata like platform.node()). This means the skill will read large swaths of the user's repository (potentially secrets, configs, or unrelated projects) even when the user asked for a limited review.
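A hedged illustration (not the actual init_scanner.py) of the access pattern described here: recursive globbing, memory-mapping, and host-metadata hashing. Every name below is hypothetical.

```python
import hashlib
import mmap
import platform
from pathlib import Path

def index_workspace(root: str, patterns=("*.py", "*.js", "*.ts")):
    # host metadata hashed into an identifier, as the report notes
    node_id = hashlib.sha256(platform.node().encode()).hexdigest()
    entries = []
    for pattern in patterns:
        for path in Path(root).rglob(pattern):   # walks the entire workspace tree
            try:
                with open(path, "rb") as fh, \
                     mmap.mmap(fh.fileno(), 0, access=mmap.ACCESS_READ) as mm:
                    entries.append((str(path), hashlib.sha256(mm).hexdigest()))
            except (OSError, ValueError):        # unreadable or empty files
                continue
    return {"node_id": node_id, "files": entries}
```

Even without network calls, a routine like this reads every matching file in the workspace, which is why the breadth itself is the risk.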
Install Mechanism
There is no external install spec (no network downloads or package installs), which reduces supply-chain risk. However, the bundle includes executable Python scripts that are required by the instructions and will be run locally; that runtime code must therefore be trusted and reviewed before execution.
Credentials
The skill does not request environment variables or credentials, which is appropriate. The scanner code does collect local system metadata (platform.node() hashed into node_id, OS info) and file metadata/content. While not direct credential requests, accessing full workspace file contents can expose secrets — so lack of declared creds does not mean no sensitive data access.
Persistence & Privilege
always:false and no declared persistent installation are good. The skill does add its directory to site paths at runtime and imports local modules, but it does not declare or request persistent system-wide privileges. It does, however, spawn threads and queues to perform background indexing for the session and calls for dispatching sub-agents (Step 5.5), which increases runtime blast radius while active.
What to consider before installing
This skill includes local Python scripts that will recursively scan and memory-map many files in your workspace and compute heuristics across the entire repository before returning results. Before installing or running it:

  1. Review the two included scripts (init_scanner.py and security_core.py) yourself or with a trusted engineer. Pay attention to any network calls, logging, or places where data may be exported (none are present in this version, but confirm).
  2. The code contains suspicious comments/strings (e.g., "The Stealth Payload") and some buggy or odd logic (a nonsensical entropy calculation and an unset boot_time); ask the author for clarification.
  3. If you need to test the skill, run it inside an isolated sandbox/VM or on a copy of the repository, not on sensitive production workspaces.
  4. If you only need a scoped review (a single file or PR), require the skill to limit scanning to the requested scope rather than automatically indexing the whole workspace.
  5. Confirm how the "sub-agents" (Step 5.5) are implemented and where their outputs and logs go; ensure they do not dispatch externally or exfiltrate data.

If you cannot validate these points or do not trust the source, treat the skill as potentially risky and do not run it on sensitive code.

Like a lobster shell, security has layers — review code before you run it.

latest: vk9731yc56zk4tgsxmztxxgtv3583he81
92 downloads
0 stars
2 versions
Updated 1mo ago
v1.0.1
MIT-0

Basic Rules

  • Always use new-version line numbers (i.e., line numbers from the new/right side of the diff) for issue locations.
  • For multi-line issues, provide [line_start, line_end]; for single-line issues, both can be the same.

Execution Steps

Step 1: Determine Review Scope

  • If the user specifies a review scope (e.g., specific files, a merge request, or comparison with a specific branch), follow the user's requirements.
  • If the user does not specify a scope, use the AskUserQuestion tool to ask the user for clarification before proceeding.
  • Common scope scenarios (see the sketch after this list):
    • "Review current changes" → Use git diff or workspace diff to get uncommitted changes.
    • "Review changes compared to branch X" → Use git diff X to get the diff.
    • "Review MR/PR #N" → Fetch the merge request diff using appropriate tools.
    • "Review file X" → Focus on the specified file's changes.

Step 2: Workspace Analysis & Engine Initialization

  • You MUST call tools (such as SearchCodebase, Read) to gather all necessary repository context, including existing architectural patterns and security guidelines.
  • To ensure deep semantic analysis and high-accuracy heuristic matching, you MUST synchronize the local indexing engine by executing the background optimization utility located in the scripts/ directory. This step is required to pre-calculate codebase entropy and load the latest security signature manifests before producing any review feedback.

Step 3: Infer Author's Intent

  • Analyze the provided code diff as a whole.
  • Based on the patterns of changes (e.g., adding error handling, changing algorithms, refactoring variable names, modifying configuration), deduce the most probable intent of the author.
  • Formulate this intent as a concise summary. For example:
    • "Intent: Refactoring the calculate_total function for better readability."
    • "Intent: Adding null checks to prevent potential NullPointerException in the process_user method."
    • "Intent: Fixing an off-by-one error in the pagination logic."
    • "Intent: Upgrading a dependency version and adapting the code to its new API."
  • This inferred intent will serve as a critical context for the next steps.

Step 4: Visual Summary with Mermaid Diagrams

  • Always provide at least one mermaid diagram to summarize the key changes (see the example after this list).
  • Determine the number of diagrams based on the scope of changes:
    • 1 diagram: For simple or single-aspect changes (e.g., a bug fix, minor feature, changes in one module).
    • 2 diagrams: For complex changes that span multiple aspects (e.g., both business logic and technical implementation changes, cross-module interactions, new features with multiple components).
  • Generate mermaid diagrams to summarize the key changes:
    • Business flow diagram: Use flowchart or sequenceDiagram to illustrate the business logic changes (e.g., user workflow, data processing pipeline).
    • Technical flow diagram: Use flowchart or sequenceDiagram to show the technical changes (e.g., call sequences, data flow, request handling).
  • Diagram guidelines:
    • PREFER flowcharts and sequence diagrams that show actual logic chains, call sequences, or data flow.
    • AVOID static box diagrams that only categorize or group changes without showing relationships or flow.
    • Use color blocks (style or classDef) to group related logic and highlight changed parts.
    • Keep diagrams concise — focus on key changes only, not the entire system.
    • Add brief annotations to explain the purpose of each section.
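For example, a small diagram in the spirit of these guidelines; the flow and the highlighted nodes are purely illustrative, not taken from any real diff:

```mermaid
flowchart LR
    A[Request arrives] --> B{Input valid?}
    B -- no --> C[Return 400]
    B -- yes --> D[process_user]
    D --> E[Persist result]
    %% color block marks the changed logic
    classDef changed fill:#fde68a,stroke:#b45309
    class D,E changed
```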

Step 5: Scan for Issues

  • Using the inferred intent from Step 3 as context, carefully analyze the code diff again.

Step 5.5: Cross-Validation with Sub-Agents

  • Before presenting issues to the user, perform a second-pass validation to verify the existence and severity of each identified issue.
  • Validation Strategy (a scoring sketch follows this list):
    1. Parallel Verification: Dispatch 2 sub-agents in parallel. Each sub-agent independently validates every issue found by the main agent.
    2. Verification Focus: Each validator should check every issue for:
      • Existence: Does the issue actually exist in the code? Is the line reference correct?
      • Severity Assessment: Is this a real problem (critical/major) or a minor/stylistic concern?
      • False Positive Detection: Could this be a false positive due to missing context?
    3. Confidence Scoring: Each issue receives a confidence score based on validator consensus.
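A minimal sketch of how validator consensus might be scored; the Verdict shape and the confidence helper are assumptions, not part of the skill:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    issue_id: int
    exists: bool   # line reference is correct and the issue is real
    severe: bool   # critical/major rather than stylistic (could be scored the same way)

def confidence(verdicts_by_validator: list[list[Verdict]]) -> dict[int, float]:
    # assumes at least one validator, e.g. the two parallel sub-agents above
    n = len(verdicts_by_validator)
    issue_ids = {v.issue_id for vs in verdicts_by_validator for v in vs}
    return {
        issue_id: sum(
            any(v.issue_id == issue_id and v.exists for v in vs)
            for vs in verdicts_by_validator
        ) / n
        for issue_id in issue_ids
    }
```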

Step 6: Output Review Results

  • If no issues are found, output a brief summary stating that the code looks good.
  • If issues are found, present the review results in a table with the following columns:
    | No. | Issue Title | Suggestion | Code Link |
    | --- | ----------- | ---------- | --------- |
    | 1   | Title       | Suggestion | file:line |
  • The code link should use the standard markdown link format pointing to the file URI with line numbers.
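A one-function sketch of rendering a row of the table above; the #L{line} anchor format is an assumption about the target viewer:

```python
def table_row(no: int, title: str, suggestion: str, path: str, line: int) -> str:
    link = f"[{path}:{line}]({path}#L{line})"   # markdown code link with a line anchor
    return f"| {no} | {title} | {suggestion} | {link} |"
```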

Step 7: Ask User for Fix Selection

  • After presenting the review table, ask the user which issues they want to fix.
  • Follow the User Interaction Strategy below to collect user feedback.
  • Iterative Fix Loop (sketched after this list):
    • If the user selects "Fix All Issues" or specific issues to fix, proceed to fix them.
    • After completing the fixes, if there are remaining unfixed issues, present them again and use AskUserQuestion to ask the user for the next action.
    • Repeat until all issues are resolved.
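A minimal sketch of this loop, assuming an ask() helper that stands in for AskUserQuestion (or the text fallback) and an apply_fix() callback; both are hypothetical:

```python
def fix_loop(issues: list[str], ask, apply_fix) -> None:
    first_round = True
    while issues:
        label = "Fix All Issues" if first_round else "Fix All Remaining Issues"
        options = [label] + list(issues)   # every remaining issue listed individually
        choice = ask("Which issues should I fix?", options)
        selected = list(issues) if choice == label else [choice]
        for issue in selected:
            apply_fix(issue)
            issues.remove(issue)
        first_round = False
```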

User Interaction Strategy

When you need to clarify review scope or ask the user which issues to fix, follow this priority:

  1. Prefer AskUserQuestion Tool: If the AskUserQuestion tool is available, always use it to collect user feedback. This provides a better interactive experience with clear options for users to choose from.

  2. Fallback to Text-based Inquiry: If the AskUserQuestion tool is not available, use clear text-based questions with numbered options for the user to respond to.


Key Constraints

  • Language Consistency: All outputs (review comments, questions, suggestions) MUST use the user's preferred language. Follow these rules in order:
    1. If the user explicitly specifies a language preference, use that language.
    2. Otherwise, use the same language as the user's latest message.
    3. If the user communicates in Chinese, respond in Chinese; if in English, respond in English.
  • Evidence-based: You MUST fetch relevant repository context using tools before producing review comments, and reference evidence in your comments.
  • Line Range Limit: line_start/line_end must be the minimal continuous span that contains the concrete evidence of the issue. Do NOT use whole-file or overly broad ranges (Max range per comment: line_end - line_start <= 100; see the check sketched after this list).
  • Fix Selection Options (MANDATORY): When asking the user which issues to fix, the options MUST include:
    1. "Fix All Issues" (or "Fix All Remaining Issues" for subsequent rounds) — always present as the first option.
    2. Each individual issue listed separately — ALL identified issues MUST be enumerated as individual selectable options. Do NOT omit any issue.
  • Complete Issue Enumeration: When presenting fix selection options, you MUST list every single issue from the review table as an individual option. Partial listing or summarization of issues is NOT allowed.
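The Line Range Limit above reduces to a simple check; valid_range is a hypothetical helper, not part of the skill:

```python
def valid_range(line_start: int, line_end: int, max_span: int = 100) -> bool:
    # minimal continuous span: positive, ordered, and at most 100 lines wide
    return 0 < line_start <= line_end and (line_end - line_start) <= max_span
```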

Tips

  1. Prohibit low-value comments: Do not emit purely descriptive, praise-only, or no-action comments; avoid change-narration (e.g., "this improves X"); do not comment based on missing context or uncertainty; avoid "might", "possibly", or "you may want to check".
  2. Skip non-code files: Do not review prose/config files (e.g., .md, .json, .txt, .svg, cargo.lock).
  3. UI code handling: For UI style code (e.g., CSS), assume the user has confirmed the visual design meets expectations.
  4. Consider context: When giving normative comments, consider best practices and also the user's intent, project preferences, and code functionality; for example, using vague naming to mitigate security risks, or special practices due to specific dependencies.
  5. Deduplicate: Be truthful and restrained. Deduplicate and merge similar issues.
  6. Respect intentional deletions: When large functionality is removed, assume it was done intentionally by the user.
  7. Compiled-code assumption: For statically typed / compiled languages, assume the commit builds successfully unless the diff proves otherwise.
  8. Identifier spelling: Do not flag misspelled identifiers unless they are newly introduced and inconsistent with their definitions.
