Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Skill Review

v1.0.1

Security scanner for Claude Code Skill packages. Use when the user wants to audit, review, or check the safety of a Skill before installing — e.g. "is this s...

0· 55·0 current·0 all-time
byAnt AI Security Lab@antaisecuritylab
MIT-0
Download zip
LicenseMIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotalVirusTotal
Pending
View report →
OpenClawOpenClaw
Suspicious
medium confidence
Purpose & Capability
The name/description (a security scanner for Skill packages) align with the code: it performs a deterministic pre-scan and then uses LLM-driven agents and registry lookups to do deeper analysis. Required environment variables (OPENAI_API_BASE/OPENAI_API_KEY) are appropriate for an LLM-powered scanner. However, the source imports @sinclair/typebox in src/tools.mjs but package.json does not declare that dependency — npm install may fail or behave unexpectedly unless that dependency is added. Overall capability is proportional to purpose, with a packaging inconsistency to fix.
!
Instruction Scope
The runtime instructs Agents to run a bash tool in the skill root and to use commands like `cat -n` to read files; those tool outputs are then included in prompts to remote LLMs. That means the full scanned skill source (including any hidden keys or secrets in files) will be transmitted to the configured LLM provider during LLM Analysis / Deep Analysis. The SKILL.md and embedded prompts also contain explicit system-prompt content (detected as a system-prompt-override pattern) — this is expected for an LLM-driven scanner but increases the attack surface if you scan untrusted content because a malicious skill could try to trick the explorer agent into executing commands. The code does include many pre-scan checks (ANSI escapes, invisible chars, prompt injection heuristics), but you should assume deep scans will send code to an external model and that the agent can execute shell tools in the target directory.
Install Mechanism
There is no special network download/install step in the skill bundle beyond normal npm install of the package. package.json lists only `@mariozechner/pi-agent-core` and `dotenv`, which is consistent with Node usage, but code imports additional packages (e.g., `@sinclair/typebox`) that are not declared — this is an inconsistency and will result in runtime errors or require you to add dependencies manually. No remote arbitrary archive downloads are present in the provided files.
!
Credentials
The skill requires an LLM API base and API key (OPENAI_API_BASE and OPENAI_API_KEY) to operate — that is proportionate to an LLM-based scanner. However, giving those credentials means the scanner will use your LLM account to process scanned code and metadata. Any code content read and passed to the agents (including file contents, extracted strings, and URLs) will be sent to the configured model provider, which may be sensitive. The skill does not request unrelated credentials or system-level secrets, but the fact that it exfiltrates scanned content to a third-party LLM is the main privacy/security consideration.
Persistence & Privilege
The skill is not marked always:true and does not request persistent system-level privileges. It runs ephemeral Agent instances that use the provided API key. disable-model-invocation remains false (normal), meaning the skill can run its own agents while invoked; combined with the LLM key this enables network calls but this is expected behaviour for the scanner. The skill does not modify other skills or global agent config in the inspected code.
Scan Findings in Context
[system-prompt-override] expected: The SKILL.md/prompts include explicit system-prompt content to configure the explorer/deep agents; a prompt-injection scanner flagged this pattern. This is expected for an LLM-driven scanner, but it's also exactly the kind of pattern a malicious skill could abuse if run by a less careful agent.
What to consider before installing
This skill appears to implement the scanner it advertises, but before installing or running it consider the following: - It requires an LLM API base and key (OPENAI_API_BASE and OPENAI_API_KEY). Running a scan will send scanned files and findings to that remote model — do not use your primary/high-privilege API key if you are scanning untrusted code containing secrets. - Use the `--pre` (pre-scan only) mode to run deterministic local checks without sending file contents to the LLM. - The repository's package.json does not list a package referenced in the code (@sinclair/typebox). Expect npm install to fail unless that dependency is added; inspect package.json and add any missing deps before running. - The scanner creates agents that can execute shell tools against the skill directory. That is necessary for content extraction, but it means a malicious skill might try to trick the agent into running harmful commands. Prefer running scans in an isolated environment (e.g., container or VM) and avoid `--deep` unless you trust the environment and know what tools the deep agent will execute. - Review src/tools.mjs (especially the bash tool implementation) to understand exactly which commands the agent can run in the scanned directory and whether it will execute any package lifecycle scripts automatically. If you only need a quick safety check without remote model exposure, run `node index.mjs --pre <skill-dir>` to use the deterministic pre-scan. If you plan to run LLM-driven analysis, use a dedicated/limited LLM API key or an internally-hosted model and run the scanner in an isolated sandbox.
src/prompts.mjs:23
Shell command execution detected (child_process).
src/tools.mjs:412
Shell command execution detected (child_process).
src/prompts.mjs:77
Dynamic code execution detected.
!
src/tools.mjs:429
File read combined with network send (possible exfiltration).
Patterns worth reviewing
These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.

Like a lobster shell, security has layers — review code before you run it.

latestvk973em59g5nmsvsa7tj7atyh9x84ks30

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Comments