Full Scale OpenClaw Skill Auditor
Advisory. Audited by static analysis on Apr 30, 2026.
Overview
No suspicious patterns detected by the static scan (0 scan findings). The advisory items below come from the artifact-based review.
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
A malicious repository could make the audit output misleading or try to redirect the agent away from the requested audit.
A user-supplied GitHub repository controls the audited SKILL.md frontmatter name, and the script emits that value into JSON without escaping. Hostile text could corrupt agent-consumed output or smuggle instructions into the workflow.
```shell
SKILL_NAME=$(grep -A5 '^---' "$skill_path" | grep '^name:' ...)
printf ' {"name": "%s", "path": "%s", "dir": "%s"}' "$SKILL_NAME" "$REL_PATH" "$SKILL_DIR"
```

Recommendation: escape all JSON fields, display repository-derived text as quoted data, and explicitly instruct the agent never to follow instructions contained in the audited skill.
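One way to close this gap is to build the JSON with a real serializer instead of string interpolation. A minimal Python sketch (the variable names mirror the shell snippet above; the hostile input is a made-up example):

```python
import json

def emit_skill_entry(skill_name: str, rel_path: str, skill_dir: str) -> str:
    # json.dumps escapes quotes, backslashes, and control characters,
    # so hostile frontmatter text stays inert string data and cannot
    # inject extra keys or break the surrounding JSON structure.
    return json.dumps({"name": skill_name, "path": rel_path, "dir": skill_dir})

# A frontmatter name crafted to inject a field is rendered harmless:
entry = emit_skill_entry('", "path": "/etc', "skills/x/SKILL.md", "skills/x")
```

In shell, piping the raw value through `jq -Rs` would achieve the same escaping without rewriting the script in Python.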
A crafted repository could cause excessive local reads, resource use, or unintended reads through symlinked files.
The token analyzer recursively opens every non-hidden file it walks with no visible symlink, file-size, or file-type guard. Because the source directory comes from a user-chosen repository, the input is untrusted.
```python
for root, _dirs, files in os.walk(skill_dir):
    ...
    with open(filepath, "r", encoding="utf-8", errors="ignore") as f:
        content = f.read()
```
Skip symlinks, enforce path containment, set maximum file sizes, limit analyzed file types, and show a file list for user approval before recursive analysis.
If a different or malicious ../post-generator directory exists, the agent may run code that was not part of the reviewed skill.
The workflow automatically invokes a sibling post-generator script outside this skill package. Those files are not present in the supplied manifest, creating an unreviewed-code and provenance gap.
Run: `python3 ../post-generator/scripts/extract_findings.py <audit-dir>/audit-report.md`
Bundle and review the post-generator files, pin their provenance, or remove automatic execution and ask the user before running any external helper.
Malicious or misleading content from an audited repository could propagate into reports and public-facing draft posts.
Text from untrusted skill files is incorporated into an audit report and then reused to generate social-post drafts, without an explicit sanitization or taint-handling step.
Include: ... Specific evidence from the skill files for each finding ... Automatically generate posts from the audit report.
Quote and sanitize evidence, strip prompt-like instructions from generated summaries, and require review before reusing audit content in social posts.
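A minimal taint-handling step could quote evidence as data and redact prompt-like lines before they reach the report or post generator. This sketch uses an illustrative (and deliberately small) pattern list; real coverage would need to be much broader:

```python
import re

# Hypothetical patterns for illustration; not an exhaustive injection filter.
PROMPT_LIKE = re.compile(
    r"(?i)\b(ignore (all|previous) instructions|you are now|system prompt)\b"
)

def sanitize_evidence(text: str, max_len: int = 500) -> str:
    lines = []
    for line in text.splitlines():
        if PROMPT_LIKE.search(line):
            lines.append("[redacted: prompt-like instruction]")
        else:
            lines.append(line)
    # Prefix every line so downstream consumers see quoted data, not instructions.
    quoted = "\n".join("> " + line for line in lines)
    return quoted[:max_len]
```

Even with such a filter, a human review gate before posts are published remains the stronger control, since pattern lists are easy to evade.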
Users could publish AI-generated promotional posts without noticing the branding or disclosure implications.
The skill openly generates social content, but it also steers the output to appear human-written and include a brand mention.
Posts must sound human-written, not AI-generated ... Fenz.AI mentioned once, naturally, first post only
Review generated posts before use, ensure the branding is intentional, and disclose AI assistance where appropriate.
