Skill Auditor
v1.0.0 · Security audit and quarantine system for third-party OpenClaw skills. Use when evaluating, reviewing, or installing any skill from ClawHub or external source...
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign
medium confidence

Purpose & Capability
Name and description match the included artifacts: a quarantine shell script and a Python scanner are present, and their behavior (copy into a temp directory, scan files, produce JSON and human-readable reports, optionally copy to the production skills directory) aligns with a 'Skill Auditor'. No unrelated credentials or external services are requested.
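The quarantine-then-scan flow described above could look roughly like the following. This is a hypothetical sketch, not the shipped quarantine.sh / audit_skill.py; all names here are assumptions.

```python
import json
import shutil
import tempfile
from pathlib import Path

def quarantine_and_scan(skill_dir: str) -> dict:
    """Copy a skill into an isolated temp directory and inventory its files.

    Hypothetical sketch of the workflow this auditor describes: quarantine
    first, then scan, then emit a JSON report. The real scripts may differ.
    """
    quarantine = Path(tempfile.mkdtemp(prefix="skill-quarantine-"))
    shutil.copytree(skill_dir, quarantine / Path(skill_dir).name)

    findings = []
    for f in quarantine.rglob("*"):
        if f.is_file():
            findings.append({
                "file": str(f.relative_to(quarantine)),
                "size": f.stat().st_size,
            })

    report = {"quarantine_dir": str(quarantine), "files": findings}
    # Both a machine-readable (JSON) and a human-readable report are produced.
    print(json.dumps(report, indent=2))
    return report
```

The key property is that nothing is copied into the production skills directory until the operator has reviewed the report and consented.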
Instruction Scope
Runtime instructions direct the agent/operator to quarantine the skill and run the included scanner; the scope is appropriate. Two items to note: (1) SKILL.md states the skill is 'Automatically triggered before any skill installation', whereas the registry metadata sets always:false (a mismatch to confirm). (2) SKILL.md instructs to 'always show Abidi the full findings'; this is ambiguous (who or where is Abidi?) and implies a required human recipient. Ensure this is intentional and not an exfiltration target.
Install Mechanism
No install spec / no external downloads. The package is instruction+scripts only and uses only local copy/scan operations. This is low-risk compared with skills that fetch remote code or run installers.
Credentials
No environment variables, credentials, or suspicious config paths are requested. The scanner looks for environment-access patterns in target skill code (appropriate for an auditor) but does not request secrets itself.
Persistence & Privilege
always:false (normal). The scripts can copy files into the production skills directory if the user consents during the quarantine workflow, which is expected for an installer/auditor. Confirm whether the 'Automatically triggered before any skill installation' behavior claimed in the docs is meant to be implemented, since that would require platform-side hooks or always:true.
Scan Findings in Context
[prompt-injection-signal] expected: A prompt-injection pattern ('ignore previous instructions' family) was detected in the SKILL.md pre-scan. This repository intentionally includes a references/prompt-injection-patterns.md listing those phrases to detect them; that is expected for an auditor. Still, verify the SKILL.md does not contain hidden/active injection directives that would be loaded into an LLM context during runtime.
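The pre-scan behavior described above, flagging injection phrases while treating hits in the bundled patterns file as expected, could be sketched as follows. The phrase list and allowlist path are assumptions mirroring the report, not the auditor's actual rules.

```python
import re
from pathlib import Path

# Hypothetical phrase list; the real rules live in
# references/prompt-injection-patterns.md inside the skill.
INJECTION_PHRASES = [
    r"ignore (?:all )?previous instructions",
    r"disregard (?:the )?above",
]

# Hits inside the patterns reference file are expected for an auditor.
EXPECTED_FILES = {"references/prompt-injection-patterns.md"}

def prescan(skill_root: str) -> list[dict]:
    """Flag prompt-injection phrases in markdown files of a skill package."""
    findings = []
    root = Path(skill_root)
    for f in root.rglob("*.md"):
        rel = f.relative_to(root).as_posix()
        text = f.read_text(errors="replace").lower()
        for phrase in INJECTION_PHRASES:
            if re.search(phrase, text):
                findings.append({
                    "file": rel,
                    "signal": "prompt-injection",
                    "expected": rel in EXPECTED_FILES,
                })
    return findings
```

A hit with `"expected": False` in SKILL.md itself would be exactly the "hidden/active injection directive" case the report says to rule out manually.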
Assessment
This skill appears to implement a sensible local quarantine-plus-scanner workflow and does not demand credentials or download remote code, so it is reasonable to use. Before installing or enabling it automatically:

1. Manually review the audit output and the quarantine directory the first few times to ensure no report is being sent to unknown endpoints.
2. Confirm the ambiguous instruction to 'show Abidi the full findings': identify who or where 'Abidi' is and whether that step is manual.
3. Resolve the documentation mismatch: SKILL.md claims automatic pre-install triggering but the metadata has always:false. Make sure auto-triggering is only enabled via explicit platform integration, not silently.
4. Run the scanner on a couple of known-good and known-bad sample skills to validate its detection thresholds and false-positive behavior.
5. If you plan to run audits automatically, run the auditor in an isolated environment (a non-production account or container) until you have confidence in its rules.

If you want, I can point out the specific lines in audit_skill.py and quarantine.sh you should inspect, or run a simulated audit on a sample skill.

Like a lobster shell, security has layers — review code before you run it.
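The known-good/known-bad validation recommended in the assessment could be sketched as a small harness. Here `scan` is a hypothetical stand-in for whatever entry point audit_skill.py exposes; the source does not document its API.

```python
def validate_scanner(scan, samples: dict) -> dict:
    """Check a scanner against labeled sample skills.

    `scan` is any callable taking a skill directory and returning a list of
    findings (a guess at audit_skill.py's interface). `samples` maps each
    sample directory to whether it *should* be flagged.
    Returns {sample_dir: True} where the scanner agreed with the label.
    """
    results = {}
    for skill_dir, should_flag in samples.items():
        flagged = bool(scan(skill_dir))
        results[skill_dir] = (flagged == should_flag)
    return results
```

Any `False` entry in the result is either a false positive (a known-good sample flagged) or a false negative (a known-bad sample passed), which is the detection-threshold behavior the assessment says to verify.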
Tags: audit · latest · security (version vk97e1p5ta6yhwar1wy7ea3vcdn816nsx)
