Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

OpenClaw Guardian

v1.0.1

A security-layer plugin for OpenClaw that intercepts dangerous tool calls (exec, write, edit) through two-tier regex blacklist rules and LLM-based intent verification.

0 · 553 · 8 current · 10 all-time
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Benign
medium confidence
Purpose & Capability
Name/description match the code and runtime instructions: it intercepts exec/write/edit calls, applies two-tier regex blacklists, and uses LLM-based voting for flagged operations. No unrelated services or credentials are requested.
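The two-tier gating described above can be pictured as a quick sketch. The rule names, regex patterns, and verdict strings below are illustrative assumptions, not Guardian's actual rules:

```javascript
// Hypothetical sketch of a two-tier blacklist check. Tier 1 patterns are
// denied outright; tier 2 patterns escalate to the LLM voting step.
const HARD_BLOCK = [/rm\s+-rf\s+\//, /curl[^|]*\|\s*(sh|bash)/]; // tier 1: always deny
const NEEDS_REVIEW = [/chmod\s+777/, /\.ssh\//];                 // tier 2: send to LLM vote

function classifyCommand(cmd) {
  if (HARD_BLOCK.some((re) => re.test(cmd))) return "deny";
  if (NEEDS_REVIEW.some((re) => re.test(cmd))) return "review";
  return "allow";
}
```

A tiered design like this keeps the cheap regex pass on every call and reserves the slower (and credential-consuming) LLM check for the ambiguous middle tier.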
Instruction Scope
The SKILL.md and code explicitly read recent conversation session files for context and send that context to model providers for intent checks. Reading user messages is necessary for its stated 'did the user ask for this?' function, but it's a sensitive action (conversation contents may include secrets). Audit logging to ~/.openclaw/guardian-audit.jsonl is also performed for blacklist hits.
Install Mechanism
No packaged installer is included (the README suggests cloning a GitHub repo or using openclaw plugins install). The skill bundle contains the source files, so there is no hidden download step in the provided package, but manual installation instructions point to an external GitHub repo (verify source/trust before cloning).
Credentials
The plugin does not declare extra env vars, but it auto-discovers the user's OpenClaw model providers and reads provider.baseUrl and provider.apiKey from the OpenClaw config to call LLM endpoints. This is proportionate to the claimed LLM voting feature, but it means your existing model credentials and conversation context will be sent to those providers — review provider trust and config privacy settings.
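The provider auto-discovery could work along these lines. The config shape (a providers map with baseUrl/apiKey fields) is inferred from the scan's description and is an assumption, not verified against OpenClaw's real schema:

```javascript
// Hypothetical discovery pass: collect every configured provider that has
// both a base URL and an API key, i.e. everything the plugin could call.
function discoverProviders(config) {
  return Object.entries(config.providers || {})
    .filter(([, p]) => p.baseUrl && p.apiKey)
    .map(([name, p]) => ({ name, baseUrl: p.baseUrl, apiKey: p.apiKey }));
}
```

The point of the sketch: every provider entry with a key is in scope, which is why the scan recommends restricting which entries the plugin may use.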
Persistence & Privilege
It registers a before_tool_call hook (expected for a safety gate), does not set always: true, and does not modify other plugins. It writes an audit log to the user's home directory (normal for an audit trail).
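A before_tool_call gate of this kind typically looks like the following. The hook-registration API shown (an event name plus a handler returning an allow/deny decision) is assumed for illustration, not taken from Guardian's code:

```javascript
// Minimal sketch of a safety gate: register a handler that runs before each
// tool call and vetoes anything the classifier marks as a blacklist hit.
function createGuardian(registerHook, classify) {
  registerHook("before_tool_call", (call) => {
    const verdict = classify(call); // "allow" | "review" | "deny"
    if (verdict === "deny") {
      return { allow: false, reason: "blacklist hit" };
    }
    return { allow: true }; // "review" verdicts would go on to LLM voting
  });
}
```

Because the hook only observes and vetoes calls (no always: true, no changes to other plugins), its privilege footprint matches what a safety gate needs and nothing more.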
Assessment
This plugin is internally consistent with its purpose, but before installing:
1. Verify the plugin source (the README points to a GitHub repo) and review the included code yourself or have a trusted reviewer do so.
2. Understand that Guardian will read recent conversation session files and send them to whichever model provider is configured in OpenClaw, so ensure you trust that provider and that it is configured not to leak sensitive data.
3. Check, and if possible restrict, which provider/config entries it can use, and review the audit log location (~/.openclaw/guardian-audit.jsonl).
4. Consider lowering automatic trust (trustBudget) or testing in a safe environment before enabling it broadly.
If you want, I can point out specific lines to review or summarize exactly which files/fields are sent in the LLM calls.

Like a lobster shell, security has layers — review code before you run it.

latest · vk97f4y2983h5s83n47ebz6s4n581vfs6


Comments