Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Guardrails
v1.0.1 · Interactively configure, review, and monitor security guardrails for your OpenClaw workspace by discovering risks, interviewing users, and generating GUARDRA...
⭐ 0 · 2.1k · 9 current · 9 all-time
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious · high confidence
Purpose & Capability
The skill's name and scripts align with an interactive guardrails generator: discover the workspace, classify risks, ask questions, and generate GUARDRAILS.md. However, the runtime relies on external LLM providers (OPENAI_API_KEY or ANTHROPIC_API_KEY) for question and document generation, while the registry metadata lists no required environment variables or primary credential. That mismatch between declared metadata and actual runtime needs should be fixed.
Instruction Scope
The SKILL.md instructs the agent to run discover.sh, which reads workspace skills, many workspace files (USER.md, MEMORY.md, AGENTS.md, GUARDRAILS.md, etc.), and ~/.openclaw/openclaw.json; monitor.sh scans workspace memory for keywords. The generate_* scripts then send the entire discovery/classification/answers JSON to external LLMs. Potentially sensitive workspace content and memory entries will therefore be transmitted off-host. This is within the skill's stated functionality (it needs context), but it is a significant data-exfiltration risk, and neither the registry metadata nor the skill surfaces explicit cautions about what will be sent to third-party APIs.
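To make the data flow concrete, here is a minimal sketch of the kind of combined payload the generate_* scripts appear to send. Every field name below is an assumption for illustration, not the skill's actual schema:

```python
import json

# Hypothetical shapes -- the skill's real schema may differ. This only
# illustrates that everything discovered ends up in one outbound request body.
discovery = {
    "skills": [{"name": "guardrails", "skill_md_preview": "..."}],
    "files": [{"path": "MEMORY.md", "preview": "meeting notes ..."}],
    "config": {"path": "~/.openclaw/openclaw.json"},
}
classification = {"risks": [{"id": "data-exfiltration", "severity": "high"}]}
answers = {"q1": "yes, restrict network access"}

# Discovery, classification, and interview answers are concatenated into
# the body of a single LLM API call.
payload = {
    "discovery": discovery,
    "classification": classification,
    "answers": answers,
}
body = json.dumps(payload)
print(len(body), "bytes would leave the host")
```

Serializing and reading the body before any network call is the cheapest way to see exactly what would leave the host.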
Install Mechanism
There is no install spec (instruction-only skill with bundled scripts). This is lower risk than an arbitrary installer. The scripts run locally and don't download remote code. They require jq and Python's 'requests' package; the README and SKILL.md mention those requirements. No network-based installer or suspicious download URLs were found.
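The prerequisite check can be automated. This helper is not part of the skill; it is a hedged sketch that verifies the two documented dependencies before anything runs:

```python
import importlib.util
import shutil

def missing_prereqs(which=shutil.which):
    """Return the documented prerequisites that are not installed.

    `which` is injectable so the check can be tested without touching PATH.
    """
    missing = []
    if which("jq") is None:  # CLI dependency noted in the README
        missing.append("jq")
    if importlib.util.find_spec("requests") is None:  # Python dependency
        missing.append("python-requests")
    return missing

if __name__ == "__main__":
    gaps = missing_prereqs()
    print("ready" if not gaps else "install first: " + ", ".join(gaps))
```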
Credentials
Although the skill logically needs an LLM provider to generate questions and the markdown output, the registry metadata does not declare OPENAI_API_KEY or ANTHROPIC_API_KEY as required environment variables. The code calls external APIs (OpenAI/Anthropic) and sends the full discovery payload, which includes file previews, skill SKILL.md contents, and potentially channel/config info. The skill also reads memory files. Asking for LLM API keys and then sending workspace data to those providers is proportionate to the feature, but the missing credential declarations and the breadth of data collected make this a privacy and credential-disclosure concern.
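The either/or credential requirement described above can be sketched as a small guard; resolve_provider is a hypothetical helper, not the skill's actual code:

```python
import os

def resolve_provider(env=os.environ):
    """Pick an LLM provider from the environment, mirroring the report's
    description: one of two keys is required at runtime, yet neither is
    declared in the registry metadata."""
    if env.get("OPENAI_API_KEY"):
        return "openai"
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    raise RuntimeError(
        "Set OPENAI_API_KEY or ANTHROPIC_API_KEY before the generate_* steps"
    )
```

Declaring these keys in the metadata would let the registry surface the requirement before install rather than at this runtime failure point.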
Persistence & Privilege
The skill does not request always:true or modify other skills' configs. It writes GUARDRAILS.md and guardrails-config.json to the workspace when the user confirms (per SKILL.md). The README claims it does not create cron jobs or modify AGENTS.md. monitor.sh can be run manually or via cron, but the skill itself does not install persistent hooks.
What to consider before installing
- The skill will scan your workspace (skills, SKILL.md files, and many workspace files like USER.md, MEMORY.md, AGENTS.md) and produce a JSON discovery report. That discovery output (discovery, classification, and your interview answers) is sent verbatim to whichever LLM provider you configure (OpenAI or Anthropic) when generating questions and GUARDRAILS.md.
- The registry metadata does not list OPENAI_API_KEY or ANTHROPIC_API_KEY as required env vars, but the scripts require one of those keys for the LLM steps. Be aware of this inconsistency.
- Potential sensitive data exposure: discovery includes file previews and memory entries; monitor.sh scans memory files for keywords. If your workspace contains secrets or sensitive documents, those could be included in the payload sent to the external LLM provider.
- Recommended precautions:
  - Inspect and run the scripts locally in a safe, isolated workspace first (set WORKSPACE to a test directory) to see what discovery collects: bash scripts/discover.sh and python scripts/classify-risks.py are read-only and can be run without API keys.
  - Do NOT export your real OPENAI_API_KEY / ANTHROPIC_API_KEY into the environment until you are comfortable with the exact JSON that will be sent. Generate and review the discovery JSON, then decide whether to remove sensitive previews before calling the LLM.
  - If you must use an LLM, choose a provider, account, and data policy you trust, or point the code at a local LLM if you have one.
  - Ensure jq and Python's 'requests' are installed as noted, and verify the skill's claim that it asks for confirmation before writing GUARDRAILS.md / guardrails-config.json.
  - Audit the code and test in a sandbox before running monitor mode against a production workspace.
- If you plan to publish or share this skill, ask the author to: (1) add explicit required env vars (OPENAI_API_KEY/ANTHROPIC_API_KEY) to the metadata; (2) clearly document that the full discovery JSON (including file previews and memory) is sent to external LLMs; and (3) offer an option to redact file previews by default.

Like a lobster shell, security has layers — review code before you run it.
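The "review before exporting keys" precaution above can be sketched as a redaction pass over the discovery JSON. The sensitive field names here are guesses, since the skill's real schema isn't shown:

```python
def redact(node, sensitive=("preview", "content", "memory")):
    """Recursively blank out preview-like fields so the discovery payload
    can be reviewed or sent without raw file contents.

    The field names in `sensitive` are assumptions for illustration.
    """
    if isinstance(node, dict):
        return {
            k: "[REDACTED]" if k in sensitive else redact(v, sensitive)
            for k, v in node.items()
        }
    if isinstance(node, list):
        return [redact(item, sensitive) for item in node]
    return node

sample = {"files": [{"path": "MEMORY.md", "preview": "secret notes"}]}
print(redact(sample))
# → {'files': [{'path': 'MEMORY.md', 'preview': '[REDACTED]'}]}
```

A redact-by-default option like this is the third change suggested to the author above.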
latest · vk97esp2bvn0g4k70aczvnrgxq180d70g
