Guard
v1.0.0 · Deep AI safety guardrails workflow—policy definition, input/output filtering, monitoring, escalation, and false-positive handling. Use when reducing harmful...
⭐ 0 · 72 · 0 current · 0 all-time
by @clawkk
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Verdict: Benign (high confidence)

Purpose & Capability
The name and description claim a guardrails workflow and the SKILL.md provides a high-level six-stage process for policy, threat modeling, controls, implementation, monitoring, and appeals. No unrelated credentials, binaries, or install steps are requested—this is proportionate to a documentation-style skill.
Instruction Scope
Instructions are prescriptive but high-level (policy definition, classifiers, telemetry, dashboards, human review). The document does not instruct the agent to read local files, access environment variables, call external endpoints, or exfiltrate data. Mentions of telemetry and dashboards are architectural guidance, not implementation commands.
Install Mechanism
No install spec and no code files are present. Being instruction-only means nothing is downloaded or written to disk by the skill itself—this is the lowest-risk install posture.
Credentials
The skill declares no environment variables, credentials, or config paths. That matches the SKILL.md content (which only gives process guidance). There are no disproportionate or unexplained credential requests.
Persistence & Privilege
The always flag is false, and the skill is user-invocable, with normal autonomous invocation allowed by default. It requests no permanent presence and no modifications to other skills or system settings, which is appropriate for a guidance-only skill.
Assessment
This skill is essentially a playbook — low-risk as shipped. Before relying on it in production, verify any concrete implementations you or the agent build from it: ensure telemetry and storage systems do not capture unnecessary PII, confirm retention and access controls for dashboards and logs, have legal and product owners sign off on policy definitions and escalation paths, and avoid granting the agent or any implementation access to production secrets or connectors without separate review. If you plan to operationalize these recommendations (add classifiers, dashboards, or automated blockers), review the actual code, packages, and endpoints those implementations use — that is where most security and privacy risks arise.

Like a lobster shell, security has layers — review code before you run it.
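To make the PII point concrete: if you do operationalize the playbook, an input filter can emit telemetry suitable for dashboards without ever storing raw user text. The sketch below is illustrative only — the blocklist patterns, function names, and telemetry fields are assumptions, not part of the skill, and a real deployment would use a reviewed classifier and signed-off policy rather than two regexes.

```python
import hashlib
import re

# Illustrative blocklist; a production system would use a trained
# classifier and policy rules signed off by legal/product owners.
BLOCKED_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"\bcredit card\b", r"\bssn\b")
]

def filter_input(text: str) -> dict:
    """Return a block/allow decision plus PII-free telemetry."""
    matched = [p.pattern for p in BLOCKED_PATTERNS if p.search(text)]
    decision = "block" if matched else "allow"
    return {
        # Hash the input instead of storing it, so dashboards and
        # retained logs never hold the raw (possibly PII) text.
        "input_sha256": hashlib.sha256(text.encode()).hexdigest(),
        "decision": decision,
        "matched_rules": matched,
    }

print(filter_input("what is my ssn")["decision"])  # block
print(filter_input("hello world")["decision"])     # allow
```

The design choice worth copying is the hashing step: the dashboard still gets a stable identifier for deduplication and incident lookup, but retention and access-control reviews become much simpler because no raw input is stored.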
latest: vk97fy6cgxjtwnhqbwwdk5efr0x83p8zw
