Guard

v1.0.0

Deep AI safety guardrails workflow—policy definition, input/output filtering, monitoring, escalation, and false-positive handling. Use when reducing harmful...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for clawkk/guard.

Prompt Preview: Install & Setup
Install the skill "Guard" (clawkk/guard) from ClawHub.
Skill page: https://clawhub.ai/clawkk/guard
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install guard

ClawHub CLI


npx clawhub@latest install guard
Security Scan

VirusTotal: Benign (view report)
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description claim a guardrails workflow and the SKILL.md provides a high-level six-stage process for policy, threat modeling, controls, implementation, monitoring, and appeals. No unrelated credentials, binaries, or install steps are requested—this is proportionate to a documentation-style skill.
Instruction Scope
Instructions are prescriptive but high-level (policy definition, classifiers, telemetry, dashboards, human review). The document does not instruct the agent to read local files, access environment variables, call external endpoints, or exfiltrate data. Mentions of telemetry and dashboards are architectural guidance, not implementation commands.
Install Mechanism
No install spec and no code files are present. Being instruction-only means nothing is downloaded or written to disk by the skill itself—this is the lowest-risk install posture.
Credentials
The skill declares no environment variables, credentials, or config paths. That matches the SKILL.md content (which only gives process guidance). There are no disproportionate or unexplained credential requests.
Persistence & Privilege
The always flag is false, and the skill is user-invocable, with normal autonomous invocation allowed by default. There is no request for permanent presence or for modifications to other skills or system settings. This is appropriate for a guidance-only skill.
Assessment
This skill is essentially a playbook — low-risk as shipped. Before relying on it in production, verify any concrete implementations you or the agent build from it: ensure telemetry/storage systems do not capture unnecessary PII, confirm retention and access controls for dashboards and logs, get legal/product owners to sign off on policy definitions and escalation paths, and avoid granting the agent or any implementation access to production secrets or connectors without separate review. If you plan to operationalize these recommendations (add classifiers, dashboards, or automated blockers), review the actual code, packages, and endpoints those implementations use—those are where most security and privacy risks arise.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97fy6cgxjtwnhqbwwdk5efr0x83p8zw
242 downloads · 0 stars · 1 version
Updated 1mo ago
v1.0.0 · MIT-0

AI Guardrails (Deep Workflow)

Guardrails turn product and legal policy into enforced behavior: blocking, rewriting, logging, and human review—with attention to false positives and latency.

When to Offer This Workflow

Trigger conditions:

  • Launching consumer-facing LLM features
  • Jailbreak attempts, policy violations, or PII leakage risks
  • Region-specific compliance (minors, regulated advice)

Initial offer:

Use six stages: (1) policy scope, (2) threat model, (3) controls stack, (4) implementation patterns, (5) monitoring & review, (6) iteration & appeals. Confirm the latency budget and target jurisdictions.


Stage 1: Policy Scope

Goal: Define prohibited categories (hate, sexual content, violence, self-harm, malware instructions, etc.) and required disclaimers for sensitive domains (medical, legal).

Exit condition: Policy document owned by legal/product; escalation path for gray areas.
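To make the policy usable by the later stages, it helps to capture it as structured data rather than prose alone. A minimal sketch in Python; the PolicyCategory structure, the category names, and the owners shown here are invented for illustration, and the real scope comes from legal/product sign-off:

```python
from dataclasses import dataclass

@dataclass
class PolicyCategory:
    """One prohibited or restricted category from the policy document."""
    name: str                      # e.g. "hate", "self_harm", "malware_instructions"
    action: str                    # "block", "rewrite", or "warn"
    owner: str                     # team accountable for gray-area escalations
    disclaimer: str | None = None  # required disclaimer for sensitive domains

# Hypothetical scope; real categories and owners come from the policy document.
POLICY = [
    PolicyCategory("hate", action="block", owner="trust-and-safety"),
    PolicyCategory("malware_instructions", action="block", owner="security"),
    PolicyCategory("medical_advice", action="warn", owner="legal",
                   disclaimer="This is general information, not medical advice."),
]
```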


Stage 2: Threat Model

Goal: Identify adversaries (prompt injection, data exfiltration, tool abuse) and assets (user data, system prompts, connectors).
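One way to make the threat model concrete is to enumerate adversary-technique/asset pairs with an example attack path for each. A minimal sketch; every entry below is hypothetical:

```python
# Hypothetical threat-model entries: (adversary technique, targeted asset, example path)
THREAT_MODEL = [
    ("prompt_injection",  "system_prompt", "user pastes 'ignore previous instructions'"),
    ("data_exfiltration", "user_data",     "model asked to echo PII into a URL parameter"),
    ("tool_abuse",        "connectors",    "agent steered into calling a destructive tool"),
]

for technique, asset, path in THREAT_MODEL:
    print(f"{technique:18} -> {asset:14} via: {path}")
```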


Stage 3: Controls Stack

Goal: Layer defenses: input screening, model safety APIs, output classifiers, tool sandboxing, allowlists for tools and URLs.
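The layering can be expressed as a pipeline where each control gets a veto before the next layer runs. A minimal sketch, assuming a keyword screen as a stand-in for trained input/output classifiers; the blocklist phrases and refusal text are placeholders:

```python
REFUSAL = "I can't help with that. See the content policy."
BLOCKLIST = {"make a bomb", "credit card dump"}  # hypothetical keyword screen

def screen_input(text: str) -> bool:
    """Layer 1: cheap keyword screen; a real stack adds a trained classifier."""
    return any(phrase in text.lower() for phrase in BLOCKLIST)

def classify_output(text: str) -> bool:
    """Layer 3: output check stub; production uses a safety model or API."""
    return any(phrase in text.lower() for phrase in BLOCKLIST)

def guarded_completion(user_input: str, call_model) -> str:
    """Run the layered stack: input screen -> model -> output classifier."""
    if screen_input(user_input):
        return REFUSAL
    draft = call_model(user_input)  # Layer 2: the model, behind its own safety API
    if classify_output(draft):
        return REFUSAL
    return draft
```

Tool sandboxing and URL allowlists sit alongside this pipeline as further layers; the point is that no single check is load-bearing on its own.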


Stage 4: Implementation Patterns

Goal: Structured refusal messages; telemetry on every block; distinguish block vs rewrite vs warn; avoid silent failures.
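Distinguishing block, rewrite, and warn, and emitting telemetry on every non-allow decision, might look like the sketch below. The Action enum, disclaimer map, and the email-redaction rewrite are invented for illustration:

```python
import json
import logging
import re
from enum import Enum

log = logging.getLogger("guardrails")

class Action(Enum):
    ALLOW = "allow"
    WARN = "warn"        # deliver, but prepend a required disclaimer
    REWRITE = "rewrite"  # redact the offending span, then deliver
    BLOCK = "block"      # replace the response with a structured refusal

DISCLAIMERS = {"medical_advice": "This is not medical advice."}  # hypothetical

def apply_decision(action: Action, category: str, text: str) -> str:
    # Telemetry on every block/rewrite/warn; never fail silently.
    if action is not Action.ALLOW:
        log.info(json.dumps({"event": "guardrail_decision",
                             "action": action.value, "category": category}))
    if action is Action.BLOCK:
        return f"I can't help with that ({category}). See the content policy."
    if action is Action.REWRITE:
        # Example rewrite: strip email-like PII; real redaction is category-aware.
        return re.sub(r"\b\S+@\S+\.\w+\b", "[redacted]", text)
    if action is Action.WARN:
        return f"{DISCLAIMERS.get(category, '')}\n\n{text}"
    return text
```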


Stage 5: Monitoring & Review

Goal: Sample borderline cases for human review; dashboards on block rates by category; abuse spike alerts.
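Sampling borderline cases can key off classifier confidence: scores near the block threshold are the ones worth a human's time. A minimal sketch; the thresholds, sample rate, and in-process queue are placeholders chosen for illustration:

```python
import queue
import random

review_queue: "queue.Queue[str]" = queue.Queue()  # stand-in for a real review queue

BLOCK_THRESHOLD = 0.8     # hypothetical: block when the classifier score exceeds this
REVIEW_BAND = (0.6, 0.9)  # hypothetical: borderline scores sampled for human review
SAMPLE_RATE = 0.1         # review roughly 10% of borderline traffic

def maybe_queue_for_review(score: float, request_id: str) -> None:
    lo, hi = REVIEW_BAND
    if lo <= score <= hi and random.random() < SAMPLE_RATE:
        review_queue.put(request_id)
```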


Stage 6: Iteration & Appeals

Goal: User appeals path where appropriate; version policy changes; measure false positives by locale and use case.
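Measuring false positives by locale amounts to joining human-review verdicts back onto block decisions. A minimal sketch over hypothetical review records:

```python
from collections import Counter

# Hypothetical review records: (locale, was_blocked, reviewer_found_violating)
reviews = [
    ("en-US", True, True),
    ("en-US", True, False),  # blocked, but reviewers found it fine: false positive
    ("de-DE", True, False),
]

blocks = Counter(loc for loc, blocked, _ in reviews if blocked)
false_pos = Counter(loc for loc, blocked, bad in reviews if blocked and not bad)

for locale in blocks:
    rate = false_pos[locale] / blocks[locale]
    print(f"{locale}: false-positive rate {rate:.0%} over {blocks[locale]} blocks")
```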


Final Review Checklist

  • Policy categories and owners defined
  • Threat model aligned with product
  • Layered controls with clear responsibilities
  • Telemetry and review for edge cases
  • Appeals and iteration process where applicable

Tips for Effective Guidance

  • Defense in depth—no single classifier is sufficient.
  • Pair with a moderation workflow for UGC and with tool-calling safeguards for agent safety.

Handling Deviations

  • Enterprise internal bots: emphasize data-leak prevention and connector scope over public “safety” categories alone.
