Agent Rule Audit

v1.0.0

Audit an OpenClaw agent's behavior-layer rules and prompt sources to find drift, redundancy, conflict, loss of focus, and weak behavior guidance. Use when re...

by Alan Wang (@gkso)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for gkso/agent-rule-audit.

Prompt Preview: Install & Setup
Install the skill "Agent Rule Audit" (gkso/agent-rule-audit) from ClawHub.
Skill page: https://clawhub.ai/gkso/agent-rule-audit
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install agent-rule-audit

ClawHub CLI


npx clawhub@latest install agent-rule-audit
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description match the actual content: an instruction-only auditor for OpenClaw behavior-layer files. It requests no binaries, environment variables, or installs; everything is proportional to an auditing guidance skill.
Instruction Scope
The SKILL.md directs the agent to read the workspace's core behavior files (AGENTS.md, SOUL.md, USER.md, etc.) and optionally widen scope when needed. This is appropriate for an audit, but it assumes the agent has access to the target workspace files or that the user will supply them. If those files contain sensitive data, the user should control which files are exposed or sanitize them first.
Install Mechanism
No install spec and no code files — lowest-risk pattern for skills. Nothing is downloaded or written to disk by the skill itself.
Credentials
The skill declares no required environment variables, credentials, or config paths. There are no extraneous secret requests inconsistent with the described auditing purpose.
Persistence & Privilege
The always flag is false and the skill is user-invocable. The disable-model-invocation flag is also false (the normal platform default), but on its own this does not introduce extra risk given the skill's narrow scope and lack of credential access.
Assessment
This skill is instruction-only and internally consistent with an audit role: it doesn't ask for credentials or install anything. Before using it, be aware it will read the agent/workspace behavior files you point it at — if those files contain sensitive data, share only the specific files needed or sanitize them. Because the agent can be invoked autonomously by default, limit scope or require an explicit prompt if you want to avoid unattended audits. Overall the skill appears coherent and low-risk, but exercise standard caution about what workspace content you expose.

Like a lobster shell, security has layers — review code before you run it.

latest: vk9717debnh4qfz22xn121e35js83scrp
127 downloads
0 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

Agent Rule Audit

Audit the files that actually shape an OpenClaw agent's behavior. Focus on behavior-layer quality, not general file cleanup.

Quick start

  1. Identify the audit target: which agent/workspace is being reviewed.
  2. Read the core behavior-layer files first.
  3. Read shared rule files only when the core files explicitly depend on them.
  4. Stay with the default core scope first. Widen only when the core files are not enough to explain the behavior, or when the user asks for a deeper audit.
  5. Produce two outputs:
    • audit conclusions
    • executable restructuring recommendations
  6. Separate root causes from surface symptoms.

Default audit scope

By default, inspect only the agent's core, stable behavior files — the files most likely to be loaded every session and to consistently shape behavior.

Core behavior layer

Read these first when present:

  • AGENTS.md
  • SOUL.md
  • USER.md
  • MEMORY.md (if present)
  • TOOLS.md
  • IDENTITY.md
  • HEARTBEAT.md

See references/openclaw-behavior-sources.md for why these matter in OpenClaw.
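As a quick sketch of the first audit step (not part of the skill itself), mapping which core files actually exist in a target workspace could look like this in Python. The file names come from the default scope above; the helper name is a hypothetical illustration:

```python
from pathlib import Path

# Default core behavior-layer files from the skill's audit scope.
CORE_FILES = [
    "AGENTS.md", "SOUL.md", "USER.md", "MEMORY.md",
    "TOOLS.md", "IDENTITY.md", "HEARTBEAT.md",
]

def map_core_scope(workspace: str) -> dict[str, bool]:
    """Report which core behavior files are present in a workspace."""
    ws = Path(workspace)
    return {name: (ws / name).is_file() for name in CORE_FILES}
```

Missing files are a finding in themselves: a workspace with no AGENTS.md is steered by something else, which is a cue to widen the audit.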

Optional widening

Only widen beyond the core set when needed, for example:

  • the user explicitly asks for a broader audit
  • the core files explicitly depend on another file
  • the core files look fine but behavior clearly points to another steering source
  • a recent correction/example cannot be explained from the core files alone

Possible widening targets:

  • shared rule files explicitly referenced by the core files
  • any correction / learnings / workflow-improvement layer that exists in the target workspace
  • behavior-improvement or trial-related supporting files when they exist in the target workspace
  • recent behavioral evidence files when they exist in the target workspace
  • user-provided examples/screenshots/transcripts

What to look for

Use the problem categories in references/problem-types.md. Default categories:

  • structure confusion
  • repetition / redundancy
  • rule conflict
  • focus drift
  • behavior-layer dilution from too many weak rules
  • symptom-vs-root-cause confusion
  • style guidance overpowering execution guidance
  • stale or superseded rules not cleaned up
  • trial rules that never reached the live behavior layer
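A crude, purely illustrative heuristic for the repetition/redundancy category: collect bullet-style rule lines per file and flag any normalized line that appears in more than one file. The function name and normalization rules here are assumptions, not part of the skill:

```python
from collections import defaultdict

def find_repeated_rules(files: dict[str, str]) -> dict[str, list[str]]:
    """Flag normalized bullet lines that appear in more than one file."""
    seen = defaultdict(set)
    for name, text in files.items():
        for line in text.splitlines():
            stripped = line.strip()
            if stripped.startswith(("-", "*", "•")):
                # Normalize: drop the bullet marker and lowercase.
                rule = stripped.lstrip("-*• ").strip().lower()
                if rule:
                    seen[rule].add(name)
    return {rule: sorted(fs) for rule, fs in seen.items() if len(fs) > 1}
```

Exact-match duplicates are only the easy cases; near-duplicates and semantic overlap still need human (or model) judgment.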

Audit workflow

1. Map the real behavior sources

Do not assume every file matters equally. First answer:

  • Which files are most likely shaping behavior now?
  • Which are direct behavior rules vs supporting evidence?
  • Which are probably ignored or low-weight?

2. Identify the user's real complaint

Do not let verbose files distract from the actual failure mode. Ask or infer:

  • What is the user truly unhappy with?
  • What is the root problem?
  • Which observed symptoms are secondary?

Example: “progress-sounding replies” may be a symptom; “not actually doing the work” may be the root issue.

3. Read for layering problems

Check whether files are cleanly separated by role:

  • identity/persona
  • working style
  • hard boundaries
  • task execution rules
  • temporary trial rules
  • business workflow rules

Flag when these are mixed together in ways that weaken the important rules.

4. Check alignment across files

Ask:

  • Do the core live rules point in the same direction, or are they pulling behavior apart?
  • Does the workspace's correction / learnings layer support or contradict the live rules?
  • If the widened scope includes behavior-improvement or trial files, do those files match the live rules?
  • Are older rules still pulling behavior in the wrong direction?

5. Judge whether the most important rule is actually prominent enough

The key audit question is not just “is the right rule written somewhere?” It is:

  • Is the right rule clear?
  • Is it near the top or buried?
  • Is it specific enough to change behavior?
  • Is it being diluted by too many softer surrounding rules?

6. Recommend by role, not by habit

Do not tell the user to rewrite everything. Recommend changes by file role:

  • what should stay in AGENTS.md
  • what should move to the workspace's correction / learnings layer
  • what should move to references/review/tracking
  • what should become a stronger core rule
  • what should be deleted or merged

Output structure

Use this default output shape:

  1. Audit scope — what was checked
  2. Overall judgment — is the behavior layer mostly aligned or not
  3. Highest-priority problems — ranked
  4. Root cause vs symptoms — where relevant
  5. What is already fine — avoid over-editing
  6. Recommended changes — concrete and file-specific

For a reusable outline, see references/output-template.md.
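For readers who want to mechanize the report, the six-part shape above maps naturally onto a simple record. The field names below are assumptions mirroring the outline, not the contents of references/output-template.md:

```python
from dataclasses import dataclass, field

@dataclass
class AuditReport:
    scope: list[str]                # 1. what was checked
    overall_judgment: str           # 2. mostly aligned or not
    priority_problems: list[str]    # 3. ranked, worst first
    root_cause_notes: list[str] = field(default_factory=list)    # 4.
    already_fine: list[str] = field(default_factory=list)        # 5.
    recommended_changes: list[str] = field(default_factory=list) # 6. file-specific
```

Keeping "already fine" as a first-class field helps enforce the judgment rule below about not over-editing.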

Important judgment rules

  • Do not confuse “a rule exists somewhere” with “the agent is actually being steered by it.”
  • Do not recommend giant rewrites when a smaller structural cleanup would solve the issue.
  • Prefer fewer, clearer, stronger rules over many overlapping weak ones.
  • When the user's complaint is concrete, optimize for that real complaint first.
  • If a problem is mainly workflow/process rather than prompt wording, say so plainly.

When to widen the audit

Widen beyond the default scope only when needed, for example:

  • a shared file is explicitly referenced
  • the user asks for a broader workspace audit
  • the core files look fine but behavior still points to another steering source
  • recent behavioral evidence contains the only concrete signs of how the behavior shifted
