Symptom description: check whether a skill is qualified

v1.0.0

Audit another Codex skill for structural compliance, trigger quality, instruction clarity, reuse of scripts or references, and overall maintainability. Use w...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for aidenchangzy/skill-quality-auditor-new.

Install the skill "Symptom description: check whether a skill is qualified" (aidenchangzy/skill-quality-auditor-new) from ClawHub.
Skill page: https://clawhub.ai/aidenchangzy/skill-quality-auditor-new
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install aidenchangzy/skill-quality-auditor-new

ClawHub CLI

Package manager switcher

npx clawhub@latest install skill-quality-auditor-new
Security Scan
  • VirusTotal: Benign
  • OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (skill-quality-auditor) match the included functionality: a deterministic local audit script plus a rubric and guidance. The skill does not request unrelated binaries, credentials, or config paths.
Instruction Scope
SKILL.md explicitly tells the agent to run the bundled script (scripts/evaluate_skill.py) against a target skill folder and to inspect the files that script references. The script only reads files inside the target skill folder (SKILL.md, scripts/, references/, agents/openai.yaml); it makes no network calls and does not read unrelated system files.
Install Mechanism
No install spec is present (instruction-only plus a bundled, dependency-free Python script). Nothing is downloaded or extracted from external URLs; the script is included and runs locally.
Credentials
The skill declares no required environment variables, credentials, or special config paths, and the script does not read environment secrets or contact external endpoints.
Persistence & Privilege
always:false and no requested system modifications. The skill can be invoked autonomously (disable-model-invocation:false), which is the platform default. This is not a concern by itself, but it means an agent could run the auditor on any supplied folder it can access.
Assessment
This skill appears to do what it says: run the included scripts/evaluate_skill.py to audit a target skill folder against the bundled rubric. The script is dependency-free and reads only files inside the target skill directory. One minor issue: the frontmatter name in SKILL.md is 'skill-quality-auditor', while the registry slug is 'skill-quality-auditor-new'. Because the auditor flags name/folder-name mismatches, either rename the folder or align the frontmatter to avoid a structural penalty. Before letting an agent run this auditor autonomously on untrusted skill folders, remember that it will read any files inside the target folder (SKILL.md, scripts/, references/, agents/...), so only point it at content you trust or sandbox the input. Otherwise, there are no extra credentials, network downloads, or hidden behaviors to be concerned about.


  • Latest version: v1.0.0 (vk972hdd136qr58z8deszfndg8184qc0v)
  • 88 downloads · 0 stars · 1 version
  • Updated 2 weeks ago
  • License: MIT-0

Skill Quality Auditor

Overview

Evaluate a target skill with a consistent rubric and return a clear pass/fail-style verdict plus a multi-dimensional review. Prefer the bundled script for the first pass, then turn the raw findings into a concise human-readable assessment.

Workflow

  1. Identify the target skill folder.
  2. Run scripts/evaluate_skill.py <path-to-skill>.
  3. Read the report and group findings into:
    • final verdict
    • strengths
    • weaknesses
    • critical blockers
    • recommended fixes
  4. If the script reports missing context or borderline results, inspect the target skill's SKILL.md and any referenced resources before writing the final judgment.
  5. Keep the final answer decisive: say whether the skill is currently qualified, conditionally qualified, or not qualified.
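The grouping in step 3 can be sketched as a small helper. The input keys ("verdict", "strengths", "weaknesses", "blockers", "fixes") are assumptions about the JSON the script might emit, not its documented schema:

```python
# Hypothetical sketch of step 3: bucket raw auditor findings into the
# five report sections. Input key names are assumed, not documented.
def group_findings(raw: dict) -> dict:
    """Map raw evaluate_skill.py output into the report buckets."""
    return {
        "verdict": raw.get("verdict", "unknown"),
        "strengths": raw.get("strengths", []),
        "weaknesses": raw.get("weaknesses", []),
        "critical_blockers": raw.get("blockers", []),
        "recommended_fixes": raw.get("fixes", []),
    }

report = group_findings({"verdict": "Borderline", "blockers": ["weak trigger"]})
```

Using `.get` with defaults keeps the report well-formed even when the script omits a section.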

Rubric

Score the skill across these dimensions:

  • structure: required files, frontmatter validity, naming, obvious TODO placeholders
  • triggering: whether description clearly explains what the skill does and when to use it
  • workflow: whether the body gives actionable steps instead of vague guidance
  • progressive_disclosure: whether detailed material is kept in scripts or references instead of bloating SKILL.md
  • resources: whether scripts, references, and assets are included only when useful and are mentioned in the body
  • examples_and_outputs: whether the skill helps the agent understand expected usage or output shape
  • maintainability: clarity, concision, stale metadata checks, and overall ease of iteration

Use references/rubric.md when you need the detailed scoring logic and interpretation rules.
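A total score over these dimensions can be sketched as below. The 0-3 per-dimension scale is an illustrative assumption, not the scale defined in references/rubric.md:

```python
# Illustrative scoring sketch; the 0-3 scale per dimension is assumed.
DIMENSIONS = [
    "structure", "triggering", "workflow", "progressive_disclosure",
    "resources", "examples_and_outputs", "maintainability",
]

def total_score(scores: dict) -> int:
    """Sum per-dimension scores, treating missing dimensions as 0."""
    return sum(scores.get(d, 0) for d in DIMENSIONS)

total = total_score({"structure": 3, "triggering": 2, "workflow": 3})
```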

Verdict Rules

Use these labels:

  • Qualified: no critical blockers and score is strong enough for immediate use
  • Borderline: usable but needs material fixes soon
  • Not Qualified: missing required structure or too weak to trust in repeated use

Treat these as critical blockers:

  • missing SKILL.md
  • invalid or missing YAML frontmatter
  • missing name or description
  • unresolved template placeholders such as TODO
  • description too weak to trigger reliably
  • instructions too incomplete to execute the core task safely
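The verdict rules above can be sketched as a simple mapping: any critical blocker forces "Not Qualified", otherwise the total score decides. The numeric cutoffs (15 and 10) are illustrative assumptions, not the rubric's real thresholds:

```python
# Sketch of the verdict rules; blocker codes and score cutoffs are assumed.
CRITICAL_BLOCKERS = {
    "missing_skill_md",
    "invalid_frontmatter",
    "missing_name_or_description",
    "unresolved_placeholders",
    "weak_trigger_description",
    "incomplete_instructions",
}

def verdict(score: int, blockers: set) -> str:
    """Map a total score and a set of blocker codes to a verdict label."""
    if blockers & CRITICAL_BLOCKERS:
        return "Not Qualified"
    if score >= 15:
        return "Qualified"
    if score >= 10:
        return "Borderline"
    return "Not Qualified"
```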

Output Shape

Prefer this response shape:

Verdict

State Qualified, Borderline, or Not Qualified in the first sentence and explain the main reason.

Score Summary

Include the total score and 3-5 highest-signal dimension notes.

What Works Well

List concrete strengths tied to files or sections.

What Needs Work

List concrete weaknesses tied to files or sections.

Next Fixes

List the smallest set of changes most likely to move the skill to Qualified.

Script

Run:

python3 scripts/evaluate_skill.py /absolute/path/to/skill

Optional JSON mode:

python3 scripts/evaluate_skill.py /absolute/path/to/skill --json

The script is dependency-free and performs a deterministic first-pass audit. It is intentionally conservative: if a skill barely explains its trigger conditions or still contains template leftovers, the script should flag it instead of assuming good intent.
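One of the conservative first-pass checks described above can be sketched as follows. The marker list (TODO/FIXME/TBD) is an assumption about what the bundled script scans for, not its actual rule set:

```python
# Hypothetical sketch of a template-leftover check on SKILL.md.
import re
from pathlib import Path

PLACEHOLDER_RE = re.compile(r"\b(TODO|FIXME|TBD)\b")

def find_placeholders(skill_dir: str) -> list:
    """Return (line_number, line) pairs containing template leftovers."""
    skill_md = Path(skill_dir) / "SKILL.md"
    if not skill_md.exists():
        return [(0, "SKILL.md is missing")]  # itself a critical blocker
    hits = []
    lines = skill_md.read_text(encoding="utf-8").splitlines()
    for n, line in enumerate(lines, 1):
        if PLACEHOLDER_RE.search(line):
            hits.append((n, line.strip()))
    return hits
```

Flagging any hit rather than judging intent matches the "conservative by design" behavior described above.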

Review Rules

  • Prefer evidence over taste.
  • Praise strengths explicitly; do not only list problems.
  • Distinguish hard failures from improvement opportunities.
  • If the target skill intentionally omits scripts, references, or agents metadata, do not penalize that by itself.
  • Penalize unused or stale directories when they add confusion.
  • When inferring quality from wording, cite the exact section or file that led to the conclusion.

Trigger Examples

  • "Check whether this skill is up to standard."
  • "Review this skill and tell me if it passes."
  • "Audit this skill folder and summarize the good and bad."
  • "Evaluate this skill against best practices and give me a verdict."
