Writing Skills
PassAudited by ClawScan on May 10, 2026.
Overview
This appears to be a legitimate guide for writing and testing skills, with some expected but noteworthy behavior around persistent skill files, persuasive testing prompts, and an optional local Graphviz helper.
This skill is reasonable to install if you want help authoring and testing agent skills. Before using it, remember that skill edits can affect future agent behavior, and keep any pressure-testing prompts confined to safe test environments. Only run render-graphs.js if you trust the skill directory and your local Graphviz installation.
Findings (3)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
If you run the helper, it executes a local binary and writes generated SVG files under the selected skill directory.
The helper script runs the local Graphviz `dot` binary when manually invoked. The command is fixed and aligned with rendering diagrams from skill documentation, but it is still local command execution.
const { execSync } = require('child_process'); ... return execSync('dot -Tsvg', { input: dotContent, encoding: 'utf-8', maxBuffer: 10 * 1024 * 1024 });Run the helper only on skill directories you trust, use a trusted Graphviz installation, and review generated SVG files before sharing them.
Poorly scoped or overly forceful skill edits could persistently steer future agents after the current task is over.
The skill is intended to create or edit persistent skill documents that future agents may load as context. This is expected for a skill-authoring guide, but persistent instructions can influence later agent behavior.
Personal skills live in agent-specific directories (`~/.claude/skills` for Claude Code, `~/.agents/skills/` for Codex)
Review skill changes like configuration changes: keep them narrow, use version control or backups, and avoid broad instructions that could override future user intent.
If used outside a controlled test, similar prompts could pressure agents or subagents into rigid decisions without normal context checks.
The testing guidance recommends realistic pressure scenarios for subagents, including framing tests as real work. This is purpose-aligned for stress-testing skills, but it is a persuasive/deceptive testing technique that should be contained.
Make agent believe it's real work, not a quiz.
Use these prompts only in controlled testing with limited tool access, and avoid deceptive urgency or pressure framing in production-facing skills unless clearly justified and disclosed.
