v1.0.0

Skill Creator

Verdict: Benign

ClawScan verdict for this skill. Analyzed May 1, 2026, 7:18 AM.

Analysis

The skill appears purpose-aligned for creating and testing other skills, but users should supervise its local file changes, background evaluations, and any external helper scripts it references.

Guidance

This looks like a legitimate skill-building helper. Before installing or using it, verify any external evaluation tools it references, run its scripts only in an intended skills workspace, review generated or modified skills before enabling them, and do not include secrets in test prompts or packaged skill folders.

Findings (4)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Abnormal behavior control

Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.

Tool Misuse and Exploitation
Severity: Low · Confidence: High · Status: Note
scripts/init_skill.py
skill_dir = Path(path).resolve() / skill_name ... skill_md_path.write_text(skill_content) ... example_script.chmod(0o755)

The helper creates files and an executable example script at a user-supplied path. This is expected for initializing a new skill, but it gives the workflow local file-write authority that should be pointed only at an intended skill workspace.

User impact: If run with the wrong path or skill name, it could create skill files in an unintended local location.

Recommendation: Run the initializer only in a dedicated skills directory, check the resolved path before proceeding, and review generated files before installing or sharing them.
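The resolved-path check recommended here can be sketched as a small guard in front of the initializer. This is an illustrative pattern, not code from the reviewed skill; `SKILLS_ROOT` is a hypothetical workspace location you would set yourself:

```python
from pathlib import Path

# Hypothetical workspace root; point this at your intended skills directory.
SKILLS_ROOT = Path.home() / "skills"

def safe_skill_dir(path: str, skill_name: str) -> Path:
    """Resolve the target directory and refuse anything outside the workspace."""
    skill_dir = Path(path).resolve() / skill_name
    # is_relative_to requires Python 3.9+.
    if not skill_dir.is_relative_to(SKILLS_ROOT.resolve()):
        raise ValueError(f"refusing to write outside {SKILLS_ROOT}: {skill_dir}")
    return skill_dir
```

Calling this before any `write_text` or `chmod` keeps the workflow's file-write authority confined to the one directory you chose for it.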
Agentic Supply Chain Vulnerabilities
Severity: Low · Confidence: Medium · Status: Note
SKILL.md
Use the `eval-viewer/generate_review.py` script to show the user the results

SKILL.md references an evaluation helper script that is not included in the provided file manifest. That may be a normal local tool dependency, but its provenance is outside the supplied artifacts.

User impact: If the agent runs that external helper, its behavior depends on local code that was not part of this reviewed package.

Recommendation: Verify the source and contents of any referenced local evaluation tools before allowing the agent to run them.
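One way to act on this recommendation is to pin the helper's contents to a digest taken from a copy you have inspected, and refuse to run it if the contents change. A minimal sketch, assuming you have obtained a trusted SHA-256 for the script out of band:

```python
import hashlib
from pathlib import Path

def verify_helper(script_path: str, expected_sha256: str) -> bool:
    """Return True only if the script's contents match the pinned digest."""
    digest = hashlib.sha256(Path(script_path).read_bytes()).hexdigest()
    return digest == expected_sha256
```

A mismatch does not tell you the script is malicious, only that it is not the copy you reviewed; either way, the agent should not run it until you look again.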
Agent Goal Hijack
Severity: Low · Confidence: High · Status: Note
SKILL.md
make the skill descriptions a little bit "pushy" ... use this skill whenever the user mentions dashboards ... even if they don't explicitly ask

The skill encourages broader trigger language for generated skills. This is disclosed and intended to improve skill triggering, but overly broad descriptions can cause future skills to activate more often than the user expects.

User impact: Generated skills may be written to trigger in broad contexts, which can influence future agent behavior.

Recommendation: Keep generated skill descriptions specific enough that the skill activates only when it is genuinely relevant.
Sensitive data protection

Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.

Insecure Inter-Agent Communication
Severity: Low · Confidence: High · Status: Note
SKILL.md
run claude-with-access-to-the-skill on them ... While the runs happen in the background

The evaluation workflow can send test prompts and skill content through background agent runs. This is purpose-aligned for benchmarking a skill, but users should be aware of what data is included in those prompts.

User impact: Sensitive examples or private documents used as test prompts could be processed by background evaluation agents.

Recommendation: Avoid putting secrets or private data in evaluation prompts unless you intend them to be used in those background runs.
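As a last line of defense before a background run, prompts can be screened for credential-shaped strings. The patterns below are a heuristic sketch covering a few common secret formats, not an exhaustive or authoritative detector:

```python
import re

# Heuristic patterns for common secret formats; illustrative, not exhaustive.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),    # PEM private key header
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*\S+"),
]

def looks_sensitive(prompt: str) -> bool:
    """Flag prompts that appear to contain credentials before a background run."""
    return any(p.search(prompt) for p in SECRET_PATTERNS)
```

A positive match should pause the evaluation and ask the user to confirm, since false positives (e.g. documentation that mentions the word "password") are expected with pattern matching this coarse.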