Skill Creator Claude

v1.0.0

Create new skills, modify and improve existing skills, and measure skill performance. Use when users want to create a skill from scratch, update or optimize...

MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
Capability signals
Crypto: Can make purchases
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Benign
OpenClaw
Benign
high confidence
Purpose & Capability
Name/description (create, evaluate, and improve skills) align with the included agents, eval/grader/comparator docs, and utility scripts (generate_review.py, run_eval.py, aggregate_benchmark.py, etc.). No unexpected cloud credentials, binaries, or unrelated dependencies are requested.
Instruction Scope
SKILL.md instructs agents to author skills, create tests, run evaluations, and use the provided scripts (e.g., eval-viewer/generate_review.py) to present results. That involves reading workspace files, embedding outputs into an HTML review, and launching a small local HTTP server and browser. These actions are in scope for a skill testing/inspection tool, but they grant the skill access to all files in the provided workspace and expose them via a local web UI — consider what is stored in the workspace (secrets, tokens, private data).
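The review-serving pattern described above (write an HTML review, serve the workspace over a local HTTP server, open a browser) can be sketched with the stdlib modules the report mentions. This is a hypothetical reconstruction, not the actual code of generate_review.py; the function name and layout are illustrative:

```python
import functools
import http.server
import threading
from pathlib import Path

def serve_review(html: str, directory: str = ".", port: int = 0) -> http.server.HTTPServer:
    """Write review.html into `directory` and serve that directory on localhost.

    Hypothetical sketch; the real generate_review.py may differ. Port 0 lets
    the OS pick a free port. Note that SimpleHTTPRequestHandler serves *every*
    file under `directory`, which is why the report warns against running this
    in a workspace that contains secrets or tokens.
    """
    Path(directory, "review.html").write_text(html, encoding="utf-8")
    handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=directory)
    server = http.server.HTTPServer(("127.0.0.1", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A caller would then open `http://127.0.0.1:<port>/review.html` (for example with the stdlib `webbrowser` module), at which point anything in the served directory is reachable from the browser.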
Install Mechanism
No install spec; this is instruction- and repo-file-based. All code is bundled in the skill (Python scripts, HTML assets). There are no downloads from external URLs during install. This minimizes supply-chain risk compared with remote installers.
Credentials
The skill declares no required env vars, no credentials, and no special config paths. The scripts operate on workspace files and use only standard stdlib modules (subprocess, http.server, etc.). There is no disproportionate credential request.
Persistence & Privilege
The skill does not request persistent inclusion (always: false) and doesn't modify other skills, which is good. However, generate_review.py includes an internal _kill_port routine that runs lsof and may os.kill() PIDs listening on a port to free it before starting the local server; that can terminate unrelated processes. The scripts also start a local HTTP server, open a browser, and write feedback.json into the workspace. These behaviors are plausible for a local review tool but have side effects on the host that users should be aware of.
Assessment
This skill appears to do what it claims: authoring, evaluating, and iteratively improving skills using bundled scripts and HTML viewers. It requests no cloud keys or other credentials, and there is no remote installer. Before installing or running it:

1) Inspect the repository (all code is included) and search for any sensitive file paths it might read.
2) Avoid running it in a workspace that contains secrets or private keys, because the review tool embeds and serves workspace files.
3) Be aware that generate_review.py attempts to free a port by running lsof and killing the processes it finds; run it in a sandbox or container, or ensure the chosen port is safe to free.
4) Run the scripts as an ordinary (non-root) user.
5) For extra caution, run the tool inside an isolated environment (container or VM), or review the port-kill and file-embedding code in generate_review.py and modify the _kill_port behavior before use.
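A safer alternative to the port-kill behavior flagged in the assessment is to let the OS assign an ephemeral port rather than force-freeing a fixed one. This is a sketch of the general technique, not a patch against the actual script:

```python
import socket

def free_port() -> int:
    """Ask the OS for an unused TCP port instead of killing whatever holds a fixed one."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))  # port 0 = kernel picks a free ephemeral port
        return s.getsockname()[1]
```

A review server started on `free_port()` never needs to terminate other processes; the trade-off is that the URL changes between runs.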

Like a lobster shell, security has layers — review code before you run it.

latest · vk97f5347zh0v614a1w9qbhpz8h84h0rc

