Skill Creator (Ming)
v1.0.0
Create new skills, modify and improve existing skills, and measure skill performance. Use when users want to create a skill from scratch, edit, or optimize a...
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign · high confidence

Purpose & Capability
The name and description ('Skill Creator') match the included assets: SKILL.md describes creating/editing/evaluating skills and the repository includes scripts for packaging, running evaluations, generating reports, and an eval viewer. There are no unrelated required env vars, binaries, or external credentials that would be incoherent with the stated purpose.
Instruction Scope
SKILL.md instructs the agent to run evaluations, use bundled scripts (e.g., eval-viewer/generate_review.py), read skill files, transcripts, and outputs, and generate reports, all of which is expected for a skill-authoring/eval workflow. Note: generate_review.py (and the other agents' docs) explicitly reads workspace directories and embeds output files into an HTML page; running that script will enumerate and read files under the workspace (binaries are base64-embedded) and will write feedback.json. This behavior is consistent with the tool's purpose, but it means the script can expose any file present in the workspace if you host or serve the generated page.
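To make the exposure risk concrete, here is a minimal sketch of the embedding pattern described above. It is illustrative only: `embed_workspace` is a hypothetical function, not the script's actual API, but it shows why every file under the workspace, secrets included, ends up recoverable from the generated page.

```python
import base64
import mimetypes
from pathlib import Path


def embed_workspace(root: str) -> str:
    """Walk a workspace and embed every file into one self-contained HTML page.

    Each file's bytes are base64-encoded into a data URI, so the page
    carries a full copy of the workspace contents (text and binaries alike).
    """
    parts = ["<html><body>"]
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file():
            continue
        data = path.read_bytes()
        mime = mimetypes.guess_type(path.name)[0] or "application/octet-stream"
        b64 = base64.b64encode(data).decode("ascii")
        parts.append(
            f"<section><h3>{path.name}</h3>"
            f'<a download href="data:{mime};base64,{b64}">{path.name}</a>'
            f"</section>"
        )
    parts.append("</body></html>")
    return "\n".join(parts)
```

Anyone who can fetch the resulting page can decode those data URIs, which is why the workspace should be inspected before the viewer is pointed at it.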
Install Mechanism
No install spec or remote downloads are present; this is an instruction-and-scripts package that uses only local Python scripts and the stdlib. That is proportionate for the stated functionality and is lower risk than arbitrary remote installers.
Credentials
The skill declares no required environment variables, credentials, or config paths. The runtime instructions reference reading conversation history, local skill files, transcripts, and outputs — all appropriate for a skill-authoring/eval utility. There are no demands for unrelated secrets or cloud credentials.
Persistence & Privilege
always:false and no modifications to other skills or system-wide agent config are requested. The scripts write feedback.json and generate local HTML artifacts, which is normal for evaluation tooling. The skill does not request permanent platform-level privileges.
Assessment
This package appears coherent for creating and evaluating skills. Before running scripts (especially eval-viewer/generate_review.py):
- Inspect the workspace you point the tools at. The viewer will recursively read most files under the workspace and embed them into a self-contained HTML page (including binaries via base64). Don't point it at a directory containing secrets you don't want embedded/served.
- The viewer script attempts to free a chosen port by calling lsof and sending SIGTERM to process IDs it finds; on multi-user or production systems this may have side effects. Run in a dev/isolated environment rather than as root.
- Review the included scripts for any subprocess calls you are uncomfortable with; they currently use standard stdlib modules (subprocess, webbrowser, http.server) but may invoke local commands (e.g., lsof).
- If you plan to share the generated HTML, be aware it contains embedded data from the workspace; sanitize or remove sensitive files first.
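The port-freeing behaviour flagged in the second bullet follows a common pattern, sketched below under the assumption that the script works roughly like this (the real implementation may differ). Note the dependency on the external `lsof` binary and the blind SIGTERM, which on a shared machine could terminate unrelated processes.

```python
import os
import signal
import subprocess


def free_port(port: int) -> None:
    """Terminate whatever is listening on `port`.

    Illustrative of the pattern the scan flags: `lsof -ti tcp:PORT`
    lists the PIDs bound to the port, and each one is sent SIGTERM.
    Run as root on a multi-user system, this can kill other users'
    processes, hence the advice to use an isolated dev environment.
    """
    try:
        out = subprocess.run(
            ["lsof", "-ti", f"tcp:{port}"],
            capture_output=True, text=True, check=False,
        ).stdout
    except FileNotFoundError:
        # lsof is not installed; nothing to do.
        return
    for pid in out.split():
        try:
            os.kill(int(pid), signal.SIGTERM)
        except (ValueError, ProcessLookupError, PermissionError):
            pass  # stale or inaccessible PID
```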
If you want a deeper check, provide the full contents of the remaining omitted files (run_eval.py, run_loop.py, package_skill.py, etc.) and I can look for network calls, credential usage, or other unexpected behaviours.
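As a first pass on those remaining files, a quick triage like the following can surface imports that spawn processes or touch the network. This is a hedged sketch (`flag_imports` and the `SUSPECT` set are my own, not part of the skill) and is no substitute for reading the code.

```python
import ast
from pathlib import Path

# Top-level modules that can run commands or reach the network.
SUSPECT = {"subprocess", "socket", "urllib", "http"}


def flag_imports(script: str) -> list[tuple[int, str]]:
    """Return (line number, module) pairs for suspect imports in a script."""
    tree = ast.parse(Path(script).read_text())
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name.split(".")[0] in SUSPECT:
                    hits.append((node.lineno, alias.name))
        elif isinstance(node, ast.ImportFrom) and node.module:
            if node.module.split(".")[0] in SUSPECT:
                hits.append((node.lineno, node.module))
    return hits
```

A clean result only means no suspect module is imported at the top level; dynamic imports or shelling out via other means would still need manual review.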
