prompt-engineer-toolkit
Advisory: Audited by static analysis on Apr 30, 2026.
Overview
No suspicious patterns detected.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, and static-scan and capability signals. ClawScan does not execute the skill or run runtime probes.
If you supply an unsafe runner command, or run it against untrusted prompt or test-case content, the command can invoke local CLIs and inherit whatever access those CLIs already have.
The tester can execute a user-supplied local command after inserting prompt and test-case input. This is disclosed and purpose-aligned for running an LLM CLI, but it is still a broad execution capability.
parser.add_argument("--runner-cmd", help="External command template, e.g. 'llm --prompt {prompt} --input {input}'.")
...
proc = subprocess.run(parts, text=True, capture_output=True, check=True)

Use only trusted runner commands, review test cases and prompt files before running them, and prefer scoped LLM CLI profiles or sandboxed environments for experiments.
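From the evidence above, the tester appears to split the command template into argv parts and pass them to `subprocess.run` without a shell. A minimal sketch of that pattern under those assumptions (the `run_case` helper and per-argument placeholder expansion are illustrative, not the skill's actual code):

```python
import shlex
import subprocess

def run_case(runner_cmd: str, prompt: str, case_input: str) -> str:
    """Expand {prompt}/{input} placeholders and run the external LLM CLI.

    Splitting the template BEFORE substitution keeps untrusted prompt or
    test-case text inside a single argv entry, so it cannot inject extra
    shell words; subprocess.run is invoked without shell=True for the
    same reason.
    """
    parts = [
        part.format(prompt=prompt, input=case_input)
        for part in shlex.split(runner_cmd)
    ]
    proc = subprocess.run(parts, text=True, capture_output=True, check=True)
    return proc.stdout
```

Even with this split-then-substitute order, the command itself still runs with the caller's full privileges, which is why a trusted template and a sandboxed environment matter.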
Private campaign prompts, product details, or other sensitive text included in prompts can remain on disk and appear in future lists or diffs.
The versioner stores full prompt content and metadata in a persistent local JSONL file. This is expected for versioning, but prompt history can retain sensitive or proprietary text.
parser.add_argument("--store", default=".prompt_versions.jsonl", help="JSONL history file path.")
...
prompt: str
...
path.write_text(payload + ("\n" if payload else ""), encoding="utf-8")

Keep the store in a private location, avoid putting secrets in prompts, consider adding the store file to .gitignore, and delete or protect old histories when no longer needed.
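The store described above is an append-style JSONL history holding full prompt text. A minimal sketch of how such a versioner typically behaves (the `save_version`/`list_versions` names and record fields are assumptions for illustration, not the skill's actual API):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def save_version(store: Path, name: str, prompt: str) -> None:
    """Append one version record as a single JSON line."""
    record = {
        "name": name,
        "prompt": prompt,  # full prompt text persists on disk
        "saved_at": datetime.now(timezone.utc).isoformat(),
    }
    with store.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def list_versions(store: Path, name: str) -> list[dict]:
    """Return all saved records for a prompt name, oldest first."""
    if not store.exists():
        return []
    lines = store.read_text(encoding="utf-8").splitlines()
    records = (json.loads(line) for line in lines if line)
    return [r for r in records if r["name"] == name]
```

Because every saved prompt remains readable in the JSONL file, the recommended mitigations apply directly: keep the file out of version control and purge histories that contain sensitive text.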
