Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Prompt Crafter
v3.1.0Build AI prompts that actually work — for ChatGPT, Claude, Gemini, or any LLM. Covers 4 frameworks (RACE, Chain-of-Thought, Constraint-Stacking, Few-Shot) wi...
⭐ 0· 295·1 current·1 all-time
MIT-0
Download zip
LicenseMIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign
high confidencePurpose & Capability
The name/description match the provided SKILL.md and references. The files are prompt-engineering frameworks and examples; there are no unrelated required binaries, env vars, or config paths. Nothing requested is disproportionate to a prompt-crafting/reference skill.
Instruction Scope
The runtime instructions stay within prompt-engineering scope (frameworks, examples, troubleshooting, production safety). The skill explicitly recommends testing adversarial inputs (mentions patterns like 'Ignore all previous instructions...') — this is expected for a prompt-jailbreak awareness guide but could be reused to craft jailbreaks if applied against agents that have privileged access. The SKILL.md does not instruct reading local files, accessing environment variables, or sending data to external endpoints.
Install Mechanism
Instruction-only skill with no install spec and no code files. Nothing is written to disk or fetched at install time; lowest-risk install profile.
Credentials
No required environment variables, credentials, or config paths are declared or referenced. The skill does not ask for unrelated secrets or privileged tokens.
Persistence & Privilege
Flags show default behavior (not always:true). The skill is user-invocable and not force-installed; it does not request persistent system presence or modify other skills' configs.
Scan Findings in Context
[prompt_injection_ignore_previous_instructions] expected: The scanner flagged occurrences of 'ignore-previous-instructions' pattern. The SKILL.md intentionally references this as an adversarial test case and warns to test such inputs; this is appropriate for a prompt-engineering guide and not evidence of malicious behavior on its own.
Assessment
This skill is coherent and low-risk: it only contains guidance and examples for building prompts and does not request credentials or install software. Note the author explicitly includes adversarial/jailbreak patterns as examples to test robustness — that is expected for a prompt-engineering resource but could be misused if you feed those exact jailbreak patterns to an agent that has network, filesystem, or privileged access. Before using these prompts in production, (1) keep refusal/error paths and output caps as recommended, (2) test in an isolated environment (no access to secrets or systems), and (3) avoid running adversarial tests against agents that can act on or exfiltrate sensitive data.SKILL.md:130
Prompt-injection style instruction pattern detected.
About static analysis
These patterns were detected by automated regex scanning. They may be normal for skills that integrate with external APIs. Check the VirusTotal and OpenClaw results above for context-aware analysis.Like a lobster shell, security has layers — review code before you run it.
latestvk97c6xgt74f1ghzggwyxtcfj1s836dpy
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
