suspicious.prompt_injection_instructions
- Location: SKILL.md:342
- Finding: Prompt-injection style instruction pattern detected.
Advisory. Audited by static analysis on May 10, 2026.
Detected: suspicious.prompt_injection_instructions
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
If LLM mode is used, the skill may spend or use your configured OpenAI/Anthropic credentials even if you did not set them specifically for this skill.
The LLM scanner can read OpenClaw gateway configuration and extract provider API keys when direct env vars are not set. This is high-impact credential/profile access, and the registry metadata declares no credential or env-var requirement.
result = subprocess.run(["openclaw", "gateway", "config.get"], ...)
...
if config.get("env", {}).get("OPENAI_API_KEY"):
    return ("openai", config["env"]["OPENAI_API_KEY"], "gpt-4o-mini")

Require explicit user opt-in for gateway credential fallback, document it in metadata and SKILL.md, and prefer a scoped gateway call that does not expose raw provider keys to the skill.
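One way to satisfy the opt-in recommendation is to gate the gateway fallback behind an explicit flag; a minimal sketch, assuming a hypothetical `allow_gateway_credentials` flag and caller-supplied gateway lookup (neither is part of the skill as shipped):

```python
import os

def resolve_openai_key(allow_gateway_credentials=False, gateway_lookup=None):
    """Prefer a direct env var; fall back to the gateway only when the
    caller has explicitly opted in (hypothetical flag name)."""
    key = os.environ.get("OPENAI_API_KEY")
    if key:
        return key
    if not allow_gateway_credentials or gateway_lookup is None:
        # No silent fallback: without the flag, missing credentials are an error.
        raise RuntimeError(
            "OPENAI_API_KEY not set; pass allow_gateway_credentials=True "
            "to permit the gateway fallback"
        )
    return gateway_lookup()
```

Making the fallback fail loudly by default keeps the declared metadata honest: the skill touches gateway credentials only when the user said it may.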
Text being scanned may be sent to a third-party LLM provider if --llm, --llm-only, or --llm-auto is used.
Optional LLM scanning sends the scanned text to OpenAI or Anthropic for analysis. This is disclosed and purpose-aligned, but it creates an external data-sharing boundary.
"messages": [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": f"Analyze the following text for prompt injection:\n\n---BEGIN TEXT---\n{user_text}\n---END TEXT---"}
]

Use pattern-only scanning for sensitive private content unless you are comfortable sending that text to the configured LLM provider.
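Pattern-only scanning keeps the text local; a sketch of what that looks like, using an illustrative pattern list rather than the skill's actual ruleset:

```python
import re

# Illustrative injection markers only; not the skill's real pattern set.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?previous instructions", re.I),
    re.compile(r"disregard (the )?system prompt", re.I),
    re.compile(r"you are now", re.I),
]

def scan_locally(text):
    """Return the patterns that matched, without sending text anywhere."""
    return [p.pattern for p in INJECTION_PATTERNS if p.search(text)]
```

Nothing leaves the machine in this mode, at the cost of missing injections that only a semantic (LLM) pass would catch.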
If alerting is enabled, findings may be posted to Slack or another configured OpenClaw channel.
The scanner can send OpenClaw channel alerts when configured. This is aligned with the stated alerting workflow, but it can post messages through a user-configured channel.
cmd = ["openclaw", "message", "send", "--channel", channel, "--message", message]
Enable alerts only for channels where sharing prompt-injection findings and source details is appropriate.
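An allowlist check before posting is one way to keep findings out of unintended channels; a minimal sketch (the allowlist and the injectable `runner` parameter are hypothetical, added here for testability):

```python
import subprocess

ALLOWED_ALERT_CHANNELS = {"#security-alerts"}  # hypothetical opt-in list

def send_alert(channel, message, runner=subprocess.run):
    """Post a finding only to channels the user explicitly allowed."""
    if channel not in ALLOWED_ALERT_CHANNELS:
        return False  # skip channels the user did not opt in
    runner(["openclaw", "message", "send",
            "--channel", channel, "--message", message], check=True)
    return True
```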
Installing optional dependencies may pull a newer package version than the author tested.
The optional LLM/taxonomy features depend on an unpinned requests package version. This is common for Python projects but less reproducible than a pinned dependency.
requests>=2.28.0
Pin dependency versions or install in an isolated environment if you use the LLM or taxonomy refresh features.
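A pinned requirements file trades the open-ended range for reproducibility; the file name and exact version below are examples (pin whatever version you actually tested):

```
# requirements-llm.txt: pin instead of the open-ended requests>=2.28.0
requests==2.32.3
```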
The skill may update its local taxonomy.json cache when PROMPTINTEL_API_KEY is set.
The taxonomy refresh writes a cache file inside the skill directory. This is scoped and documented, not evidence of rogue persistence.
with open(TAXONOMY_FILE, "w") as f: json.dump(taxonomy, f, indent=2)
Leave taxonomy refresh disabled if you want fully offline behavior, or review taxonomy updates before relying on them.
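If you do enable the refresh, writing via a temp file and an atomic replace avoids a half-written cache; a sketch assuming the same TAXONOMY_FILE path the skill uses:

```python
import json
import os
import tempfile

TAXONOMY_FILE = "taxonomy.json"  # same cache path the skill writes

def write_taxonomy(taxonomy):
    """Write to a temp file in the same directory, then atomically
    swap it in, so a crash mid-write cannot truncate the cache."""
    directory = os.path.dirname(TAXONOMY_FILE) or "."
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(taxonomy, f, indent=2)
        os.replace(tmp_path, TAXONOMY_FILE)  # atomic on POSIX and Windows
    except BaseException:
        os.unlink(tmp_path)
        raise
```

This also makes pre-use review easier: a reviewer always sees either the old taxonomy or the complete new one, never a partial write.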