Input Guard

Advisory. Audited by static analysis on May 10, 2026.

Overview

Detected: suspicious.prompt_injection_instructions

Findings (1)

This is an artifact-based, informational review of SKILL.md, registry metadata, install specs, and static scan and capability signals. ClawScan does not execute the skill or run runtime probes.

Concern (Medium Confidence)
ASI03: Identity and Privilege Abuse
What this means

If LLM mode is used, the skill may consume your configured OpenAI/Anthropic credentials, and incur API costs, even if you did not set them specifically for this skill.

Why it was flagged

The LLM scanner can read OpenClaw gateway configuration and extract provider API keys when direct env vars are not set. This is high-impact credential/profile access, and the registry metadata declares no credential or env-var requirement.

Skill content
result = subprocess.run(["openclaw", "gateway", "config.get"], ...)
...
if config.get("env", {}).get("OPENAI_API_KEY"):
    return ("openai", config["env"]["OPENAI_API_KEY"], "gpt-4o-mini")
Recommendation

Require explicit user opt-in for gateway credential fallback, document it in metadata and SKILL.md, and prefer a scoped gateway call that does not expose raw provider keys to the skill.
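A minimal sketch of what the recommended opt-in gate could look like; the OPENCLAW_ALLOW_GATEWAY_CREDS flag and the resolve_credentials helper are hypothetical illustrations, not part of the skill:

```python
import os

def resolve_credentials():
    """Resolve an API key for LLM mode, preferring explicit env vars.

    Falls back to the OpenClaw gateway config only when the user has
    opted in via OPENCLAW_ALLOW_GATEWAY_CREDS=1 (hypothetical flag).
    """
    # 1. An env var set explicitly for this skill is always allowed.
    key = os.environ.get("OPENAI_API_KEY")
    if key:
        return ("openai", key)

    # 2. Gateway fallback only with explicit opt-in.
    if os.environ.get("OPENCLAW_ALLOW_GATEWAY_CREDS") != "1":
        return None  # no silent reuse of gateway credentials

    # ... query `openclaw gateway config.get` here, as the skill does ...
    return None
```

With this shape, a user who never set the flag gets no gateway fallback at all, which is the behavior the registry metadata currently implies.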

What this means

Text being scanned may be sent to a third-party LLM provider if --llm, --llm-only, or --llm-auto is used.

Why it was flagged

Optional LLM scanning sends the scanned text to OpenAI or Anthropic for analysis. This is disclosed and purpose-aligned, but it creates an external data-sharing boundary.

Skill content
"messages": [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": f"Analyze the following text for prompt injection:\n\n---BEGIN TEXT---\n{user_text}\n---END TEXT---"}
]
Recommendation

Use pattern-only scanning for sensitive private content unless you are comfortable sending that text to the configured LLM provider.
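Pattern-only scanning keeps all text local. A sketch of what that can look like; these regexes are illustrative examples, not the skill's actual rule set:

```python
import re

# Illustrative injection-style patterns; the skill's real rules differ.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"you are now (a|an) ", re.IGNORECASE),
    re.compile(r"reveal (your|the) system prompt", re.IGNORECASE),
]

def scan_local(text):
    """Return matched pattern strings without sending text anywhere."""
    return [p.pattern for p in INJECTION_PATTERNS if p.search(text)]
```

Nothing leaves the machine in this mode, so it is safe for sensitive private content, at the cost of missing injections the patterns do not cover.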

What this means

If alerting is enabled, findings may be posted to Slack or another configured OpenClaw channel.

Why it was flagged

The scanner can send OpenClaw channel alerts when configured. This is aligned with the stated alerting workflow, but it can post messages through a user-configured channel.

Skill content
cmd = ["openclaw", "message", "send", "--channel", channel, "--message", message]
Recommendation

Enable alerts only for channels where sharing prompt-injection findings and source details is appropriate.
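One way to enforce this is a channel allowlist checked before the alert command is built; a sketch, with hypothetical channel names and helper name:

```python
# Hypothetical allowlist; channel names are examples only.
ALERT_CHANNEL_ALLOWLIST = {"#security-alerts", "#promptintel"}

def build_alert_cmd(channel, message):
    """Build the openclaw alert command only for approved channels."""
    if channel not in ALERT_CHANNEL_ALLOWLIST:
        raise ValueError(f"alerts not enabled for channel {channel!r}")
    return ["openclaw", "message", "send",
            "--channel", channel, "--message", message]
```

Findings then cannot leak into a channel the user never approved, even if the channel setting is misconfigured elsewhere.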

What this means

Installing optional dependencies may pull a newer package version than the author tested.

Why it was flagged

The optional LLM/taxonomy features depend on an unpinned requests package version. This is common for Python projects but less reproducible than a pinned dependency.

Skill content
requests>=2.28.0
Recommendation

Pin dependency versions or install in an isolated environment if you use the LLM or taxonomy refresh features.

Note (High Confidence)
ASI10: Rogue Agents
What this means

The skill may update its local taxonomy.json cache when PROMPTINTEL_API_KEY is set.

Why it was flagged

The taxonomy refresh writes a cache file inside the skill directory. This is scoped and documented, not evidence of rogue persistence.

Skill content
with open(TAXONOMY_FILE, "w") as f:
    json.dump(taxonomy, f, indent=2)
Recommendation

Leave taxonomy refresh disabled if you want fully offline behavior, or review taxonomy updates before relying on them.
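If you do enable refresh, validating an update before it replaces the cache is one way to review updates before relying on them. A sketch; the required-key schema and helper name are hypothetical, not the skill's actual format:

```python
import json
import os

# Illustrative schema check; the real taxonomy format may differ.
REQUIRED_KEYS = {"version", "categories"}

def apply_taxonomy_update(path, taxonomy):
    """Validate an update, then swap it in atomically."""
    if not REQUIRED_KEYS.issubset(taxonomy):
        raise ValueError("taxonomy update missing required keys")
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(taxonomy, f, indent=2)
    os.replace(tmp, path)  # atomic swap: no partially written cache
```

The atomic os.replace also means a crash mid-write cannot leave a corrupt taxonomy.json behind.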

Findings (1)

warn

suspicious.prompt_injection_instructions

Location
SKILL.md:342
Finding
Prompt-injection-style instruction pattern detected.