suspicious.prompt_injection_instructions
- Location: SKILL.md:152
- Finding: Prompt-injection style instruction pattern detected.
- Severity: Advisory
- Audited by static analysis on May 10, 2026.
Detected: suspicious.prompt_injection_instructions
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Running the command or loading external plugins can execute local Wurd project/plugin code while generating the HTML output.
The skill tells users how to run the Wurd compiler and optionally load plugins from a directory. Executing project/plugin code is expected for a plugin-based compiler, but it should be limited to trusted code.
`npx tsx src/cli.ts <path-to-markdown> [--no-cache] [--pdf] [--plugins <dir>]`
Run this only in a trusted Wurd project, and review external plugins before using the `--plugins` option.
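The review step above can be sketched as a short shell workflow. This is a minimal illustration, not part of the skill: `plugins/` and `demo.ts` are hypothetical names, and the stand-in plugin file is created here only so the listing has something to show.

```shell
# Hypothetical review workflow; plugins/ and demo.ts are illustrative names.
mkdir -p plugins
printf 'export default {};\n' > plugins/demo.ts   # stand-in for a third-party plugin
ls -l plugins/                                    # see exactly which files would be loaded
head -n 20 plugins/*.ts                           # skim each plugin's source before opting in
# Only after review, pass the directory to the compiler:
#   npx tsx src/cli.ts <path-to-markdown> --plugins plugins/
```

The point is ordering: enumerate and read the plugin sources first, and only then run the compiler with `--plugins`, since every file in that directory executes locally.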
If enabled, LLM-powered plugins can use your LLM provider account and consume API quota.
The optional LLM-powered plugins require a provider API key and endpoint. This credential use is disclosed and tied to the stated graph/table generation features.
`LLM_API_KEY=sk-... LLM_BASE_URL=https://api.anthropic.com/v1 LLM_MODEL=claude-opus-4-6`
Use a dedicated or scoped API key if available, keep the `.env` file private, and avoid committing it to version control.
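The credential hygiene above can be sketched as follows. This is an illustrative setup, not prescribed by the skill; the values written to `.env` are placeholders, and the `.gitignore` step assumes the project is a git repository.

```shell
# Placeholder values only; never commit real credentials.
cat > .env <<'EOF'
LLM_API_KEY=sk-REPLACE_ME
LLM_BASE_URL=https://api.anthropic.com/v1
LLM_MODEL=claude-opus-4-6
EOF
chmod 600 .env                                            # readable by your user alone
grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore
```

Restricting file permissions and ignoring `.env` in version control are cheap safeguards; a scoped key limits the blast radius if the file leaks anyway.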
Generated LLM content may persist locally and affect later output if the cache is reused.
The skill discloses local persistence of LLM outputs. Cached generated content can remain on disk and be reused across later compilations until cleared.
LLM responses are cached in `.cache/llm/` — use `--no-cache` to regenerate.
Use `--no-cache` or clear `.cache/llm/` when working with sensitive documents or when you want fresh LLM output.
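Both cleanup options above can be sketched in a few lines. The `.cache/llm/` path is the one stated by the skill; the stand-in cache entry is created here only so the cleanup has something to remove, and the commented compile line is the skill's own command.

```shell
# Stand-in cached entry so the cleanup step has something to act on.
mkdir -p .cache/llm && echo '{}' > .cache/llm/example.json
du -sh .cache/llm/          # check how much cached LLM output has accumulated
rm -rf .cache/llm/          # drop the cache entirely
# Or bypass the cache for a single run:
#   npx tsx src/cli.ts <path-to-markdown> --no-cache
```

Deleting the directory removes all persisted LLM output at once, while `--no-cache` leaves existing entries in place and only skips reuse for that run.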