Wurd
Review
Audited by ClawScan on May 10, 2026.
Overview
Prompt-injection indicators were detected in the submitted artifacts (system-prompt-override); human review is required before treating this skill as clean.
Apart from the prompt-injection flag noted above, this skill looks safe to install as documentation for Wurd. Before using the LLM features, protect your API key and understand that provider calls and cached outputs may include your document content. Only run the compiler and external plugins from sources you trust. Because ClawScan detected prompt-injection indicators (system-prompt-override), the skill still requires human review even though the model response observed during scanning was benign.
Findings (3)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Running the command or loading external plugins can execute local Wurd project/plugin code while generating the HTML output.
The skill tells users how to run the Wurd compiler and optionally load plugins from a directory. Executing project/plugin code is expected for a plugin-based compiler, but it should be limited to trusted code.
`npx tsx src/cli.ts <path-to-markdown> [--no-cache] [--pdf] [--plugins <dir>]`
Run this only in a trusted Wurd project and review external plugins before using the `--plugins` option.
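To make the execution risk concrete: plugin loaders of this kind typically dynamic-import every module in the given directory, which runs each module's top-level code with the compiler's full privileges. A minimal TypeScript sketch of that pattern (the `WurdPlugin` interface and file layout here are assumptions for illustration, not Wurd's actual loader):

```ts
import { readdir } from "node:fs/promises";
import { join } from "node:path";
import { pathToFileURL } from "node:url";

// Hypothetical plugin shape for illustration; Wurd's real interface may differ.
interface WurdPlugin {
  name: string;
  transform(html: string): string | Promise<string>;
}

// Importing a directory of plugins executes each module's top-level code
// immediately, with the same privileges as the compiler process
// (filesystem, network, environment variables).
async function loadPlugins(dir: string): Promise<WurdPlugin[]> {
  const plugins: WurdPlugin[] = [];
  for (const file of await readdir(dir)) {
    if (!file.endsWith(".js") && !file.endsWith(".ts")) continue;
    const mod = await import(pathToFileURL(join(dir, file)).href);
    plugins.push(mod.default as WurdPlugin);
  }
  return plugins;
}
```

Under that model, any module in the plugin directory could read files or open network connections at import time, which is why the recommendation above limits `--plugins` to reviewed code.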
If enabled, the LLM-powered plugins can use your provider account and consume API quota.
The optional LLM-powered plugins require a provider API key and endpoint. This credential use is disclosed and tied to the stated graph/table generation features.
`LLM_API_KEY=sk-... LLM_BASE_URL=https://api.anthropic.com/v1 LLM_MODEL=claude-opus-4-6`
Use a dedicated or scoped API key if available, keep the `.env` file private, and avoid committing it to version control.
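As a concrete precaution, add `.env` to `.gitignore`, and have any wrapper script fail fast when the key is missing. A minimal sketch, assuming the compiler reads these settings from `process.env` (an assumption; this review does not confirm Wurd's loading mechanism):

```ts
// Assumption: the compiler reads LLM_* settings from process.env,
// for example via a dotenv-style loader. Verify against Wurd's docs.
const apiKey = process.env.LLM_API_KEY;
if (!apiKey) {
  // Fail fast rather than sending requests with a missing credential.
  throw new Error("LLM_API_KEY is not set; skipping LLM-powered plugins.");
}
// Log only that credentials are present, never the key itself.
console.log("LLM credentials loaded (key redacted).");
```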
Generated LLM content may persist locally and affect later output if the cache is reused.
The skill discloses local persistence of LLM outputs. Cached generated content can remain on disk and be reused across later compilations until cleared.
LLM responses are cached in `.cache/llm/` — use `--no-cache` to regenerate.
Use `--no-cache` or clear `.cache/llm/` when working with sensitive documents or when you want fresh LLM output.
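For context on why stale content can persist: prompt caches of this kind usually key each response by a hash of the prompt, so an unchanged document silently reuses the stored file on every compile. A minimal sketch of that pattern (the key scheme and file layout are assumptions, not Wurd's actual code):

```ts
import { createHash } from "node:crypto";
import { mkdir, readFile, writeFile } from "node:fs/promises";
import { join } from "node:path";

const CACHE_DIR = ".cache/llm";

// The cache key is a hash of the prompt: the same prompt always maps to
// the same file, so an old response is reused until the file is deleted
// (or a --no-cache style flag bypasses this lookup entirely).
async function cachedCompletion(
  prompt: string,
  callModel: (p: string) => Promise<string>,
): Promise<string> {
  const key = createHash("sha256").update(prompt).digest("hex");
  const path = join(CACHE_DIR, `${key}.txt`);
  try {
    return await readFile(path, "utf8"); // cache hit: no provider call
  } catch {
    const output = await callModel(prompt); // cache miss: spends quota
    await mkdir(CACHE_DIR, { recursive: true });
    await writeFile(path, output, "utf8"); // persists on disk until cleared
    return output;
  }
}
```

Under that model, deleting `.cache/llm/` or passing `--no-cache` is the only way to force a fresh provider call for an unchanged document, which matches the recommendation above.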
