Wurd

Pass. Audited by VirusTotal on May 10, 2026.

Overview

Type: OpenClaw Skill
Name: wurd
Version: 1.0.0

The 'wurd' skill is a document compiler that converts Markdown into editorial HTML pages. Its documentation (SKILL.md) provides standard instructions for CLI usage, plugin configuration, and LLM integration via environment variables, with no evidence of malicious intent, data exfiltration, or unauthorized execution patterns.

Findings (3)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

Running the command or loading external plugins can execute local Wurd project/plugin code while generating the HTML output.

Why it was flagged

The skill tells users how to run the Wurd compiler and optionally load plugins from a directory. Executing project/plugin code is expected for a plugin-based compiler, but it should be limited to trusted code.

Skill content
npx tsx src/cli.ts <path-to-markdown> [--no-cache] [--pdf] [--plugins <dir>]
Recommendation

Run this only in a trusted Wurd project and review external plugins before using the --plugins option.
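A minimal invocation sketch of that advice; the document path `docs/report.md` and the plugin directory `./plugins` are hypothetical, and the commands assume they are run from the root of a trusted Wurd project.

```shell
# Compile a markdown file in a trusted Wurd project (paths are illustrative):
npx tsx src/cli.ts docs/report.md

# Review any external plugin code before loading it:
ls ./plugins

# ...then opt in explicitly:
npx tsx src/cli.ts docs/report.md --plugins ./plugins
```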

What this means

If enabled, the compiler can use your LLM provider account and consume quota for LLM-powered plugins.

Why it was flagged

The optional LLM-powered plugins require a provider API key and endpoint. This credential use is disclosed and tied to the stated graph/table generation features.

Skill content
LLM_API_KEY=sk-...
LLM_BASE_URL=https://api.anthropic.com/v1
LLM_MODEL=claude-opus-4-6
Recommendation

Use a dedicated or scoped API key if available, keep the .env file private, and avoid committing it to version control.
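These precautions can be sketched in a throwaway repository; the placeholder key and file layout below are assumptions for illustration, not taken from the skill itself.

```shell
# Sketch: keep the .env file private and out of version control.
cd "$(mktemp -d)"                                # throwaway demo directory
git init -q
printf 'LLM_API_KEY=sk-placeholder\n' > .env     # placeholder, not a real key
echo ".env" >> .gitignore
chmod 600 .env                                   # owner-only read/write
git check-ignore .env                            # prints ".env" once ignored
```

`git check-ignore` exits non-zero when the file would still be committed, so it doubles as a quick pre-commit or CI check.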

What this means

Generated LLM content may persist locally and affect later output if the cache is reused.

Why it was flagged

The skill discloses local persistence of LLM outputs. Cached generated content can remain on disk and be reused across later compilations until cleared.

Skill content
LLM responses are cached in `.cache/llm/` — use `--no-cache` to regenerate.
Recommendation

Use --no-cache or clear .cache/llm/ when working with sensitive documents or when you want fresh LLM output.
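A short shell sketch of both options; the document path is hypothetical, and the second half simulates the cache layout in a temporary directory so the cleanup step can be shown end to end.

```shell
# Option 1: bypass the cache for a single run (illustrative path):
# npx tsx src/cli.ts docs/report.md --no-cache

# Option 2: clear cached LLM responses entirely (simulated layout):
cd "$(mktemp -d)"
mkdir -p .cache/llm
echo '{"response":"cached output"}' > .cache/llm/entry.json
rm -rf .cache/llm                          # remove all cached responses
test ! -d .cache/llm && echo "cache cleared"   # prints "cache cleared"
```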