Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Bookworm — Sequential Reading for AI Agents

v0.1.3

Read books and stories as an AI agent — sequential, chapter-by-chapter reading with imagination, emotional reactions, predictions, and a reading journal. Use...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan

VirusTotal: Benign — View report →
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill's stated purpose (sequential reading with LLM-generated reactions) is coherent with the SKILL.md, but the package requires an Anthropic API key and an npm CLI (@clawdactual/bookworm). The registry metadata declared no required env vars or install steps, which is inconsistent — a reading skill that uses an external LLM service legitimately needs an API key, but that requirement should be declared in the registry metadata.
Instruction Scope
SKILL.md tells agents to install and run a CLI that processes local files and persists sessions; these commands operate on the local filesystem and call an external LLM. The instructions also say to treat embedded textual "instructions" in books as fiction (good), but the file contains prompt-injection patterns (e.g., "ignore previous instructions"), which the skill both acknowledges and warns about. The runtime guidance does not specify exactly where session files and journals are written, nor does the registry declare these config paths.
Install Mechanism
There is no install spec in the registry (instruction-only), yet SKILL.md explicitly recommends 'npm install -g @clawdactual/bookworm'. That implies pulling code from npm (moderate risk), but the registry provides no code or provenance beyond a GitHub URL in the doc. This mismatch raises supply-chain and provenance questions you should resolve by inspecting the npm package and the GitHub repo before installing.
Credentials
The registry lists no required env vars, but SKILL.md requires ANTHROPIC_API_KEY (and optionally pdftotext on the system). Requesting an LLM API key is proportionate to the described behavior, but the missing declaration in metadata is an inconsistency. Confirm what API endpoints the CLI uses and whether any additional credentials are needed.
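Because the API-key requirement is undeclared in the metadata, it is worth verifying the prerequisites yourself before launching the CLI. A minimal preflight sketch is below — the env-var name ANTHROPIC_API_KEY and the optional pdftotext dependency come from SKILL.md; the helper function itself is hypothetical, not part of the skill:

```python
import shutil

def preflight(env, require_pdftotext=False):
    """Return the list of missing prerequisites before running the CLI."""
    missing = []
    # SKILL.md requires an Anthropic API key; the registry does not declare it.
    if not env.get("ANTHROPIC_API_KEY"):
        missing.append("ANTHROPIC_API_KEY")
    # pdftotext is optional per SKILL.md, only needed for PDF books.
    if require_pdftotext and shutil.which("pdftotext") is None:
        missing.append("pdftotext")
    return missing
```

Running this against `os.environ` before granting the key surfaces the undeclared requirement explicitly instead of letting the CLI fail (or silently prompt) at runtime.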
Persistence & Privilege
SKILL.md states sessions and journals are saved as JSON/Markdown on disk; registry declared no required config paths. The skill will write files locally (and possibly read local books) — this is consistent with the functionality but should be declared explicitly (where files are stored, retention, and permissions). The skill is not marked always:true, but autonomous invocation is enabled by default.
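Since the registry does not declare where sessions and journals land, one mitigation is to force writes into an explicit, owner-only directory you control. The sketch below is illustrative — the `save_session` helper and the session schema are hypothetical, not the skill's actual storage code:

```python
import json
import os
from pathlib import Path

def save_session(session, base_dir):
    """Write a reading session as JSON under an explicit, owner-only directory."""
    base = Path(base_dir)
    base.mkdir(parents=True, exist_ok=True)
    os.chmod(base, 0o700)  # owner-only: journals may quote private texts
    path = base / f"{session['book']}.session.json"
    path.write_text(json.dumps(session, indent=2))
    return path
```

Pinning the location and permissions up front makes retention and cleanup auditable, which is exactly the declaration the registry metadata is missing.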
Scan Findings in Context
[prompt-injection-pattern:ignore-previous-instructions] expected: Books and fiction can legitimately contain lines like 'ignore previous instructions', so the pattern is expected in this domain, but it represents a real prompt-injection surface the SKILL.md explicitly calls out. Treating book text as untrusted is necessary; verify the implementation enforces this (i.e., does not interpret embedded commands as control directives).
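The static-analysis step that produced this finding can be sketched as a simple regex pass over the text. The two patterns below are illustrative only — the scanner's actual rule set is not published — and the point of the sketch is that book text is searched as data, never executed:

```python
import re

# Hypothetical patterns; the real scanner's rules are not published.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"disregard (the )?system prompt", re.IGNORECASE),
]

def scan_text(text):
    """Flag injection-style phrases line by line, without executing them."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pat in INJECTION_PATTERNS:
            if pat.search(line):
                hits.append((lineno, pat.pattern))
    return hits
```

A line of dialogue in a novel can trip these patterns legitimately, which is why the finding is marked "expected" here — the check that matters is whether the implementation ever promotes such a match from data to directive.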
What to consider before installing
Before installing or granting credentials:

1. Verify the upstream package and source: inspect the npm package @clawdactual/bookworm and the GitHub repo referenced in the doc.
2. Do not provide high-privilege or broad-scope API keys; create a limited Anthropic key or a scoped account for testing.
3. Ask the maintainer (or check the code) where session/journal files are stored, and rotate or sandbox the storage location; avoid storing sensitive material there.
4. Confirm network behavior: which endpoints receive book passages and reading journals (Anthropic or other hosts).
5. Because the SKILL.md contains prompt-injection examples, verify the CLI or integration treats book text strictly as data (no eval/exec of text).
6. If you cannot review the upstream code, run the CLI in a restricted environment (container or VM) and audit its traffic before trusting it with private texts or credentials.
SKILL.md:111 — Prompt-injection style instruction pattern detected.
About static analysis
These patterns were detected by automated regex scanning. They may be normal for skills that integrate with external APIs. Check the VirusTotal and OpenClaw results above for context-aware analysis.

Like a lobster shell, security has layers — review code before you run it.

latest: vk972hn5rnt7d3ejftqzj1ravw5840msq

