Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
LLMWiki
v0.1.1 · LLM-powered personal knowledge base. Ingest raw documents, compile into a structured interlinked wiki, query with deep research, self-heal. Works for any dom...
⭐ 0 · 12 · 0 current · 0 all-time
by @hosuke
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The SKILL.md describes a local LLM-backed personal wiki (ingest, compile, query, self-heal) and requests an LLM API key plus local filesystem/network access, all of which are appropriate for the stated purpose. However, the registry metadata above this file lists no required environment variables or install steps, while SKILL.md declares a required LLMBASE_API_KEY and an install command ('pip install llmwiki'). That inconsistency between the declared registry requirements and the skill's own instructions is unexpected.
Instruction Scope
The runtime instructions stay within the expected scope: ingesting URLs or local files, reading/writing markdown under raw/ and wiki/, calling an LLM API, and optionally starting a local web or MCP server. The SKILL.md does not instruct the agent to read unrelated system files or unrelated credentials. It does describe automated background workers (if enabled) that will fetch URLs and modify the KB — this is intended behavior but worth noting because it grants the skill the ability to perform periodic network fetches and filesystem writes when enabled.
Install Mechanism
There is no install spec in the registry (the skill is instruction-only), but SKILL.md recommends 'pip install llmwiki'. As distributed here, there are no code files to inspect, so the regex scanner had nothing to analyze. Installing the package via pip (from PyPI or elsewhere) is a normal deployment path, but because no install artifact is bundled in the skill, the install step is an out-of-band action the user/agent would perform; you should review the pip package source (GitHub repo) before running it.
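Before running that out-of-band install, you can look up the package's registry metadata to find its declared source repository for review. A minimal sketch using the public PyPI JSON API; the package name `llmwiki` comes from SKILL.md, and nothing here is bundled with the skill itself:

```python
import json
import urllib.request

def pypi_json_url(package: str) -> str:
    # Standard PyPI JSON API endpoint for a package's metadata.
    return f"https://pypi.org/pypi/{package}/json"

def project_urls(package: str) -> dict:
    # Fetch the metadata and return the declared project URLs
    # (homepage, source repo, etc.) so the code can be reviewed
    # before any `pip install`.
    with urllib.request.urlopen(pypi_json_url(package)) as resp:
        meta = json.load(resp)
    return meta["info"].get("project_urls") or {}
```

Note that `project_urls` is self-reported by the package author; for higher assurance, download the sdist (e.g. `pip download llmwiki --no-deps`) and compare its contents against the linked repository.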
Credentials
The SKILL.md requires an LLM API key (LLMBASE_API_KEY) plus an optional base URL and model name; these are proportionate to using an external LLM. However, the top-level registry metadata reported 'Required env vars: none', which contradicts the SKILL.md. The skill also asks to store the API key in a local .env file; that is common but increases the risk of accidental credential leakage (e.g., committing .env to git). The skill requests network and filesystem permissions (reasonable) and exposes optional web (localhost:5555), agent API (localhost:5556), and MCP/stdio features, which could surface KB contents to other processes if enabled; verify those before turning them on.
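The .env pattern the skill asks for can be kept low-risk by loading the key into the process environment at startup and failing fast if it is absent. A minimal loader sketch; the variable name LLMBASE_API_KEY is the one SKILL.md declares, and a library such as python-dotenv does the same job:

```python
import os

def load_dotenv(path: str = ".env") -> None:
    # Minimal .env loader: one KEY=VALUE per line; blank lines and
    # '#' comments are skipped. Values already in the environment win.
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))

def require_api_key() -> str:
    # Fail fast if the key is missing rather than passing None downstream.
    key = os.environ.get("LLMBASE_API_KEY")
    if not key:
        raise RuntimeError("LLMBASE_API_KEY not set; check your .env")
    return key
```

Whatever loader you use, add `.env` to `.gitignore` before the first commit so the key never enters version-control history.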
Persistence & Privilege
The skill does not request 'always: true' and uses the platform default allowing autonomous invocation. It documents an optional background worker that can autonomously ingest sources and update the KB if enabled in config — this is expected for a 'set-and-forget' mode but increases the blast radius when combined with network access and an LLM API key. There's no indication the skill tries to modify other skills or system-wide agent settings.
What to consider before installing
This skill appears to do what it says (a local LLM-backed personal wiki), but take these precautions before installing or enabling features:
1) The SKILL.md requires an LLM API key (LLMBASE_API_KEY) and suggests 'pip install llmwiki', though the registry metadata omitted both; treat the SKILL.md as authoritative and review the pip package source on GitHub before installing.
2) Keep your API key out of version control (add .env to .gitignore), or use a short-lived key.
3) Only enable the web UI, agent API, or MCP server if you trust the network environment; verify the server is bound to localhost and not exposed to the internet.
4) Be cautious enabling the autonomous worker: it will fetch URLs and write files on a schedule, which is convenient but can cause unexpected network access or data changes.
5) Run the package in an isolated directory or environment (consider a container) so filesystem writes are confined.
6) If you rely on SSRF protection or other security claims, confirm these protections by inspecting the code or testing in a safe environment.
If you want higher confidence, provide the pip package metadata or the package files for a more detailed review.
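The localhost-binding check in point 3 can be verified from the same machine: if a port answers on 127.0.0.1 but not on the host's LAN-facing address, it is not reachable from the network. A sketch, assuming the default ports 5555/5556 that SKILL.md documents:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 0.5) -> bool:
    # True if a TCP connection to (host, port) succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def loopback_only(port: int) -> bool:
    # Reachable on loopback but not on this host's resolved address.
    # Note: gethostbyname(gethostname()) may itself resolve to a loopback
    # address on some systems; in that case check your LAN IP explicitly.
    lan_addr = socket.gethostbyname(socket.gethostname())
    on_loopback = port_reachable("127.0.0.1", port)
    on_lan = not lan_addr.startswith("127.") and port_reachable(lan_addr, port)
    return on_loopback and not on_lan

# Usage (hypothetical): loopback_only(5555) and loopback_only(5556)
# before trusting the web UI or agent API.
```

This only tests reachability from the host itself; for a stronger check, attempt a connection from another machine on the same network, or inspect the listening sockets with `ss -tlnp`.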
latest · vk97cfm39me5jjxwr52y4b33fwn84bv7t
