ClawRAG - Self-hosted RAG & Memory
Verdict: Pass. Audited by ClawScan on May 10, 2026.
Overview
This looks like a coherent self-hosted RAG connector, but it requires running external Docker/npm components and creates persistent local document memory.
Install only if you trust the referenced GitHub repository and npm MCP package. Before using it with sensitive documents, inspect the Docker Compose setup, decide whether you want local-only or cloud LLM operation, use scoped collections, and know how to stop the background containers.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Finding 1: Installing it means trusting code and container definitions fetched from outside the reviewed artifact.
The setup depends on external GitHub and npm artifacts that are not pinned or included in the reviewed skill package.
git clone https://github.com/2dogsandanerd/ClawRag.git
...
docker compose up -d
...
npx -y @clawrag/mcp-server
Review the GitHub repo and npm package before use, consider pinning versions, and avoid running it on sensitive systems until you trust the upstream sources.
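One way to act on this is to pin both upstream sources to a revision you have actually reviewed. A minimal sketch, assuming you review the repo first; the placeholders below are not real releases and must be replaced with identifiers you verify yourself:

```shell
# Clone and pin the repo to the exact commit you audited
# (<commit-sha-you-reviewed> is a placeholder, not a real hash).
git clone https://github.com/2dogsandanerd/ClawRag.git
cd ClawRag
git checkout <commit-sha-you-reviewed>

# Pin the MCP server to an exact npm version instead of pulling latest
# (<version-you-reviewed> is a placeholder).
npx -y @clawrag/mcp-server@<version-you-reviewed>
```

Pinning does not make the upstream code trustworthy by itself, but it ensures that what runs is the same artifact you inspected.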
Finding 2: The local RAG service may continue using resources and serving indexed data until stopped.
The documented setup starts background Docker services in detached mode, which is expected for a local RAG server but remains running after setup.
docker compose up -d
Know how to stop or remove the Docker Compose stack, and check what local ports and volumes it uses.
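The standard Docker Compose lifecycle commands cover this; run them from the directory containing the compose file:

```shell
# Inspect what the stack is doing: running services and published ports
docker compose ps

# List volumes, which is where the indexed documents persist
docker volume ls

# Stop and remove the containers, keeping the data volumes
docker compose down

# Also delete the volumes, i.e. wipe the stored document index
docker compose down -v
```

Note that `docker compose down` without `-v` leaves the vector database contents on disk, so the persistent memory survives a restart.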
Finding 3: Sensitive, outdated, or malicious documents added to the collection could later influence answers or be surfaced in search results.
The skill is designed to ingest documents into a RAG/vector database, creating reusable memory for future agent queries.
Document Upload | PDF, DOCX, TXT, MD via API or folder
Use scoped collections, ingest only intended documents, periodically review stored content, and remove sensitive or untrusted files.
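A sketch of what scoped ingestion and querying might look like against the local API. The endpoint paths, collection name, and field names below are assumptions for illustration; the artifact does not document the actual routes, so check the ClawRAG API docs for the real ones:

```shell
# Hypothetical: ingest a document into a dedicated, named collection
# rather than a shared default one.
curl -X POST http://localhost:8080/collections/project-a/documents \
     -F "file=@./docs/spec.md"

# Hypothetical: query only that collection, so unrelated or untrusted
# material cannot surface in the retrieved context.
curl "http://localhost:8080/collections/project-a/search?q=deployment+steps"
```

Keeping one collection per project or trust level makes periodic review and removal of stale content much simpler.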
Finding 4: Once connected, the agent may be able to query or interact with the local RAG service according to the MCP server's tools.
The skill bridges OpenClaw to a local HTTP RAG API through an MCP server, but the artifact does not describe detailed access controls for that bridge.
OpenClaw ◄──MCP──► @clawrag/mcp-server ◄──HTTP──► ClawRAG API (localhost:8080)
Only add the MCP server in trusted OpenClaw environments and review the MCP server documentation for exposed tools and permissions.
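For reference, registering the bridge in an MCP client typically looks like the fragment below. The server name `clawrag` is arbitrary, and the exact config file location and top-level key depend on your MCP client; only the `npx` invocation comes from the reviewed artifact:

```json
{
  "mcpServers": {
    "clawrag": {
      "command": "npx",
      "args": ["-y", "@clawrag/mcp-server"]
    }
  }
}
```

Whatever tools this server exposes become callable by the agent, which is why the artifact's lack of documented access controls matters.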
Finding 5: If configured for a cloud LLM, provider API keys and possibly retrieved document context may be used outside the local machine.
The artifact says cloud LLM credentials may be used, although registry metadata declares no required environment variables or primary credential.
Or: OpenAI/Anthropic API key for cloud LLM
Prefer a local LLM if local-only privacy is required, and use limited, dedicated API keys if cloud providers are enabled.
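If you do enable a cloud provider, a sketch of the dedicated-key approach; the environment variable name is an assumption (check the ClawRAG and MCP server docs for the one actually read), and the key value is a placeholder:

```shell
# Hypothetical: supply a dedicated, spend-limited key rather than your
# main account key, so exposure is bounded if it leaks.
export OPENAI_API_KEY="<dedicated-limited-key>"
npx -y @clawrag/mcp-server
```

A per-tool key also makes it easy to revoke access for this one integration without rotating credentials everywhere else.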
