LightRAG Memory

v1.0.1

LightRAG-based semantic memory system for AI agents. Provides efficient long-term knowledge storage and retrieval using vector embeddings and knowledge graph...

by Gorikon (@ogi98rus)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Pending
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description, required binaries (python3, pip), required env var (OPENAI_API_KEY), and included files (requirements.txt, scripts/rag.py) all align with a semantic memory/embedding tool that calls an OpenAI-compatible API.
Instruction Scope
By default, the instructions and the script operate on MEMORY.md and memory/*.md and store data under ~/.openclaw/workspace/lightrag_storage. The index command accepts a workspace path, so a user who runs it with a custom path can point it at any directory and thus index other files. The code loads an optional .env from the skill directory and uses only OPENAI_API_KEY/OPENAI_BASE_URL for remote calls.
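The optional .env loading described above is worth understanding before trusting it. A minimal sketch of what such loading typically does (the function name and parsing details here are assumptions, not the skill's actual code) is: read KEY=VALUE lines and set them only if the variable is not already in the environment.

```python
import os
from pathlib import Path

def load_optional_dotenv(skill_dir: str) -> None:
    """Load KEY=VALUE pairs from an optional .env file without
    overriding variables already set in the environment."""
    env_file = Path(skill_dir) / ".env"
    if not env_file.is_file():
        return  # the file is optional; silently skip
    for line in env_file.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip().strip('"'))
```

The setdefault behavior matters for review: a .env file shipped inside the skill directory cannot silently override an OPENAI_API_KEY you exported yourself under this scheme.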
Install Mechanism
No installer that downloads arbitrary archives is present; installation is via pip install -r requirements.txt, which pulls PyPI packages (including an external package named 'lightrag-hku'). Installing pip packages executes third-party code at install time, so review package provenance before installing. The SKILL.md includes an 'install' metadata block, while the registry said 'No install spec', a minor metadata inconsistency.
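To review provenance before installing, it helps to extract the bare distribution names from requirements.txt and look each up on PyPI. This hypothetical helper (not part of the skill) strips version specifiers and environment markers from each requirement line:

```python
from pathlib import Path

def list_packages(requirements: str = "requirements.txt") -> list[str]:
    """Return bare distribution names from a requirements file so each
    can be checked on PyPI before running `pip install -r`."""
    names = []
    for raw in Path(requirements).read_text().splitlines():
        spec = raw.split("#", 1)[0].strip()   # drop inline comments
        if not spec or spec.startswith("-"):  # skip flags like -r / -e
            continue
        spec = spec.split(";", 1)[0]          # drop environment markers
        for sep in ("==", ">=", "<=", "~=", "!=", ">", "<", "["):
            spec = spec.split(sep, 1)[0]
        names.append(spec.strip())
    return names
```

Running this against the skill's requirements.txt should surface 'lightrag-hku' and any transitive pins worth inspecting.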
Credentials
Only OPENAI_API_KEY is required (OPENAI_BASE_URL is optional). These are appropriate for a tool that sends text to an OpenAI-compatible embeddings/LLM service. The skill stores data locally; it does not request unrelated credentials or config paths.
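Since the only required credential is an environment variable, a fail-fast check is a reasonable pattern for anyone wrapping or auditing the skill. A minimal sketch (the function name is an assumption, not the skill's API):

```python
import os

def require_api_key() -> str:
    """Fail fast with a clear message instead of a confusing HTTP 401 later."""
    key = os.environ.get("OPENAI_API_KEY", "").strip()
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; export it before running the skill"
        )
    return key
```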
Persistence & Privilege
The 'always' flag is false and the skill is user-invocable. It creates a local working directory for storage under the user's home; it does not modify other skills or request elevated agent privileges.
Assessment
This skill appears to do what it claims: local indexing and querying via an OpenAI-compatible API. Before installing:

1. Review the third-party PyPI package 'lightrag-hku' (source/repo), because pip installs run arbitrary code.
2. Run the package in an isolated environment (virtualenv/container) if you want to limit blast radius.
3. Keep your OPENAI_API_KEY secret and consider using a dedicated key with limited scope and billing alerts.
4. Be careful when running index with a custom --workspace path, since it can read and index any files you point it at.
5. If you set OPENAI_BASE_URL, ensure it points to a trusted endpoint, because embeddings and texts will be sent there.

If you want extra assurance, review the source of 'lightrag-hku' and the LightRAG implementation before installing.
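The OPENAI_BASE_URL caution can be enforced mechanically. A hypothetical allowlist check (the function name and default allowlist are assumptions for illustration) rejects anything that is not HTTPS to a known host:

```python
from urllib.parse import urlparse

def is_trusted_base_url(url: str,
                        allowlist=frozenset({"api.openai.com"})) -> bool:
    """Accept only HTTPS endpoints whose host is explicitly allowlisted,
    since every indexed text is sent to OPENAI_BASE_URL."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in allowlist
```

A wrapper script could run this check on OPENAI_BASE_URL before invoking scripts/rag.py and refuse to proceed on a mismatch.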

Like a lobster shell, security has layers — review code before you run it.

Tags: knowledge-graph, latest, memory, rag, search


Runtime requirements

Bins: python3, pip
Env: OPENAI_API_KEY
