LightRAG Memory

v1.0.1

LightRAG-based semantic memory system for AI agents. Provides efficient long-term knowledge storage and retrieval using vector embeddings and a knowledge graph.

1 star · 118 downloads · 1 current · 1 all-time
by Gorikon (@ogi98rus)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for ogi98rus/lightrag-memory.

Prompt preview (Install & Setup):
Install the skill "LightRAG Memory" (ogi98rus/lightrag-memory) from ClawHub.
Skill page: https://clawhub.ai/ogi98rus/lightrag-memory
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: OPENAI_API_KEY
Required binaries: python3, pip
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install lightrag-memory

ClawHub CLI


npx clawhub@latest install lightrag-memory
Security Scan

VirusTotal: Pending
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description, required binaries (python3, pip), required env var (OPENAI_API_KEY), and included files (requirements.txt, scripts/rag.py) all align with a semantic memory/embedding tool that calls an OpenAI-compatible API.
Instruction Scope
Instructions and the script operate on MEMORY.md and memory/*.md by default and store data under ~/.openclaw/workspace/lightrag_storage. The index command accepts a workspace path, so a user could point it at any directory (and thus index other files) by running it with a custom path. The code loads an optional .env from the skill directory and uses only OPENAI_API_KEY/OPENAI_BASE_URL for remote calls.
Install Mechanism
No installer that downloads arbitrary archives is present; install is via pip install -r requirements.txt, which pulls PyPI packages (including an external package named 'lightrag-hku'). Installing pip packages executes third-party code at install time, so review package provenance before installing. The SKILL.md includes an 'install' metadata block, while the registry said 'No install spec' (a minor metadata inconsistency).
Credentials
Only OPENAI_API_KEY is required (OPENAI_BASE_URL is optional). These are appropriate for a tool that sends text to an OpenAI-compatible embeddings/LLM service. The skill stores data locally; it does not request unrelated credentials or config paths.
Persistence & Privilege
The 'always' flag is false and the skill is user-invocable. It creates a local working directory for storage under the user's home; it does not modify other skills or request elevated agent privileges.
Assessment
This skill appears to do what it claims: local indexing and querying via an OpenAI-compatible API. Before installing: (1) review the third-party PyPI package 'lightrag-hku' (source/repo) because pip installs run arbitrary code; (2) run the package in an isolated environment (virtualenv/container) if you want to limit blast radius; (3) keep your OPENAI_API_KEY secret and consider using a dedicated key with limited scope and billing alerts; (4) be careful when running index with a custom --workspace path since it can read and index files you point it at; (5) if you set OPENAI_BASE_URL, ensure it points to a trusted endpoint because embeddings and texts will be sent there. If you want extra assurance, review the source of 'lightrag-hku' and the LightRAG implementation before installing.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Bins: python3, pip
Env: OPENAI_API_KEY
Latest: vk977cznm49npv3te2pkg2yyn5n84mey8
118 downloads · 1 star · 2 versions
Updated 2w ago
v1.0.1
MIT-0

LightRAG Memory

Semantic memory system with vector search + knowledge graph. Replaces reading entire memory files on every request with targeted retrieval (~1-3K tokens vs 30K+).

Quick Setup

cd skills/lightrag-memory
pip install -r requirements.txt

Set environment variables:

export OPENAI_API_KEY="your-key"
export OPENAI_BASE_URL="https://your-api-endpoint/v1"  # optional, defaults to OpenAI

Or create a .env file in the skill directory.
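As a rough illustration of what loading a .env file involves, here is a minimal parser for simple KEY=value lines. This is a sketch, not the skill's actual implementation (the real script may use a library such as python-dotenv), and the helper name is illustrative:

```python
import os

def load_env(path=".env"):
    """Parse simple KEY=value lines into os.environ (existing vars win)."""
    if not os.path.exists(path):
        return
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blanks, comments, and lines without an assignment.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault: an exported variable takes precedence over .env.
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```

Note that exported environment variables win over the file, so a shell-level OPENAI_API_KEY is never silently overridden.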

Commands

Index memory files

Index MEMORY.md and memory/*.md from the workspace:

python3 scripts/rag.py index

Insert content

# From file
python3 scripts/rag.py insert --file /path/to/file.md --source "filename"

# From text
python3 scripts/rag.py insert --text "Important fact" --source "manual"

# From stdin
echo "Some text" | python3 scripts/rag.py insert --source "stdin"
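To bulk-load a directory of notes, the insert subcommand can be driven from a small script. The sketch below only builds the command lines rather than executing them; the directory layout and the convention of using the file name as the --source label are assumptions:

```python
from pathlib import Path

def insert_commands(directory, pattern="*.md"):
    """Build one `rag.py insert` invocation per matching file."""
    cmds = []
    for path in sorted(Path(directory).glob(pattern)):
        cmds.append([
            "python3", "scripts/rag.py", "insert",
            "--file", str(path),
            "--source", path.stem,  # label each chunk with its file name
        ])
    return cmds
```

Each returned list can be passed to subprocess.run() once you have verified the paths.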

Query

# Hybrid search (best results, costs more API calls)
python3 scripts/rag.py query "What do I know about the user?" --mode hybrid

# Local search (entities + relationships, balanced)
python3 scripts/rag.py query "What projects were discussed?" --mode local

# Naive search (simple vector lookup, cheapest)
python3 scripts/rag.py query "Any notes about deployment?" --mode naive

# Global search (broad context, expensive)
python3 scripts/rag.py query "Summarize everything" --mode global

Query Modes

| Mode   | What it searches          | Cost    | Best for              |
|--------|---------------------------|---------|-----------------------|
| naive  | Vector embeddings only    | Lowest  | Quick fact lookup     |
| local  | Entities + relationships  | Low     | Specific entities     |
| global | Community-level context   | High    | Broad understanding   |
| hybrid | Local + global            | Highest | Comprehensive answers |
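The cost/coverage trade-off above can be encoded as a small helper that picks the cheapest mode sufficient for a request. The tiering comes from the table; the function and flag names are illustrative:

```python
# Modes ordered cheapest to most expensive, per the table above.
MODE_COST = {"naive": 1, "local": 2, "global": 3, "hybrid": 4}

def pick_mode(need_entities=False, need_broad_context=False):
    """Choose the cheapest query mode that covers the request."""
    if need_entities and need_broad_context:
        return "hybrid"   # local + global
    if need_broad_context:
        return "global"   # community-level context
    if need_entities:
        return "local"    # entities + relationships
    return "naive"        # plain vector lookup
```

An agent could call this before building the `--mode` argument, defaulting to naive for cheap fact lookups.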

Storage

Data is stored in ~/.openclaw/workspace/lightrag_storage/ by default. Override with the LIGHTRAG_WORKING_DIR env var.

Integration Pattern

For agent memory systems, index files on change and query on demand:

# Check if reindex needed (files modified since last index)
find MEMORY.md memory/ -name '*.md' -newer memory/lightrag-last-index.txt

# Reindex if needed
python3 scripts/rag.py index
touch memory/lightrag-last-index.txt

# Query when context needed
python3 scripts/rag.py query "user preferences" --mode naive
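The find -newer check above can be expressed in Python for agents that cannot shell out. This sketch assumes the same file layout as the snippet (MEMORY.md plus memory/*.md, with a .txt stamp file); the helper name is illustrative:

```python
from pathlib import Path

def needs_reindex(stamp="memory/lightrag-last-index.txt"):
    """True if any memory file changed since the last index run."""
    stamp_path = Path(stamp)
    if not stamp_path.exists():
        return True  # never indexed
    last = stamp_path.stat().st_mtime
    # The stamp is .txt, so the *.md glob never matches it.
    candidates = [Path("MEMORY.md"), *Path("memory").glob("*.md")]
    return any(p.exists() and p.stat().st_mtime > last for p in candidates)
```

When this returns True, run the index command and touch the stamp file, mirroring the shell version.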

Architecture

  • Embeddings: text-embedding-3-small (1536 dim) via OpenAI-compatible API
  • LLM: gpt-4o-mini for entity extraction and answer generation
  • Storage: JSON-based vector DB + GraphML knowledge graph
  • Batching: 8 items per embedding batch, 4 concurrent async requests
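The batching scheme in the last bullet (8 items per embedding batch, at most 4 concurrent requests) can be sketched with asyncio. The embed_batch argument is a stand-in for the real API call, not the skill's actual code:

```python
import asyncio

BATCH_SIZE = 8    # items per embedding request
CONCURRENCY = 4   # parallel in-flight requests

async def embed_all(texts, embed_batch):
    """Embed texts in batches of BATCH_SIZE, at most CONCURRENCY at once."""
    sem = asyncio.Semaphore(CONCURRENCY)
    batches = [texts[i:i + BATCH_SIZE] for i in range(0, len(texts), BATCH_SIZE)]

    async def run(batch):
        async with sem:               # cap in-flight requests
            return await embed_batch(batch)

    results = await asyncio.gather(*(run(b) for b in batches))
    return [vec for group in results for vec in group]  # flatten, order preserved
```

asyncio.gather preserves batch order, so the flattened output lines up with the input texts even when requests finish out of order.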

Troubleshooting

OPENAI_API_KEY not set: Ensure env vars are exported or .env exists.

numpy RuntimeError (X86_V2): On older CPUs lacking AVX2, run pip install "numpy<2.0".

Slow first index: Initial indexing processes all files. Subsequent updates are incremental.

Reset storage: rm -rf <storage_dir>/* && python3 scripts/rag.py index
