Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Graph-RAG Memory

v0.1.0

Graph-RAG memory system using Graphiti temporal knowledge graph + FalkorDB + local Ollama embeddings. Provides persistent, queryable long-term memory for Ope...

0· 79·0 current·0 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for jebadiahgreenwood/graph-rag-memory.

Previewing Install & Setup.
Prompt PreviewInstall & Setup
Install the skill "Graph-RAG Memory" (jebadiahgreenwood/graph-rag-memory) from ClawHub.
Skill page: https://clawhub.ai/jebadiahgreenwood/graph-rag-memory
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install graph-rag-memory

ClawHub CLI

Package manager switcher

npx clawhub@latest install graph-rag-memory
Security Scan
VirusTotalVirusTotal
Pending
View report →
OpenClawOpenClaw
Suspicious
medium confidence
Purpose & Capability
Name/description match the code and instructions: the package wires Graphiti + FalkorDB + Ollama embeddings, exposes ingest/query/status scripts, and includes an installer. Required services (FalkorDB, Ollama) and Python packages are exactly what this memory system needs.
!
Instruction Scope
Runtime instructions and scripts read and write files in the user workspace (/home/node/.openclaw/workspace by default), ingest arbitrary workspace documents, and run network checks against local Ollama/FalkorDB endpoints. The installer patches the global OpenClaw config (~/.openclaw/openclaw.json), and the included C daemon (memwatchd) watches workspace files and runs a refresh script on changes. The watcher invokes memory-upgrade/graph_refresh.py when files change, but graph_refresh.py is not present in the provided manifest — this is an incoherence and could break or cause unexpected behavior.
Install Mechanism
There is no formal package install spec, but install.sh performs numerous system actions: pip installs (via get-pip.py if needed), pulls Ollama models using docker exec, builds a C program with gcc, patches OpenClaw config, seeds the graph, creates vector index, and may create a cron job. These steps are typical for this kind of system but should be run manually or inspected first because they change system state.
!
Credentials
The skill declares no required env vars but implicitly depends on and modifies environment/config files: it reads OPENCLAW_WORKSPACE (default /home/node/.openclaw/workspace), edits ~/.openclaw/openclaw.json, and expects local Ollama/FalkorDB endpoints (172.18.0.1:11436/6379). While these are proportionate to a local memory system, the skill's operations touch global agent configuration and workspace files — broader access than a pure 'helper' script and worth review.
!
Persistence & Privilege
The installer patches the OpenClaw config and the install script advertises creating a 5-minute cron job; it also builds/starts a memwatchd daemon that automatically executes a refresh script on workspace file changes. That gives the skill long-term active presence and ability to run code on file changes. Although persistence is plausible for a memory service, modifying global agent config and installing a watcher/cron are high-impact operations and should be consented to explicitly by the user.
What to consider before installing
What to check before installing: - Backup ~/.openclaw/openclaw.json and any important workspace files. install.sh patches openclaw.json automatically. - Inspect graph_refresh.py (the memwatchd daemon calls memory-upgrade/graph_refresh.py on changes). The provided package does not include that file in the manifest — confirm what the refresh script does before running the watcher/installer. - Review install.sh and run it with --dry-run first; it will pull Ollama models (via docker exec), install Python packages, build a C daemon, and may create cron jobs. - If you don't want automatic, persistent behavior: do not start memwatchd or allow the script to patch configs/cron; instead run ingest/query scripts manually in a sandboxed environment. - Run the installer in an isolated/test environment (or container) first so you can observe changes and network activity (it communicates with local endpoints by default: 172.18.0.1 for Ollama/FalkorDB). - Confirm the Ollama/FalkorDB endpoints the skill uses are local and trusted; if these addresses are reachable beyond your host network, treat with caution. Why I marked this suspicious: the core functionality and dependencies align with the description, but the skill modifies global OpenClaw configuration, installs a persistent daemon/cron that executes workspace scripts, and references a refresh script not included in the package — this combination increases the attack surface and is incoherent until the missing refresh script is inspected. If you want to proceed, verify the missing file, run installs manually, and prefer running the system in an isolated environment.

Like a lobster shell, security has layers — review code before you run it.

embeddingsvk9726j0ypsm2bfjyp3gggaqeh98477jzfalkordbvk9726j0ypsm2bfjyp3gggaqeh98477jzgraphitivk9726j0ypsm2bfjyp3gggaqeh98477jzknowledge-graphvk9726j0ypsm2bfjyp3gggaqeh98477jzlatestvk9726j0ypsm2bfjyp3gggaqeh98477jzlocal-llmvk9726j0ypsm2bfjyp3gggaqeh98477jzmemoryvk9726j0ypsm2bfjyp3gggaqeh98477jzmoevk9726j0ypsm2bfjyp3gggaqeh98477jzollamavk9726j0ypsm2bfjyp3gggaqeh98477jzragvk9726j0ypsm2bfjyp3gggaqeh98477jz
79downloads
0stars
1versions
Updated 3w ago
v0.1.0
MIT-0

Graph-RAG Memory Skill

Persistent, queryable agent memory via a temporal knowledge graph. Facts are extracted from episodes (conversations, documents, notes), stored as typed entities and relationships in FalkorDB, and retrieved via hybrid BM25 + cosine similarity search with domain-expert routing.

Architecture Overview

Write path:  content → DomainRouter → expert embedder → Graphiti.add_episode()
                                             ↓
                                      FalkorDB (workspace graph)
                                      39+ nodes, 73+ RELATES_TO edges
                                      fact_embedding: 768-dim cosine index

Read path:   query → DomainRouter → expert embedder → query_vector
                                             ↓
                                    graphiti_search() [BM25 + cosine RRF]
                                             ↓
                                    ranked EntityEdge objects with .fact

Routing layers:

  1. Hard routing (metadata/source_type → domain, confidence=1.0)
  2. Centroid routing (cosine similarity to domain centroids, threshold=0.02)
  3. Fanout fallback (parallel expert queries + RRF fusion)

Domains: personal, episodic, project, technical, research, meta, general

Prerequisites

See references/setup.md for full installation and environment details.

Quick check:

# Verify services (write to a temp script, don't use python3 -c inline)
import falkordb, httpx
r = falkordb.FalkorDB(host='172.18.0.1', port=6379)
print("FalkorDB OK:", r.list_graphs())
# nomic-embed-text must be loaded on NVIDIA Ollama

Python packages (reinstall after container restart — ephemeral layer):

export PATH=$PATH:/home/node/.local/bin
curl -sS https://bootstrap.pypa.io/get-pip.py -o /tmp/get-pip.py
python3 /tmp/get-pip.py --user --break-system-packages
pip3 install --user --break-system-packages graphiti-core falkordb sentence-transformers

File Layout

All skill scripts live at: memory-upgrade/ (workspace root)

memory-upgrade/
  config.py             # Service URLs + model names
  embedder.py           # OllamaEmbedderClient + expert registry
  router.py             # DomainRouter (hard + centroid + fanout)
  setup_graphiti.py     # Graphiti factory (defaults to 'workspace' graph)
  write_path.py         # ingest_memory(), ingest_workspace_memories()
  read_path.py          # query_memory() — hybrid BM25+vector
  phase3_ingest.py      # Seed ingestion (checkpoint-aware, re-runnable)
  phase4_query_test.py  # Read path validation (7 test queries)
  phase6_full_ingest.py # Full workspace ingestion + centroid recalibration
  checkpoints/          # Phase state (JSON, safe to re-run)
  scripts/              # Skill scripts (install, ingest, query, status)

Common Tasks

Query memory

# Write to a .py file, then run it
import asyncio, sys
sys.path.insert(0, '/path/to/memory-upgrade')
from setup_graphiti import init_graphiti
from read_path import query_memory
from router import DomainRouter

async def main():
    g = await init_graphiti("workspace")
    router = DomainRouter(ollama_base_url="http://172.18.0.1:11436")
    edges, routing = await query_memory(g, router, "your question here",
                                         group_ids=["workspace"], limit=5)
    for e in edges:
        print(e.fact)
    await g.close()

asyncio.run(main())

Or use the convenience script:

python3 memory-upgrade/scripts/query_memory.py "your question here"

Ingest new content

python3 memory-upgrade/scripts/ingest.py --file path/to/file.md --domain project
python3 memory-upgrade/scripts/ingest.py --text "Jebadiah decided X because Y" --domain personal

Check system status

python3 memory-upgrade/scripts/status.py

Re-seed from workspace memory files

python3 memory-upgrade/phase3_ingest.py    # daily notes + MEMORY.md
python3 memory-upgrade/phase6_full_ingest.py  # broader workspace docs

Configuration

Edit memory-upgrade/config.py to change endpoints or models:

OLLAMA_URL     = "http://172.18.0.1:11436"   # NVIDIA — embeddings
AMD_OLLAMA_URL = "http://172.18.0.1:11437"   # AMD — LLM (gemma4:e4b)
LLM_MODEL      = "gemma4:e4b"                # entity extraction LLM
EMBED_GENERAL  = "nomic-embed-text"          # 768-dim general embedder

Known Gotchas

  • Data graph name = group_id: Graphiti names the FalkorDB graph after the group_id passed to add_episode(). Always use group_id="workspace" and init_graphiti("workspace").
  • sim_min_score must be 0.0: The default 0.6 blocks almost all results. Always set to 0.0.
  • No python3 -c inline: OpenClaw's obfuscation detector fires on it. Write to a temp file.
  • Packages reinstall needed: /home/node/.local is ephemeral. Re-run pip install after restart.
  • Vector index: Created in Phase 5. If the workspace graph is reset, re-run phase5_vector_index.py.

Research Foundations

See references/research.md for full citations. Key papers:

  • RouterRetriever (Zhuang et al., AAAI 2025) — centroid-based expert routing
  • Graphiti (Rasmussen et al., 2024) — temporal knowledge graph for agents
  • MoE routing literature — confidence thresholding + fanout fusion

ClawHub Publishing

See references/clawhub.md for packaging and publishing instructions.

Comments

Loading comments...