Agent Memory Tools

v1.0.0

Searches, stores, and manages agent memory across 4 sources (fact store, vector embeddings, BM25, knowledge graph). Runs 100% local via Ollama — no API keys,...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for primo-studio/agent-memory-tools.

Prompt preview (Install & Setup):
Install the skill "Agent Memory Tools" (primo-studio/agent-memory-tools) from ClawHub.
Skill page: https://clawhub.ai/primo-studio/agent-memory-tools
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install agent-memory-tools

ClawHub CLI


npx clawhub@latest install agent-memory-tools
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the implementation: scripts provide fact extraction, multi-source recall (fact store, embeddings, BM25, graph), auto-ingest, and local-only operation via Ollama by default. No unexpected binaries or credentials are required for the default local flow.
Instruction Scope
SKILL.md instructs the agent to read markdown files in a workspace, extract facts, update embeddings, and optionally run watchers/daemons. That behavior matches the stated purpose. Note: the auto-ingest and graph-builder will read any files under the configured workspace paths (default watch dirs include memory, agents, projects, docs, notes); make sure the configured workspace is limited to the intended files.
Install Mechanism
There is no complex install spec in the registry; the included setup.sh uses official Ollama model pulls and (on Linux) the official https://ollama.com/install.sh script. Model pulls (ollama pull ...) are expected for a local LLM workflow. No obscure download hosts or URL shorteners are used.
Credentials
The skill declares no required env vars and defaults to local Ollama. However, configs/presets explicitly support OpenAI/OpenRouter and a Convex backend: if a user enables those presets or sets convexUrl, the code will POST facts to that endpoint (fact_store uses curl subprocesses) or call remote APIs. These remote credentials are optional but powerful. Only supply API keys or a convexUrl for endpoints you trust, and understand that stored facts may be transmitted when those backends are selected.
Persistence & Privilege
The skill does not request always:true and does not auto-register itself. It documents how to run auto_ingest as a daemon (LaunchAgent/systemd/Task Scheduler) but will not enable that automatically. If you follow those guides, the watcher will run periodically and re-ingest files in the configured workspace.
Assessment
This package appears to do what it says: it reads markdown in a configured workspace, extracts facts with a local LLM (Ollama) by default, stores them in local JSON, updates embeddings, and can rebuild a knowledge graph. Before installing or running it:

1. Set MEMORY_WORKSPACE or scripts/config.json paths to a directory that contains only files you want the tool to read; otherwise the watcher may scan large or sensitive folders.
2. Review scripts/config.json: Convex (convexUrl) and API presets (openai/openrouter) are optional but will send data to remote services if enabled. Do not supply secrets or endpoints you don't trust.
3. setup.sh will attempt to install/run Ollama (it uses the official ollama.com installer and runs ollama pull for models).
4. If you enable auto-ingest as a system service (LaunchAgent/systemd/Task Scheduler), be aware it will run periodically and process changed files.
5. If you want to avoid any network transmission, stick with the default Ollama preset and local JSON backend and do not set convexUrl or API keys.

Overall, the skill is coherent with its stated purpose; the main risk is accidental data transmission if you switch to remote presets or configure a remote convexUrl. Review the configuration before use.


Latest: vk972xevb0ke2y4wyh5pc68jmt583h0ws
116 downloads · 1 star · 1 version
Updated 1mo ago · v1.0.0 · MIT-0

Agent Memory Tools

Multi-source memory recall and fact management. Runs locally via Ollama at no cost (no paid APIs).

Architecture

Question → unified_recall.py → fan-out 4 sources → merge → score → rerank → answer
                                 ├─ Fact store (Convex or local JSON)
                                 ├─ Vector embeddings (nomic)
                                 ├─ BM25 full-text (QMD)
                                 └─ Knowledge graph (JSON)

File changed → auto_ingest.py → extract facts → contradiction check → store
                               → update embeddings → rebuild graph
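The fan-out → merge → score step in the diagram can be sketched as follows. This is a minimal illustration with hypothetical function names; the real logic lives in scripts/unified_recall.py and certainly does more (reranking, synthesis).

```python
# Sketch: merge ranked hits from several sources into one scored list.
from collections import defaultdict

def merge_results(source_hits):
    """source_hits: {source_name: [(doc_id, score), ...]}, scores in [0, 1].
    A document found by multiple sources accumulates weight, which acts
    as a simple rerank before answer synthesis."""
    merged = defaultdict(float)
    for source, hits in source_hits.items():
        for doc_id, score in hits:
            merged[doc_id] += score  # agreement across sources boosts rank
    return sorted(merged.items(), key=lambda kv: kv[1], reverse=True)

hits = {
    "facts":  [("f1", 0.9), ("f2", 0.4)],
    "vector": [("f1", 0.7)],
    "bm25":   [("f3", 0.6)],
}
print(merge_results(hits))  # f1 ranks first: found by two sources
```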

Setup

# Install Ollama models (one-time)
ollama pull gemma3:4b              # LLM (~2s/call)
ollama pull nomic-embed-text-v2-moe  # Embeddings

# Verify everything works
python3 scripts/selftest.py

Requirements: Python 3.9+, Ollama, curl. Optional: QMD CLI (bun install -g qmd).
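If selftest.py reports failures, a quick way to narrow them down is checking the PATH for the required tools. A minimal sketch (the tool names mirror the requirements above; selftest.py's actual checks may go further):

```python
# Sketch: report which required CLI tools are missing from PATH.
import shutil

def missing_tools(required=("python3", "ollama", "curl")):
    """Return the required CLI tools that are not on PATH."""
    return [tool for tool in required if shutil.which(tool) is None]

if __name__ == "__main__":
    gaps = missing_tools()
    print("All prerequisites found" if not gaps else f"Missing: {gaps}")
```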

Core Scripts

Search memory

# Unified recall — recommended (all 4 sources, scored + reranked)
python3 scripts/unified_recall.py "What bugs happened last week?" --debug

# Multi-hop reasoning (chains searches with LLM synthesis)
python3 scripts/multihop_search.py "How does the deploy pipeline work?" --embed

# Temporal decay (recent facts score higher, errors protected)
python3 scripts/decay_search.py "recent issues" --half-life 14
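The temporal decay behaves like the common half-life formulation sketched below. This is an assumption about the weighting; decay_search.py's exact formula and its "errors protected" rule may differ.

```python
# Sketch: exponential decay with a half-life, with protected categories.
def decayed_score(base_score, age_days, half_life=14, protected=False):
    """Halve a fact's score every `half_life` days; `protected` facts
    (e.g. category 'error') keep their full score."""
    if protected:
        return base_score
    return base_score * 0.5 ** (age_days / half_life)

print(decayed_score(1.0, 14))                  # one half-life -> 0.5
print(decayed_score(1.0, 28))                  # two half-lives -> 0.25
print(decayed_score(1.0, 28, protected=True))  # protected -> 1.0
```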

Extract and store facts

# Extract from text
python3 scripts/extract_facts.py "Some conversation or document" --store --debug

# Extract from file
python3 scripts/extract_facts.py --file path/to/doc.md --store

# Pipe from stdin
cat summary.md | python3 scripts/extract_facts.py --store

Facts are checked for contradictions locally (gemma3, ~2s) before storage. Categories: knowledge, error, timeline, preference, tool, client, hr.
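The contradiction check presumably works by asking the local LLM to compare a new fact against stored ones. A purely illustrative sketch of how such a prompt could be built (extract_facts.py's actual prompt and parsing surely differ):

```python
# Sketch: build a contradiction-check prompt for the local LLM.
def contradiction_prompt(new_fact, stored_facts):
    listing = "\n".join(f"- {f}" for f in stored_facts)
    return (
        "Do any of these stored facts contradict the new fact?\n"
        f"Stored facts:\n{listing}\n"
        f"New fact: {new_fact}\n"
        'Answer strictly as JSON: {"contradicts": true|false, "which": []}'
    )

print(contradiction_prompt("Deploys run on Fridays",
                           ["Deploys are frozen on Fridays"]))
```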

Auto-ingest workspace changes

python3 scripts/auto_ingest.py --scan          # One-shot: process modified .md files
python3 scripts/auto_ingest.py --watch          # Daemon: poll for changes every 30s
python3 scripts/auto_ingest.py --file doc.md    # Single file

Dedup by content hash + 5 min cooldown. Triggers: fact extraction → storage → embed cache update → graph rebuild.
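The hash-plus-cooldown dedup described above can be sketched like this (illustrative only; auto_ingest.py's bookkeeping may differ):

```python
# Sketch: content-hash dedup with a 5-minute cooldown window.
import hashlib
import time

COOLDOWN_S = 5 * 60
_seen = {}  # content hash -> last-processed timestamp

def should_ingest(text, now=None):
    """True if this content hasn't been processed in the last 5 minutes."""
    now = time.time() if now is None else now
    digest = hashlib.sha256(text.encode()).hexdigest()
    last = _seen.get(digest)
    if last is not None and now - last < COOLDOWN_S:
        return False
    _seen[digest] = now
    return True

assert should_ingest("same doc", now=0)       # first sighting: process
assert not should_ingest("same doc", now=60)  # inside cooldown: skip
assert should_ingest("same doc", now=301)     # cooldown elapsed: process
```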

Build knowledge graph

python3 scripts/knowledge_graph.py              # Full rebuild
python3 scripts/knowledge_graph.py --dry-run    # Preview without writing

Graph stored at .cache/knowledge-graph.json. Auto-rebuilt incrementally by auto_ingest.py.
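For intuition, a knowledge graph of this kind boils down to entities plus typed relations. A hypothetical minimal builder and JSON shape (the real file written by knowledge_graph.py may be structured differently):

```python
# Sketch: build a nodes/edges graph from (subject, relation, object) triples.
import json

def build_graph(triples):
    nodes, edges = set(), []
    for subj, rel, obj in triples:
        nodes.update((subj, obj))
        edges.append({"from": subj, "rel": rel, "to": obj})
    return {"nodes": sorted(nodes), "edges": edges}

graph = build_graph([("deploy", "uses", "docker"),
                     ("docker", "runs-on", "linux")])
print(json.dumps(graph, indent=2))
```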

Run tests

python3 scripts/tests.py    # 28 unit tests

Configuration

Edit scripts/config.json. See references/configuration.md for full guide.

Storage backend — auto-detected:

  • convexUrl set → uses Convex (agentMemory API)
  • No convexUrl → uses local .cache/agent-facts.json
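The auto-detection rule above amounts to a one-line decision. Sketched for clarity (fact_store.py is the real arbiter, and the tuple shape here is illustrative):

```python
# Sketch: pick the storage backend from config, as documented.
def pick_backend(config):
    if config.get("convexUrl"):
        return ("convex", config["convexUrl"])
    return ("local", ".cache/agent-facts.json")

print(pick_backend({"convexUrl": "https://example.convex.cloud"}))
print(pick_backend({}))  # falls back to local JSON
```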

Model presets — switch LLM/embeddings provider in one flag:

python3 scripts/unified_recall.py "query" --preset ollama      # Default
python3 scripts/unified_recall.py "query" --preset lmstudio
python3 scripts/unified_recall.py "query" --preset openai

Per-script model override — in config.json under scriptOverrides:

"scriptOverrides": {
  "recall":  { "llm": { "model": "gemma3:4b", "apiFormat": "ollama" } },
  "extract": { "llm": { "model": "gemma3:4b", "apiFormat": "ollama" } }
}

Recommended models by RAM:

RAM     LLM          Embeddings
4 GB    gemma3:1b    nomic-embed-text
8 GB    gemma3:4b    nomic-embed-text-v2-moe
16+ GB  qwen3.5:27b  nomic-embed-text-v2-moe

⚠ Avoid Qwen 3.5 for JSON tasks — it emits output in the "thinking" field instead of the response.

Platform auto-trigger

Platform   Method
macOS      LaunchAgent with WatchPaths
Linux      systemd timer or cron
Windows    Task Scheduler

See references/configuration.md for examples.
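As one concrete illustration for Linux, a user-level systemd service/timer pair could look like the sketch below. The install path and interval are assumptions; defer to references/configuration.md for the skill's own recommended units.

```ini
# Hypothetical ~/.config/systemd/user/agent-memory-ingest.service
[Unit]
Description=Agent Memory auto-ingest (one-shot scan)

[Service]
Type=oneshot
ExecStart=/usr/bin/python3 %h/skills/agent-memory-tools/scripts/auto_ingest.py --scan

# Hypothetical ~/.config/systemd/user/agent-memory-ingest.timer
[Unit]
Description=Run agent-memory auto-ingest every 15 minutes

[Timer]
OnCalendar=*:0/15
Persistent=true

[Install]
WantedBy=timers.target
```

Enable with `systemctl --user enable --now agent-memory-ingest.timer` (the two units go in separate files despite being shown in one block here).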

File Structure

scripts/
├── unified_recall.py      # Multi-source search + scoring + synthesis
├── extract_facts.py       # Fact extraction + contradiction check + storage
├── auto_ingest.py         # File watcher / scanner pipeline
├── multihop_search.py     # Chained reasoning search
├── decay_search.py        # Time-weighted search
├── knowledge_graph.py     # Entity/relationship graph builder
├── fact_store.py          # Storage abstraction (Convex / local JSON)
├── llm_client.py          # LLM/embedding client (Ollama/LM Studio/OpenAI)
├── selftest.py            # Setup validation
├── tests.py               # Unit tests (28)
└── config.json            # Configuration + presets
references/
└── configuration.md       # Full configuration guide
