Agent Memory Tools

v1.0.0

Searches, stores, and manages agent memory across 4 sources (fact store, vector embeddings, BM25, knowledge graph). Runs 100% local via Ollama — no API keys,...

MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the implementation: scripts provide fact extraction, multi-source recall (fact store, embeddings, BM25, graph), auto-ingest, and local-only operation via Ollama by default. No unexpected binaries or credentials are required for the default local flow.
Instruction Scope
SKILL.md instructs the agent to read markdown files in a workspace, extract facts, update embeddings, and optionally run watchers/daemons. That behavior matches the stated purpose. Note: the auto-ingest and graph-builder will read any files under the configured workspace paths (default watch dirs include memory, agents, projects, docs, notes); make sure the configured workspace is limited to the intended files.
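Limiting that read scope is straightforward: point the tool at a dedicated directory before first run. The directory layout below is an illustrative assumption, not a shipped default; MEMORY_WORKSPACE is the variable named in the Assessment.

```shell
# Create a dedicated workspace so the watcher never scans broad home folders.
# The layout below is an illustrative assumption, not a shipped default.
mkdir -p "$HOME/agent-memory/notes"
export MEMORY_WORKSPACE="$HOME/agent-memory"
echo "workspace: $MEMORY_WORKSPACE"
```

Anything outside this directory is then invisible to the auto-ingest and graph-builder watchers.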
Install Mechanism
There is no complex install spec in the registry; the included setup.sh relies on official Ollama model pulls and, on Linux, the official https://ollama.com/install.sh installer. Model pulls (ollama pull ...) are expected for a local LLM workflow. No obscure download hosts or URL shorteners are used.
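Under that description, the Linux path of setup.sh reduces to roughly the two commands below. They are printed here for review rather than executed, and the model name is a placeholder; the actual models setup.sh pulls are not listed in this report.

```shell
# Sketch of the setup.sh Linux flow, printed for review before running.
# "llama3" is a placeholder; substitute the models the skill actually needs.
SETUP_STEPS='curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3'
printf '%s\n' "$SETUP_STEPS"
```

Piping a remote installer into sh is the usual Ollama install path, but it still warrants reading the script first.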
Credentials
The skill declares no required env vars and defaults to local Ollama. However, configs/presets explicitly support OpenAI/OpenRouter and a Convex backend: if a user enables those presets or sets convexUrl, the code will POST facts to that endpoint (fact_store shells out to curl) or call remote APIs. These remote credentials are optional but powerful: only supply API keys or a convexUrl for endpoints you trust, and understand that stored facts may be transmitted when those backends are selected.
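For comparison, a fully local configuration might look like the fragment below. The key names are assumptions for illustration, not the skill's actual schema; it is written to a scratch path here so you can diff it against the shipped scripts/config.json.

```shell
# Hypothetical local-only config (key names are assumptions, not the real schema).
# Written to a scratch path; compare against the shipped scripts/config.json.
cat > /tmp/local-only-config.json <<'EOF'
{
  "preset": "ollama",
  "convexUrl": null,
  "apiKeys": { "openai": null, "openrouter": null }
}
EOF
```

With the preset left on ollama and convexUrl and the API keys unset, extraction and storage stay on the local machine.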
Persistence & Privilege
The skill does not request always:true and does not auto-register itself. It documents how to run auto_ingest as a daemon (LaunchAgent/systemd/Task Scheduler) but will not enable that automatically. If you follow those guides, the watcher will run periodically and re-ingest files in the configured workspace.
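As one concrete shape of that documented (but never auto-enabled) daemon setup on Linux, a minimal systemd user service might look like the following. The script path and the environment value are illustrative assumptions, not shipped defaults.

```shell
# Hypothetical systemd user service for the auto-ingest watcher.
# The ExecStart path and workspace location are illustrative assumptions.
mkdir -p "$HOME/.config/systemd/user"
cat > "$HOME/.config/systemd/user/agent-memory-ingest.service" <<'EOF'
[Unit]
Description=Agent memory auto-ingest watcher

[Service]
ExecStart=/usr/bin/python3 %h/agent-memory-tools/scripts/auto_ingest.py
Environment=MEMORY_WORKSPACE=%h/agent-memory
EOF
# Start manually, or pair with a .timer for periodic runs:
#   systemctl --user start agent-memory-ingest.service
```

Writing the unit file does nothing by itself; the watcher only runs once you start or enable it, which matches the skill's opt-in stance.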
Assessment
This package appears to do what it says: it reads markdown in a configured workspace, extracts facts with a local LLM (Ollama) by default, stores them in local JSON, updates embeddings, and can rebuild a knowledge graph. Before installing or running it:

1. Set MEMORY_WORKSPACE or the paths in scripts/config.json to a directory that contains only files you want the tool to read; otherwise the watcher may scan large or sensitive folders.
2. Review scripts/config.json: Convex (convexUrl) and API presets (openai/openrouter) are optional but will send data to remote services if enabled. Do not supply secrets or endpoints you don't trust.
3. setup.sh will attempt to install/run Ollama (it uses the official ollama.com installer and runs ollama pull for models).
4. If you enable auto-ingest as a system service (LaunchAgent/systemd/Task Scheduler), be aware it will run periodically and process changed files.
5. To avoid any network transmission, stick with the default Ollama preset and local JSON backend, and do not set convexUrl or API keys.

Overall, the skill is coherent with its stated purpose; the main risk is accidental data transmission if you switch to remote presets or configure a remote convexUrl. Review the configuration before use.
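The config review above can be partly automated. A small audit helper along these lines flags remote backends before first run; the key names mirror the presets mentioned in this report, and the default path is the config location it names.

```shell
# Warn if the config enables any remote backend before first run.
# Key names mirror the presets mentioned in this report; adjust as needed.
audit_config() {
  if grep -qE '"(convexUrl|openai|openrouter)"[[:space:]]*:[[:space:]]*"[^"]+"' "${1:-scripts/config.json}"; then
    echo "WARNING: remote backend configured; data may leave this machine"
  else
    echo "ok: no remote endpoints found"
  fi
}
```

Note that the regex only matches keys set to a non-empty quoted string, so a convexUrl of null passes the check.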

Like a lobster shell, security has layers — review code before you run it.

latest · vk972xevb0ke2y4wyh5pc68jmt583h0ws
