Ollama Memory Embeddings

v1.0.4

Configure OpenClaw memory search to use Ollama as the embeddings server (OpenAI-compatible /v1/embeddings) instead of the built-in node-llama-cpp local GGUF loading. Includes interactive model selection and optional import of an existing local embedding GGUF into Ollama.

5 stars · 1.7k downloads · 6 current · 7 all-time
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The scripts and SKILL.md are coherent with the stated purpose (switch OpenClaw memory embeddings to Ollama). They read/write the OpenClaw config, verify Ollama on localhost, optionally import local GGUFs, and offer restart/watchdog behavior. One inconsistency: the registry metadata reports "required binaries: none" / "instruction-only", but the packaged scripts clearly require node, curl, and the ollama CLI (and optionally openclaw, launchctl/systemctl). This is likely a metadata omission rather than malicious behavior, but you should expect those tools to be present.
Instruction Scope
Runtime instructions and scripts stay within the stated scope: they read/write ~/.openclaw/openclaw.json (and backups), scan a small set of local cache directories for GGUFs, call the local Ollama HTTP endpoint (127.0.0.1:11434), and may run 'ollama create' to import a model. The default behavior is conservative (no GGUF import unless opted-in, no gateway restart unless requested). Nothing in SKILL.md or the scripts instructs reading or transmitting unrelated secrets or contacting external network hosts.
Install Mechanism
There is no network-based installer: the repository includes install, verify, enforce, watchdog, audit, and uninstall scripts plus a Node helper. No downloads from external URLs or package-registry pulls are performed by the scripts; they assume the required local tools are already installed. This is low-risk compared to fetching and executing arbitrary remote archives.
Credentials
The skill requests no environment variables or credentials from the registry. The code sets the OpenClaw memorySearch.remote.apiKey to a non-secret sentinel value (default 'ollama') as required by the client, and the enforcement tools treat apiKey presence (non-empty) as sufficient. No sensitive system credentials or unrelated tokens are requested or transmitted. The scripts do read local config and model cache directories, which is appropriate for the task.
Persistence & Privilege
The skill does not request always:true and is not force-included. Persistence is optional and explicit: watchdog.sh can install a user-level LaunchAgents plist on macOS (or you can run the watchdog via cron/systemd on Linux). The installer writes files under the user's home (~/.openclaw and ~/Library/LaunchAgents) and creates logs there; it does not modify other skills or system-wide settings beyond user-level launchd/systemd guidance. Restarting the OpenClaw gateway is optional and requires either the openclaw CLI (if present) or manual action.
Assessment
This package looks coherent with its description, but check a few practical things before installing:

  • Prerequisites: ensure node, curl, and the ollama CLI are installed and trusted on your machine (the repository metadata incorrectly lists no required binaries). openclaw is optional but needed for automatic gateway restart.
  • Review & dry-run: run install.sh --dry-run to preview changes and verify.sh to test the local embeddings endpoint.
  • Files touched: the installer reads/writes ~/.openclaw/openclaw.json (backups are made before writes) and may create ~/Library/LaunchAgents/*.plist and logs under ~/.openclaw/logs if you opt into the watchdog; these are all user-level files.
  • GGUF import: scanning and importing local GGUFs is opt-in. Do not use --import-local-gguf yes unless you trust the GGUF files on disk.
  • Persistence: watchdog installation is explicit; uninstall.sh and watchdog.sh --uninstall-launchd provide ways to revert.
  • Safety: do not run these scripts as root. Inspect install.sh, enforce.sh, and watchdog.sh yourself (they are included). For extra assurance, run them first in a controlled environment or container.

Like a lobster shell, security has layers — review code before you run it.

latest: vk978kjffya29gry7ppz0s9wx0d81326x
5 versions · Updated 1mo ago · MIT-0

Ollama Memory Embeddings

This skill configures OpenClaw memory search to use Ollama as the embeddings server via its OpenAI-compatible /v1/embeddings endpoint.

Embeddings only. This skill does not affect chat/completions routing — it only changes how memory-search embedding vectors are generated.

What it does

  • Installs this skill under ~/.openclaw/skills/ollama-memory-embeddings
  • Verifies Ollama is installed and reachable
  • Lets the user choose an embedding model:
    • embeddinggemma (default — closest to OpenClaw built-in)
    • nomic-embed-text (strong quality, efficient)
    • all-minilm (smallest/fastest)
    • mxbai-embed-large (highest quality, larger)
  • Optionally imports an existing local embedding GGUF into Ollama via ollama create (currently detects embeddinggemma, nomic-embed, all-minilm, and mxbai-embed GGUFs in known cache directories)
  • Normalizes model names (handles :latest tag automatically)
  • Updates agents.defaults.memorySearch in OpenClaw config (surgical — only touches keys this skill owns):
    • provider = "openai"
    • model = <selected model>:latest
    • remote.baseUrl = "http://127.0.0.1:11434/v1/"
    • remote.apiKey = "ollama" (required by client, ignored by Ollama)
  • Performs a post-write config sanity check (reads back and validates JSON)
  • Optionally restarts the OpenClaw gateway (with detection of available restart methods: openclaw gateway restart, systemd, launchd)
  • Optional memory reindex during install (openclaw memory index --force --verbose)
  • Runs a two-step verification:
    1. Checks model exists in ollama list
    2. Calls the embeddings endpoint and validates the response
  • Adds an idempotent drift-enforcement command (enforce.sh)
  • Adds optional config drift auto-healing watchdog (watchdog.sh)
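After a successful install, the agents.defaults.memorySearch block of ~/.openclaw/openclaw.json should look roughly like this (illustrative only; your file will contain other keys, which the installer leaves untouched):

```json
{
  "agents": {
    "defaults": {
      "memorySearch": {
        "provider": "openai",
        "model": "embeddinggemma:latest",
        "remote": {
          "baseUrl": "http://127.0.0.1:11434/v1/",
          "apiKey": "ollama"
        }
      }
    }
  }
}
```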

Install

bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh

From this repository:

bash skills/ollama-memory-embeddings/install.sh

Non-interactive usage

bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh \
  --non-interactive \
  --model embeddinggemma \
  --reindex-memory auto

Bulletproof setup (install watchdog):

bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh \
  --non-interactive \
  --model embeddinggemma \
  --reindex-memory auto \
  --install-watchdog \
  --watchdog-interval 60

Note: In non-interactive mode, --import-local-gguf auto is treated as no (safe default). Use --import-local-gguf yes to explicitly opt in.

Options:

  • --model <id>: one of embeddinggemma, nomic-embed-text, all-minilm, mxbai-embed-large
  • --import-local-gguf <auto|yes|no>: default no (safer default; opt in with yes)
  • --import-model-name <name>: default embeddinggemma-local
  • --restart-gateway <yes|no>: default no (restart only when explicitly requested)
  • --skip-restart: deprecated alias for --restart-gateway no
  • --openclaw-config <path>: config file path override
  • --install-watchdog: install launchd drift auto-heal watchdog (macOS)
  • --watchdog-interval <sec>: watchdog interval (default 60)
  • --reindex-memory <auto|yes|no>: memory rebuild mode (default auto)
  • --dry-run: print planned changes and commands; make no modifications

Verify

~/.openclaw/skills/ollama-memory-embeddings/verify.sh

Use --verbose to dump raw API response on failure:

~/.openclaw/skills/ollama-memory-embeddings/verify.sh --verbose
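You can also hit the endpoint by hand. A minimal sketch (assumes Ollama on its default port and that the selected model is already pulled; the probe of /api/version is just a guard so the snippet degrades gracefully when Ollama is down):

```shell
# Probe Ollama first, then request an embedding via the OpenAI-compatible API
BASE="http://127.0.0.1:11434"
if curl -fsS "$BASE/api/version" >/dev/null 2>&1; then
  curl -s "$BASE/v1/embeddings" \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer ollama' \
    -d '{"model": "embeddinggemma:latest", "input": "hello world"}'
else
  echo "Ollama is not reachable at $BASE"
fi
```

A healthy response contains a data[0].embedding array of floats.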

Drift enforcement and auto-heal

Manually enforce desired state (safe to run repeatedly):

~/.openclaw/skills/ollama-memory-embeddings/enforce.sh \
  --model embeddinggemma \
  --openclaw-config ~/.openclaw/openclaw.json

Check for drift only:

~/.openclaw/skills/ollama-memory-embeddings/enforce.sh \
  --check-only \
  --model embeddinggemma
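Conceptually, the drift check just compares the live config against the desired keys. A minimal grep-based sketch (the real enforce.sh presumably parses JSON properly; the key names are the ones listed under "What it does"):

```shell
# Return 0 when the config already matches the desired Ollama-backed state
check_drift() {
  grep -q '"provider": *"openai"' "$1" &&
  grep -q '"baseUrl": *"http://127.0.0.1:11434/v1/"' "$1"
}

cfg=$(mktemp)
printf '%s\n' '{"provider": "openai", "baseUrl": "http://127.0.0.1:11434/v1/"}' > "$cfg"
if check_drift "$cfg"; then echo "no drift"; else echo "drift detected"; fi
rm -f "$cfg"
```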

Run watchdog once (check + heal):

~/.openclaw/skills/ollama-memory-embeddings/watchdog.sh \
  --once \
  --model embeddinggemma

Install watchdog via launchd (macOS):

~/.openclaw/skills/ollama-memory-embeddings/watchdog.sh \
  --install-launchd \
  --model embeddinggemma \
  --interval-sec 60
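For reference, a user-level LaunchAgent that runs the watchdog on an interval looks roughly like the following (illustrative only; the label, paths, and keys the script actually writes may differ, so inspect watchdog.sh for the real template):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.example.ollama-memory-embeddings.watchdog</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/bash</string>
    <string>/Users/yourname/.openclaw/skills/ollama-memory-embeddings/watchdog.sh</string>
    <string>--once</string>
    <string>--model</string>
    <string>embeddinggemma</string>
  </array>
  <key>StartInterval</key>
  <integer>60</integer>
</dict>
</plist>
```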

GGUF detection scope

The installer searches for embedding GGUFs matching these patterns in known cache directories (~/.node-llama-cpp/models, ~/.cache/node-llama-cpp/models, ~/.cache/openclaw/models):

  • *embeddinggemma*.gguf
  • *nomic-embed*.gguf
  • *all-minilm*.gguf
  • *mxbai-embed*.gguf
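The scan amounts to a pattern match over those directories. A self-contained sketch of the idea (the -maxdepth choice here is an assumption; see install.sh for the exact behavior):

```shell
# Find embedding GGUFs matching the documented patterns in the given directories
scan_ggufs() {
  for d in "$@"; do
    [ -d "$d" ] || continue
    find "$d" -maxdepth 2 -type f \( -name '*embeddinggemma*.gguf' \
      -o -name '*nomic-embed*.gguf' -o -name '*all-minilm*.gguf' \
      -o -name '*mxbai-embed*.gguf' \)
  done
}

scan_ggufs ~/.node-llama-cpp/models ~/.cache/node-llama-cpp/models ~/.cache/openclaw/models
```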

Other embedding GGUFs are not auto-detected. You can always import manually:

ollama create my-model -f /path/to/Modelfile
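For a manual import, the Modelfile can be as small as a single FROM line pointing at the GGUF on disk (the path below is a placeholder):

```
FROM /path/to/your-embedding-model.gguf
```

After the create command succeeds, the model should appear in `ollama list`.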

Notes

  • This does not modify OpenClaw package code. It only updates user config.
  • A timestamped backup of config is written before changes.
  • If no local GGUF exists, install proceeds by pulling the selected model from Ollama.
  • Model names are normalized with :latest tag for consistent Ollama interaction.
  • If the embedding model changes, rebuild/re-embed existing memory vectors to avoid retrieval mismatches across incompatible vector spaces.
  • With --reindex-memory auto, the installer reindexes only when the effective embedding fingerprint (provider, model, baseUrl, apiKey presence) has changed.
  • Drift checks require a non-empty apiKey but do not require a literal "ollama" value.
  • Config backups are created only when a write is needed.
  • Legacy schema fallback is supported: if agents.defaults.memorySearch is absent, the enforcer reads known legacy paths and mirrors writes to preserve compatibility.
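The :latest normalization mentioned in the notes is simple enough to show inline. A sketch of the logic (the real implementation lives in the packaged Node helper and may differ):

```shell
# Append ":latest" only when the model name carries no explicit tag
normalize_model() {
  case "$1" in
    *:*) printf '%s\n' "$1" ;;         # already tagged, leave as-is
    *)   printf '%s\n' "$1:latest" ;;  # add the default tag
  esac
}

normalize_model embeddinggemma        # prints embeddinggemma:latest
normalize_model all-minilm:latest     # prints all-minilm:latest
```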
