Install

```
openclaw skills install ollama-memory-embeddings
```

This skill configures OpenClaw memory search to use Ollama as the embeddings server via its OpenAI-compatible /v1/embeddings endpoint, instead of the built-in node-llama-cpp local GGUF loading. It includes interactive model selection and optional import of an existing local embedding GGUF into Ollama.

Embeddings only. This skill does not affect chat/completions routing — it only changes how memory-search embedding vectors are generated.
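For a quick manual check that Ollama is serving embeddings, you can probe the endpoint directly. This is an illustrative sketch, not part of the skill: it assumes Ollama is listening on its default port and that `embeddinggemma` has already been pulled, and it prints a notice instead of failing when the server is down.

```shell
# Probe Ollama's OpenAI-compatible embeddings endpoint.
# Hypothetical helper; assumes the default 127.0.0.1:11434 listener.
probe_embeddings() {
  base="http://127.0.0.1:11434/v1"
  if curl -fsS --max-time 2 "$base/models" >/dev/null 2>&1; then
    # Request shape follows the OpenAI embeddings API; the key is ignored by Ollama.
    curl -fsS "$base/embeddings" \
      -H "Authorization: Bearer ollama" \
      -H "Content-Type: application/json" \
      -d '{"model": "embeddinggemma:latest", "input": "hello world"}'
  else
    echo "Ollama not reachable at $base"
  fi
}
probe_embeddings
```

A successful response contains a `data` array with one embedding vector per input.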
What the installer does:

- Installs to `~/.openclaw/skills/ollama-memory-embeddings`
- Interactive model selection:
  - `embeddinggemma` (default — closest to OpenClaw built-in)
  - `nomic-embed-text` (strong quality, efficient)
  - `all-minilm` (smallest/fastest)
  - `mxbai-embed-large` (highest quality, larger)
- Optional import of an existing local embedding GGUF via `ollama create` (currently detects embeddinggemma, nomic-embed, all-minilm, and mxbai-embed GGUFs in known cache directories)
- Pulls the selected model (adds the `:latest` tag automatically)
- Updates `agents.defaults.memorySearch` in the OpenClaw config (surgical — only touches keys this skill owns):
  - `provider = "openai"`
  - `model = <selected model>:latest`
  - `remote.baseUrl = "http://127.0.0.1:11434/v1/"`
  - `remote.apiKey = "ollama"` (required by the client, ignored by Ollama)
- Optional gateway restart (`openclaw gateway restart`, systemd, launchd)
- Optional memory reindex (`openclaw memory index --force --verbose`)
- Verification against `ollama list`
- Drift enforcement (`enforce.sh`)
- Auto-heal watchdog (`watchdog.sh`)

Quick install:

```
bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh
```
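Put together, the config keys above correspond to a block roughly like the following. This is a sketch: the nesting of the `remote.*` keys is inferred from the dotted paths, and the exact schema may differ across OpenClaw versions.

```json
{
  "agents": {
    "defaults": {
      "memorySearch": {
        "provider": "openai",
        "model": "embeddinggemma:latest",
        "remote": {
          "baseUrl": "http://127.0.0.1:11434/v1/",
          "apiKey": "ollama"
        }
      }
    }
  }
}
```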
From this repository:

```
bash skills/ollama-memory-embeddings/install.sh
```
Non-interactive:

```
bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh \
  --non-interactive \
  --model embeddinggemma \
  --reindex-memory auto
```
Bulletproof setup (install watchdog):

```
bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh \
  --non-interactive \
  --model embeddinggemma \
  --reindex-memory auto \
  --install-watchdog \
  --watchdog-interval 60
```
Note: In non-interactive mode, `--import-local-gguf auto` is treated as `no` (safe default). Use `--import-local-gguf yes` to explicitly opt in.
Options:

- `--model <id>`: one of `embeddinggemma`, `nomic-embed-text`, `all-minilm`, `mxbai-embed-large`
- `--import-local-gguf <auto|yes|no>`: default `no` (safer default; opt in with `yes`)
- `--import-model-name <name>`: default `embeddinggemma-local`
- `--restart-gateway <yes|no>`: default `no` (restart only when explicitly requested)
- `--skip-restart`: deprecated alias for `--restart-gateway no`
- `--openclaw-config <path>`: config file path override
- `--install-watchdog`: install the launchd drift auto-heal watchdog (macOS)
- `--watchdog-interval <sec>`: watchdog interval (default 60)
- `--reindex-memory <auto|yes|no>`: memory rebuild mode (default `auto`)
- `--dry-run`: print planned changes and commands; make no modifications

Verify the configuration:

```
~/.openclaw/skills/ollama-memory-embeddings/verify.sh
```
Use `--verbose` to dump the raw API response on failure:

```
~/.openclaw/skills/ollama-memory-embeddings/verify.sh --verbose
```
Manually enforce desired state (safe to run repeatedly):

```
~/.openclaw/skills/ollama-memory-embeddings/enforce.sh \
  --model embeddinggemma \
  --openclaw-config ~/.openclaw/openclaw.json
```
Check for drift only:

```
~/.openclaw/skills/ollama-memory-embeddings/enforce.sh \
  --check-only \
  --model embeddinggemma
```
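Conceptually, a check-only drift pass just compares the live config against the desired values. The sketch below is illustrative, not the real `enforce.sh` logic; it assumes `python3` is available for JSON parsing and uses the key paths from the config section above.

```shell
# Minimal drift check: compare memorySearch keys in a config file
# against expected values. Prints "OK" or the drifted keys.
# Hypothetical logic, not the actual enforce.sh implementation.
check_drift() {
  cfg="$1"
  python3 - "$cfg" <<'EOF'
import json, sys
cfg = json.load(open(sys.argv[1]))
ms = cfg.get("agents", {}).get("defaults", {}).get("memorySearch", {})
want = {"provider": "openai", "model": "embeddinggemma:latest"}
drift = [k for k, v in want.items() if ms.get(k) != v]
print("DRIFT: " + ", ".join(drift) if drift else "OK")
EOF
}
```

Example: `check_drift ~/.openclaw/openclaw.json` prints `OK` when the owned keys match, or lists the keys that drifted.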
Run watchdog once (check + heal):

```
~/.openclaw/skills/ollama-memory-embeddings/watchdog.sh \
  --once \
  --model embeddinggemma
```
Install watchdog via launchd (macOS):

```
~/.openclaw/skills/ollama-memory-embeddings/watchdog.sh \
  --install-launchd \
  --model embeddinggemma \
  --interval-sec 60
```
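For orientation, a periodic launchd job of this kind is typically described by a plist along these lines. Everything here is a hypothetical sketch: the label, install path, and arguments actually written by `watchdog.sh --install-launchd` may differ, and launchd requires absolute paths, so the `/Users/yourname/...` path is a placeholder.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.openclaw.ollama-embeddings-watchdog</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/bash</string>
    <string>/Users/yourname/.openclaw/skills/ollama-memory-embeddings/watchdog.sh</string>
    <string>--once</string>
    <string>--model</string>
    <string>embeddinggemma</string>
  </array>
  <key>StartInterval</key>
  <integer>60</integer>
  <key>RunAtLoad</key>
  <true/>
</dict>
</plist>
```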
The installer searches for embedding GGUFs matching these patterns in known
cache directories (~/.node-llama-cpp/models, ~/.cache/node-llama-cpp/models,
~/.cache/openclaw/models):
- `*embeddinggemma*.gguf`
- `*nomic-embed*.gguf`
- `*all-minilm*.gguf`
- `*mxbai-embed*.gguf`

Other embedding GGUFs are not auto-detected. You can always import manually:

```
ollama create my-model -f /path/to/Modelfile
```
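For a manual import, the Modelfile can be as small as a single `FROM` line pointing at the GGUF on disk (the path below is a placeholder):

```
FROM /path/to/your-embedding-model.gguf
```

`ollama create` then registers the model under whatever name you choose; the skill's own import uses the `--import-model-name` value (default `embeddinggemma-local`).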
Notes:

- Models are referenced with the `:latest` tag for consistent Ollama interaction.
- With `--reindex-memory auto`, the installer reindexes only when the effective embedding fingerprint changed (provider, model, baseUrl, apiKey presence).
- The client requires an API key, so `remote.apiKey` is set to the literal `"ollama"` value, which Ollama ignores.
- When `agents.defaults.memorySearch` is absent, the enforcer reads known legacy paths and mirrors writes to preserve compatibility.
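The "embedding fingerprint" idea can be sketched as hashing exactly the settings that affect vector compatibility, including whether a key is present but not its value. This is an illustrative sketch, not the installer's actual implementation; it uses POSIX `cksum` to stay dependency-free.

```shell
# Illustrative embedding fingerprint: checksum of provider, model,
# baseUrl, and apiKey *presence* (never the key's value).
# Hypothetical helper, not the installer's real code.
embedding_fingerprint() {
  provider="$1"; model="$2"; base_url="$3"; api_key="$4"
  key_present="no"
  [ -n "$api_key" ] && key_present="yes"
  printf '%s|%s|%s|%s' "$provider" "$model" "$base_url" "$key_present" | cksum
}
```

If the fingerprint matches the one recorded at the last index, an auto-mode reindex can safely be skipped; changing the model or base URL changes the fingerprint and forces a rebuild.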