Ollama Memory Setup

v1.0.0

Sets up local semantic memory search for OpenClaw using Ollama + nomic-embed-text. Use when: (1) memory_search returns 'node-llama-cpp is missing' or 'Local embeddings unavailable', or (2) you want to keep embeddings local without external APIs.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for a remote or guided setup. Copy the exact prompt below, then paste it into OpenClaw to install brasco05/ollama-memory-setup.

Prompt preview: Install & Setup
Install the skill "Ollama Memory Setup" (brasco05/ollama-memory-setup) from ClawHub.
Skill page: https://clawhub.ai/brasco05/ollama-memory-setup
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install ollama-memory-setup

ClawHub CLI


npx clawhub@latest install ollama-memory-setup
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the included files and instructions: the script installs/starts Ollama, pulls the nomic-embed-text model, and updates OpenClaw's memorySearch config. All requested actions align with enabling local embeddings.
Instruction Scope
SKILL.md and setup.sh are scoped to installing/starting Ollama, loading the embedding model, and configuring OpenClaw. The instructions only touch OpenClaw config (expected) and localhost:11434; they do not read or transmit unrelated system files or environment variables.
Install Mechanism
Installation uses brew on macOS or runs https://ollama.com/install.sh via curl | sh on Linux. Using the vendor's official install script is reasonable for this purpose, but executing remote install scripts is higher-risk than package-managed installs — review the install script if you want to be cautious.
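
To act on that caution in practice, one option is to download the installer to a file, read it, and only then run it:

# Fetch the installer instead of piping curl straight into sh
curl -fsSL https://ollama.com/install.sh -o /tmp/ollama-install.sh

# Review what it does before executing
less /tmp/ollama-install.sh

# Run it only after inspection
sh /tmp/ollama-install.sh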
Credentials
The skill declares no environment variables, no credentials, and no sensitive config paths. The OpenClaw config changes are proportional to enabling memory search and are justified by the stated goal.
Persistence & Privilege
The skill is not always-enabled and does not request persistent elevated privileges. It modifies only OpenClaw's agent memorySearch settings (its intended target) and starts a local Ollama process; nothing indicates it alters other skills or global agent policies.
Assessment
This skill appears to do what it says: install/start Ollama, pull the nomic-embed-text embedding model, and set OpenClaw to use Ollama on localhost. Before running it: (1) review the remote installer (https://ollama.com/install.sh) if you are uncomfortable with curl | sh, (2) ensure you trust Ollama and are okay with downloading ~270MB for the model, (3) back up your OpenClaw config if desired (the script runs openclaw config set commands), and (4) run the script interactively rather than blindly pasting into a root shell so you can inspect output and abort on unexpected behavior. No API keys or other credentials are requested by this skill.
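
If you want a backup before the script touches your settings, a minimal sketch is below; the config path is an assumption (OpenClaw's state lives under ~/.openclaw/ in this setup, but the exact file name may differ on yours):

# Assumed config location; adjust to wherever your OpenClaw config actually lives
cp ~/.openclaw/openclaw.json ~/.openclaw/openclaw.json.bak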

Like a lobster shell, security has layers — review code before you run it.

Latest: v1.0.0 (vk970xfey8dde17n2enfkbbat2n83t04t)
137 downloads · 0 stars · 1 version · updated 1 month ago
License: MIT-0

Ollama Memory Setup

Enables semantic memory search in OpenClaw using Ollama locally — no API keys, no cloud, fully private.

When to use

Use this skill when memory_search throws any of the following errors:

  • node-llama-cpp is missing (or failed to install)
  • Local embeddings unavailable
  • Cannot find package 'node-llama-cpp'
  • optional dependency node-llama-cpp is missing

Or when you want to keep embeddings local, without external APIs (OpenAI, Gemini, Voyage).

Usage

Automatic (recommended)

# Run the setup script
bash ~/.openclaw/workspace/skills/ollama-memory-setup/scripts/setup.sh

# Restart OpenClaw
openclaw gateway restart

Manual (step by step)

# 1. Install Ollama
brew install ollama                    # macOS
curl -fsSL https://ollama.com/install.sh | sh  # Linux

# 2. Start Ollama (macOS: as a service, starts automatically)
brew services start ollama

# 3. Pull the embedding model (~270MB, one-time)
ollama pull nomic-embed-text

# 4. Configure OpenClaw
openclaw config set agents.defaults.memorySearch.provider ollama
openclaw config set agents.defaults.memorySearch.model nomic-embed-text
openclaw config set agents.defaults.memorySearch.remote.baseUrl http://localhost:11434
openclaw config set agents.defaults.memorySearch.enabled true

# 5. Restart
openclaw gateway restart
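
Before restarting, you can confirm Ollama itself is serving the model, independent of OpenClaw. This is a quick sanity check against Ollama's standard HTTP API, assuming the default port:

# List available models; nomic-embed-text should appear
curl -s http://localhost:11434/api/tags

# Request a test embedding; the response should contain an "embedding" array
curl -s http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "hello world"}'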

Requirements

No API keys needed. Prerequisites (a quick preflight check is sketched after this list):

  • macOS: Homebrew installed (brew --version)
  • Linux: curl installed, systemd recommended
  • Ollama version: >= 0.18.0
  • Disk space: ~300MB for the nomic-embed-text model
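
A rough preflight script for these prerequisites, as a sketch you can adapt:

#!/usr/bin/env bash
# Preflight check for the prerequisites above
set -euo pipefail

if [[ "$(uname)" == "Darwin" ]]; then
  command -v brew >/dev/null || { echo "Homebrew missing"; exit 1; }
else
  command -v curl >/dev/null || { echo "curl missing"; exit 1; }
fi

# Ollama may not be installed yet; the setup script handles that
if command -v ollama >/dev/null; then
  ollama --version   # should report >= 0.18.0
else
  echo "Ollama not installed yet"
fi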

Verify

After the restart, test in a fresh session:

memory_search("test")

The expected response contains "provider": "ollama", not disabled: true.
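
If the response still shows disabled: true, check whether the settings actually landed. Assuming the CLI mirrors the `config set` commands above with a `config get` counterpart (not confirmed on this page; inspect the config file directly if it does not exist):

# Hypothetical read-back of the values written in step 4
openclaw config get agents.defaults.memorySearch.provider
openclaw config get agents.defaults.memorySearch.enabled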

Why nomic-embed-text?

nomic-embed-text is a specialized embedding model (not for chat):

  • Small (~270MB vs. several GB for chat models)
  • Fast (~50ms per request on modern hardware; see the timing check below)
  • High quality for semantic search
  • Free and open source (Apache 2.0)

Alternative model name for older Ollama versions: nomic-embed-text:latest
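
To sanity-check the latency figure on your own hardware, time a single embedding request against the local API (assumes the default port and that the model is already pulled):

# Rough latency measurement for one embedding request
time curl -s http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "semantic search test"}' > /dev/null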

Troubleshooting

See references/troubleshooting.md for common issues such as:

  • Ollama does not start
  • memory_search stays disabled after setup
  • macOS: Ollama stops after a reboot
  • Linux: setting up the systemd service (a sketch follows below)
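
For the Linux case: the official installer normally registers an ollama systemd unit, so a typical way to keep it running across reboots looks like this (verify the unit name on your system first):

# Enable the service now and on boot
sudo systemctl enable --now ollama

# Confirm it is running
systemctl status ollama --no-pager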
