Ollama Local

Manage and use local Ollama models. Use for model management (list/pull/remove), chat/completions, embeddings, and tool-use with local LLMs. Covers OpenClaw sub-agent integration and model selection guidance.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
8 · 4.2k · 39 current installs · 42 all-time installs
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Benign
medium confidence
Purpose & Capability
Name/description align with included files: scripts implement model listing, pulling/removal, chat/generate/embeddings, and a tool-use loop. All requested capabilities are coherent with a local Ollama integration.
Instruction Scope
SKILL.md and scripts stick to the Ollama HTTP API and the skill's own scripts. The doc mentions creating an OpenClaw auth profile (`ollama:default`) and shows how to spawn sub-agents; these are guidance items that could lead users to edit OpenClaw config, but the skill does not itself instruct reading arbitrary system files or exfiltrating unrelated data.
Install Mechanism
No install spec or external downloads are present; this is an instruction-only skill with helper scripts. No archives or third-party package installs are performed by the skill.
Credentials
Metadata declares no required env vars, but SKILL.md and the scripts expect OLLAMA_HOST (defaulting to http://localhost:11434). That mismatch is a minor inconsistency. More importantly, the scripts will send model inputs and tool interactions to the address in OLLAMA_HOST — if you set that to a remote/untrusted host, user-provided content (and model/tool calls) will be transmitted off-host.
Persistence & Privilege
The skill does not request persistent/always-on privileges and does not modify other skills or system-wide configs itself. It is user-invocable and uses the normal agent invocation model.
Assessment
This skill appears to do what it says: local Ollama model management and tool-enabled inference. Before installing, check these points:

1. The scripts read an OLLAMA_HOST environment variable that the metadata does not declare. Make sure OLLAMA_HOST points to a trusted local host (the default) and not an untrusted remote server, because all chat/generate/embed requests (and any tool-call content) will be sent to that host.
2. SKILL.md suggests adding an OpenClaw auth profile (a harmless placeholder), which may prompt you to edit OpenClaw config; only do so if you understand the change.
3. The included run_code tool is a simulated implementation (it does not execute arbitrary code), but if you adapt the script, be careful not to add real remote code execution.
4. There is no installer, so review the Python scripts before running them.

For higher assurance, ask the publisher to declare the required env var (OLLAMA_HOST) in metadata and to confirm whether any OpenClaw config changes are made automatically or must be done manually.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.1.0
Tags: ai · latest · llm · local · models · ollama · tool-use


SKILL.md

Ollama Local

Work with local Ollama models for inference, embeddings, and tool use.

Configuration

Set your Ollama host (defaults to http://localhost:11434):

export OLLAMA_HOST="http://localhost:11434"
# Or for remote server:
export OLLAMA_HOST="http://192.168.1.100:11434"
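The skill's scripts resolve the host the same way: read OLLAMA_HOST and fall back to the local default. A minimal sketch (the helper names here are hypothetical, not the skill's actual functions):

```python
import os

def resolve_ollama_host() -> str:
    # Read OLLAMA_HOST, fall back to the local default, drop any trailing slash.
    host = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
    return host.rstrip("/")

def api_url(path: str) -> str:
    # Join the resolved host with an API path, e.g. "/api/tags".
    return resolve_ollama_host() + path
```

Remember that every request goes to whatever host this resolves to, so keep it pointed at a server you trust.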

Quick Reference

# List models
python3 scripts/ollama.py list

# Pull a model
python3 scripts/ollama.py pull llama3.1:8b

# Remove a model
python3 scripts/ollama.py rm modelname

# Show model details
python3 scripts/ollama.py show qwen3:4b

# Chat with a model
python3 scripts/ollama.py chat qwen3:4b "What is the capital of France?"

# Chat with system prompt
python3 scripts/ollama.py chat llama3.1:8b "Review this code" -s "You are a code reviewer"

# Generate completion (non-chat)
python3 scripts/ollama.py generate qwen3:4b "Once upon a time"

# Get embeddings
python3 scripts/ollama.py embed bge-m3 "Text to embed"

Model Selection

See references/models.md for full model list and selection guide.

Quick picks:

  • Fast answers: qwen3:4b
  • Coding: qwen2.5-coder:7b
  • General: llama3.1:8b
  • Reasoning: deepseek-r1:8b

Tool Use

Some local models support function calling. Use ollama_tools.py:

# Single request with tools
python3 scripts/ollama_tools.py single qwen2.5-coder:7b "What's the weather in Amsterdam?"

# Full tool loop (model calls tools, gets results, responds)
python3 scripts/ollama_tools.py loop qwen3:4b "Search for Python tutorials and summarize"

# Show available example tools
python3 scripts/ollama_tools.py tools

Tool-capable models: qwen2.5-coder, qwen3, llama3.1, mistral
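The core of a tool loop is the dispatch step: take the `tool_calls` from a chat response, run the matching local function, and feed the results back as `tool` messages. A minimal sketch, assuming the response shape used by Ollama's `/api/chat` (the `get_weather` tool here is a hypothetical stand-in, simulated just like the skill's run_code tool):

```python
def get_weather(city: str) -> str:
    # Simulated tool: returns canned output instead of calling a real service.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch_tool_calls(response: dict) -> list[dict]:
    # Look up each requested tool in the registry, run it with the model's
    # arguments, and collect "tool" role messages to send back to the model.
    results = []
    for call in response.get("message", {}).get("tool_calls", []):
        fn = call["function"]
        output = TOOLS[fn["name"]](**fn["arguments"])
        results.append({"role": "tool", "content": output})
    return results
```

The `loop` mode repeats this until the model replies without any tool calls.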

OpenClaw Sub-Agents

Spawn local model sub-agents with sessions_spawn:

# Example: spawn a coding agent
sessions_spawn(
    task="Review this Python code for bugs",
    model="ollama/qwen2.5-coder:7b",
    label="code-review"
)

Model path format: ollama/<model-name>
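Since a malformed model path silently falls back to the default model (see Troubleshooting), it can be worth validating the `ollama/<model-name>` format before spawning. A small hypothetical validator:

```python
def split_model_path(path: str) -> tuple[str, str]:
    # Expect "ollama/<model-name>", e.g. "ollama/qwen2.5-coder:7b".
    provider, _, model = path.partition("/")
    if provider != "ollama" or not model:
        raise ValueError(f"expected ollama/<model-name>, got {path!r}")
    return provider, model
```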

Parallel Agents (Think Tank Pattern)

Spawn multiple local agents for collaborative tasks:

agents = [
    {"label": "architect", "model": "ollama/gemma3:12b", "task": "Design the system architecture"},
    {"label": "coder", "model": "ollama/qwen2.5-coder:7b", "task": "Implement the core logic"},
    {"label": "reviewer", "model": "ollama/llama3.1:8b", "task": "Review for bugs and improvements"},
]

for a in agents:
    sessions_spawn(task=a["task"], model=a["model"], label=a["label"])

Direct API

For custom integrations, use the Ollama API directly:

# Chat
curl $OLLAMA_HOST/api/chat -d '{
  "model": "qwen3:4b",
  "messages": [{"role": "user", "content": "Hello"}],
  "stream": false
}'

# Generate
curl $OLLAMA_HOST/api/generate -d '{
  "model": "qwen3:4b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

# List models
curl $OLLAMA_HOST/api/tags

# Pull model
curl $OLLAMA_HOST/api/pull -d '{"name": "phi3:mini"}'
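The same calls can be made from Python with only the standard library. A sketch of the generate request (the request is just built here; `urllib.request.urlopen(req)` would actually send it to your OLLAMA_HOST):

```python
import json
import os
import urllib.request

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    # Resolve the host the same way the shell examples do, with a local default.
    host = os.environ.get("OLLAMA_HOST", "http://localhost:11434").rstrip("/")
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        host + "/api/generate",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```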

Troubleshooting

Connection refused?

  • Check Ollama is running: ollama serve
  • Verify OLLAMA_HOST is correct
  • For remote servers, ensure firewall allows port 11434

Model not loading?

  • Check VRAM: larger models may need CPU offload
  • Try a smaller model first

Slow responses?

  • Model may be running on CPU
  • Use smaller quantization (e.g., :7b instead of :30b)

OpenClaw sub-agent falls back to default model?

  • Ensure ollama:default auth profile exists in OpenClaw config
  • Check model path format: ollama/modelname:tag

Files

4 total
