Ollama Local
Pass. Audited by ClawScan on May 1, 2026.
Overview
This skill appears purpose-aligned for Ollama management, but it can change your Ollama model list and send prompts to whatever Ollama server you configure.
This skill is reasonable to install if you want Ollama model management and local inference helpers. Before use, confirm the Ollama host, avoid sending private data to untrusted remote servers, and treat pull/remove/sub-agent commands as actions that can affect your local resources or model inventory.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
A mistaken or autonomous remove command could delete an installed model from the selected Ollama server.
The helper can delete an Ollama model from the configured server. This is disclosed model-management behavior, but it changes local or remote model state.
api_request("/api/delete", method="DELETE", data={"name": model_name})
Use remove commands only for a specific model the user asked to delete, and confirm the intended OLLAMA_HOST before running them.
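The mitigation above can be sketched as a small pre-flight guard. This is a minimal illustration, not part of the skill itself: the helper names `is_exact_model` and `delete_url` are hypothetical, and the actual DELETE call would still go through the skill's `api_request` helper.

```python
def is_exact_model(requested: str, target: str) -> bool:
    """Guard: only allow deletion when the model the user explicitly
    asked to remove matches the target exactly (including its tag)."""
    return requested.strip() == target.strip()

def delete_url(ollama_host: str) -> str:
    """Build the Ollama delete endpoint for a confirmed host, so the
    operator can inspect exactly where the request will go."""
    return ollama_host.rstrip("/") + "/api/delete"
```

With this guard, a remove request for `llama3.2:3b` would be rejected if the user had actually asked to delete `llama3.2:1b`.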
Pulling models can download large external artifacts whose behavior and provenance depend on the selected Ollama model source.
The pull helper asks Ollama to download a named model. This is central to the skill, but model provenance and trust are delegated to Ollama and the chosen model tag.
for chunk in api_stream("/api/pull", {"name": model_name}):
Pull models from trusted sources and prefer known model names/tags for sensitive workflows.
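One way to enforce "known model names/tags" is an explicit allowlist checked before any pull is issued. This is a hedged sketch: the allowlist contents are placeholder examples, and `safe_to_pull` is a hypothetical helper, not part of the audited skill.

```python
# Hypothetical allowlist of model tags vetted for sensitive workflows.
TRUSTED_MODELS = {"llama3.2:3b", "mistral:7b"}

def safe_to_pull(tag: str, allowlist: set = TRUSTED_MODELS) -> bool:
    """Return True only for model tags the operator has pre-approved,
    so an agent cannot pull arbitrary external artifacts."""
    return tag in allowlist
```

A pull for an unlisted tag would then require explicit operator approval rather than proceeding silently.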
If OLLAMA_HOST points to a remote server, private prompts or embedded text may leave the local machine, and the examples use plain HTTP.
The skill explicitly supports a remote Ollama host. Chat prompts, system prompts, and embedding text are sent to the configured host.
export OLLAMA_HOST="http://localhost:11434"
# Or for a remote server:
export OLLAMA_HOST="http://192.168.1.100:11434"
Keep OLLAMA_HOST on localhost for private data, or use only trusted remote Ollama servers and appropriate network protections.
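The "keep it local for private data" advice can be checked programmatically before any prompt leaves the machine. A minimal sketch, assuming the standard-library `urllib.parse`; the function name `is_local_host` is illustrative, not from the skill.

```python
from urllib.parse import urlparse

def is_local_host(ollama_host: str) -> bool:
    """True when OLLAMA_HOST points at the local machine over loopback.
    Anything else (LAN IPs, remote hosts, plain-HTTP servers elsewhere)
    should be treated as a boundary that private data crosses."""
    host = urlparse(ollama_host).hostname
    return host in ("localhost", "127.0.0.1", "::1")
```

A wrapper could refuse to send chat or embedding text containing sensitive material whenever this check fails.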
Sub-agents may consume local compute resources and receive task details that the user may not expect to share across multiple agents.
The documentation shows spawning one or more local Ollama sub-agents. This is disclosed and purpose-aligned, but it delegates task context to additional agents.
Spawn local model sub-agents with `sessions_spawn` ...
for a in agents:
    sessions_spawn(task=a["task"], model=a["model"], label=a["label"])
Spawn sub-agents only when useful for the user's task, limit the number of agents, and avoid passing secrets unless necessary.
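The two mitigations above (cap the agent count, forward only what each agent needs) can be combined in a small planning step before any `sessions_spawn` call. The cap value and the helper `plan_agents` are assumptions for illustration only.

```python
MAX_AGENTS = 3  # hypothetical cap; tune to local compute budget

def plan_agents(agents: list, max_agents: int = MAX_AGENTS) -> list:
    """Trim the agent list to the cap and forward only the fields the
    spawn call needs (task/model/label), dropping anything else that
    might carry secrets or unrelated context."""
    return [{"task": a["task"], "model": a["model"], "label": a["label"]}
            for a in agents[:max_agents]]
```

The trimmed list would then be iterated with `sessions_spawn` as shown in the snippet above, rather than spawning one agent per raw entry.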
