v1.1.0

Ollama Local

Verdict: Benign. ClawScan verdict for this skill. Analyzed May 1, 2026, 5:19 AM.

Analysis

This skill appears purpose-aligned for Ollama management, but it can change your Ollama model list and send prompts to whatever Ollama server you configure.

Guidance

This skill is reasonable to install if you want Ollama model management and local inference helpers. Before use, confirm the Ollama host, avoid sending private data to untrusted remote servers, and treat pull, remove, and sub-agent commands as actions that can affect your local resources or model inventory.

Findings (4)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Abnormal behavior control

Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.

Tool Misuse and Exploitation
Severity: Low · Confidence: High · Status: Note
scripts/ollama.py
api_request("/api/delete", method="DELETE", data={"name": model_name})

The helper can delete an Ollama model from the configured server. This is disclosed model-management behavior, but it changes local or remote model state.

User impact: A mistaken or autonomous remove command could delete an installed model from the selected Ollama server.
Recommendation: Use remove commands only for a specific model the user asked to delete, and confirm the intended OLLAMA_HOST before running them.
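As a mitigation sketch, a wrapper can require the caller to re-supply the exact model name before the delete is issued. This helper is illustrative, not part of the skill: the `/api/delete` endpoint and `{"name": ...}` payload follow the snippet quoted above, and the injectable `opener` parameter is a hypothetical test seam, not an Ollama API feature.

```python
import json
import urllib.request

def remove_model(model_name, confirmed_name,
                 host="http://localhost:11434",
                 opener=urllib.request.urlopen):
    """Delete an Ollama model, but only when the caller re-supplies
    the exact name as confirmation."""
    if confirmed_name != model_name:
        raise ValueError("confirmation does not match model name; refusing to delete")
    req = urllib.request.Request(
        f"{host}/api/delete",
        data=json.dumps({"name": model_name}).encode(),
        headers={"Content-Type": "application/json"},
        method="DELETE",
    )
    return opener(req)
```

Requiring the confirmation string to be passed separately means an autonomous caller cannot delete a model by accident with a single argument.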
Agentic Supply Chain Vulnerabilities
Severity: Low · Confidence: High · Status: Note
scripts/ollama.py
for chunk in api_stream("/api/pull", {"name": model_name}):

The pull helper asks Ollama to download a named model. This is central to the skill, but model provenance and trust are delegated to Ollama and the chosen model tag.

User impact: Pulling models can download large external artifacts whose behavior and provenance depend on the selected Ollama model source.
Recommendation: Pull models from trusted sources and prefer known model names/tags for sensitive workflows.
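One way to act on this recommendation is a small allowlist check before any pull is issued. A sketch: the entries in `TRUSTED_MODELS` below are illustrative assumptions, not a vetted registry, and the tag-stripping rule assumes the usual `name:tag` convention for Ollama model names.

```python
# Example allowlist of base model names; these entries are
# illustrative, not an endorsement of any particular source.
TRUSTED_MODELS = {"llama3", "mistral", "nomic-embed-text"}

def pull_allowed(model_name, trusted=TRUSTED_MODELS):
    """Allow a pull only when the base name (before any ':tag')
    is on the allowlist."""
    base = model_name.split(":", 1)[0]
    return base in trusted
```

A caller would check `pull_allowed(name)` and surface a warning, rather than pulling, for any name outside the list.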
Rogue Agents
Severity: Low · Confidence: High · Status: Note
SKILL.md
Spawn local model sub-agents with `sessions_spawn` ... for a in agents:
    sessions_spawn(task=a["task"], model=a["model"], label=a["label"])

The documentation shows spawning one or more local Ollama sub-agents. This is disclosed and purpose-aligned, but it delegates task context to additional agents.

User impact: Sub-agents may consume local compute resources and receive task details that the user may not expect to share across multiple agents.
Recommendation: Spawn sub-agents only when useful for the user’s task, limit the number of agents, and avoid passing secrets unless necessary.
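A minimal guard along these lines caps the fan-out before delegating. In this sketch, `spawn` stands in for the skill's `sessions_spawn` and is injected as a parameter so the example stays self-contained; the limit of 3 is an arbitrary assumption.

```python
def spawn_bounded(agents, spawn, max_agents=3):
    """Delegate each task to a sub-agent, refusing to exceed max_agents.

    `spawn` is a sessions_spawn-style callable taking task/model/label
    keyword arguments.
    """
    if len(agents) > max_agents:
        raise ValueError(
            f"refusing to spawn {len(agents)} sub-agents (limit {max_agents})")
    for a in agents:
        spawn(task=a["task"], model=a["model"], label=a["label"])
```

Checking the count before the loop means an over-long agent list spawns nothing at all, rather than stopping partway.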
Sensitive data protection

Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.

Insecure Inter-Agent Communication
Severity: Low · Confidence: High · Status: Note
SKILL.md
export OLLAMA_HOST="http://localhost:11434"
# Or for remote server:
export OLLAMA_HOST="http://192.168.1.100:11434"

The skill explicitly supports a remote Ollama host. Chat prompts, system prompts, and embedding text are sent to the configured host.

User impact: If OLLAMA_HOST points to a remote server, private prompts or embedded text may leave the local machine, and the examples use plain HTTP.
Recommendation: Keep OLLAMA_HOST on localhost for private data, or use only trusted remote Ollama servers and appropriate network protections.
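This recommendation can be enforced mechanically by checking whether OLLAMA_HOST resolves to a loopback name before private prompts are sent. A sketch: the hostname set below is a simple assumption and does not cover every loopback alias or DNS name that maps to the local machine.

```python
from urllib.parse import urlparse

# Assumed set of loopback hostnames; extend as needed.
LOCAL_HOSTNAMES = {"localhost", "127.0.0.1", "::1"}

def is_local_ollama(host_url):
    """True when the configured OLLAMA_HOST points at this machine."""
    return urlparse(host_url).hostname in LOCAL_HOSTNAMES

def guard_private_send(host_url, allow_remote=False):
    """Raise before private data would leave the local machine."""
    if not is_local_ollama(host_url) and not allow_remote:
        raise RuntimeError(f"refusing to send private data to {host_url}")
```

Requiring an explicit `allow_remote=True` makes sending private data to a remote server a deliberate choice rather than a default.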