Ollama Local
Analysis
This skill appears purpose-aligned for Ollama management, but it can change your Ollama model list and send prompts to whatever Ollama server you configure.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Checks for instructions or behaviors that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue operating outside the intended task.
api_request("/api/delete", method="DELETE", data={"name": model_name})The helper can delete an Ollama model from the configured server. This is disclosed model-management behavior, but it changes local or remote model state.
```python
for chunk in api_stream("/api/pull", {"name": model_name}):
```
The pull helper asks Ollama to download a named model. This is central to the skill, but model provenance and trust are delegated to Ollama and the chosen model tag.
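For illustration, a hedged sketch of a streaming pull against the standard `/api/pull` endpoint, which returns newline-delimited JSON progress objects; the `requests`-based plumbing here is an assumption:

```python
import json
import os
import requests

host = os.environ.get("OLLAMA_HOST", "http://localhost:11434")

def pull_model(model_name: str):
    # /api/pull streams one JSON progress object per line until the pull completes.
    with requests.post(f"{host}/api/pull",
                       json={"name": model_name}, stream=True) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if line:
                yield json.loads(line)  # e.g. {"status": "pulling manifest"}

# Trust in what actually gets downloaded is delegated to the registry behind the tag.
for chunk in pull_model("llama3.2"):  # hypothetical model tag
    print(chunk.get("status", ""))
```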
Spawn local model sub-agents with `sessions_spawn` ...

```python
for a in agents:
    sessions_spawn(task=a["task"], model=a["model"], label=a["label"])
```
The documentation shows spawning one or more local Ollama sub-agents. This is disclosed and purpose-aligned, but it delegates task context to additional agents.
Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.
```bash
export OLLAMA_HOST="http://localhost:11434"
# Or for a remote server:
export OLLAMA_HOST="http://192.168.1.100:11434"
```
The skill explicitly supports a remote Ollama host. Chat prompts, system prompts, and embedding text are sent to the configured host.
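To make that data flow concrete, here is a sketch of a chat call assuming the standard `/api/chat` endpoint: every message in the payload travels to whichever host `OLLAMA_HOST` names, so a remote value sends user and system prompts off the local machine. The model tag and message contents are illustrative:

```python
import os
import requests

host = os.environ.get("OLLAMA_HOST", "http://localhost:11434")

payload = {
    "model": "llama3.2",  # hypothetical model tag
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize my private notes."},
    ],
    "stream": False,  # ask for a single JSON response instead of a stream
}

# The full prompt text in `payload` is sent to `host`, local or remote.
resp = requests.post(f"{host}/api/chat", json=payload)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```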
