Ollama Local

v1.1.0

Manage and use local Ollama models. Use for model management (list/pull/remove), chat/completions, embeddings, and tool-use with local LLMs. Covers OpenClaw sub-agent integration and model selection guidance.

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan

VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description align with included files: scripts implement model listing, pulling/removal, chat/generate/embeddings, and a tool-use loop. All requested capabilities are coherent with a local Ollama integration.
Instruction Scope
SKILL.md and scripts stick to the Ollama HTTP API and the skill's own scripts. The doc mentions creating an OpenClaw auth profile (`ollama:default`) and shows how to spawn sub-agents; these are guidance items that could lead users to edit OpenClaw config, but the skill does not itself instruct reading arbitrary system files or exfiltrating unrelated data.
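A tool-use loop of the kind this skill implements typically works by dispatching the model's tool calls to local Python functions and returning results as role="tool" messages. A minimal sketch of the dispatch step, using an illustrative tool rather than the skill's actual ones; the tool-call shape assumed here is the one Ollama's /api/chat returns in message.tool_calls:

```python
# Registry of locally implemented tools; this entry is illustrative only.
TOOLS = {
    "get_word_count": lambda text: str(len(text.split())),
}


def dispatch_tool_call(call: dict) -> dict:
    """Execute one model-requested tool call and build the reply message.

    `call` is assumed to look like:
    {"function": {"name": ..., "arguments": {...}}}
    """
    fn = call["function"]
    name = fn["name"]
    args = fn.get("arguments", {})
    if name not in TOOLS:
        result = f"unknown tool: {name}"
    else:
        result = TOOLS[name](**args)
    # Tool results are appended to the conversation as role="tool" messages
    # and sent back to the model on the next /api/chat request.
    return {"role": "tool", "content": result}
```

Because the dispatcher only looks up names in a fixed registry, an unrecognized tool call fails closed instead of executing anything.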
Install Mechanism
No install spec or external downloads are present; this is an instruction-only skill with helper scripts. No archives or third-party package installs are performed by the skill.
Credentials
Metadata declares no required env vars, but SKILL.md and the scripts expect OLLAMA_HOST (defaulting to http://localhost:11434); that mismatch is a minor inconsistency. More importantly, the scripts send model inputs and tool interactions to whatever address OLLAMA_HOST names: if you point it at a remote or untrusted host, user-provided content (and model/tool calls) will be transmitted off-host.
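Given that risk, one simple defensive measure is to verify that OLLAMA_HOST resolves to the loopback interface before sending anything. A sketch; the helper name is hypothetical, not something the skill ships:

```python
from urllib.parse import urlparse

# Loopback names; anything else means requests leave the machine.
LOCAL_HOSTNAMES = {"localhost", "127.0.0.1", "::1"}


def is_local_ollama(url: str) -> bool:
    """Return True only if the given OLLAMA_HOST URL targets loopback."""
    return urlparse(url).hostname in LOCAL_HOSTNAMES
```

A wrapper script could refuse to run, or at least warn, when this check fails.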
Persistence & Privilege
The skill does not request persistent/always-on privileges and does not modify other skills or system-wide configs itself. It is user-invocable and uses the normal agent invocation model.
Assessment
This skill appears to do what it says: local Ollama model management and tool-enabled inference. Before installing, check the following:

1. The scripts read an OLLAMA_HOST environment variable that the metadata does not declare. Make sure OLLAMA_HOST points to a trusted local host (the default) rather than an untrusted remote server, because all chat/generate/embed requests, including any tool-call content, are sent to that host.
2. SKILL.md suggests adding an OpenClaw auth profile (a harmless placeholder), which may prompt you to edit OpenClaw config. Only do that if you understand the change.
3. The included run_code tool is a simulated implementation and does not execute arbitrary code, but if you adapt the script, take care not to introduce real remote code execution.
4. There is no installer, so review the Python scripts before running them.

For higher assurance, ask the publisher to declare the required env var (OLLAMA_HOST) in the metadata and to confirm whether any OpenClaw config changes are made automatically or must be done manually.


Tags: ai · latest · llm · local · models · ollama · tool-use

