Ollama Herd
v1.5.2
Ollama multimodal model router for Llama, Qwen, DeepSeek, Phi, and Mistral, plus mflux image generation, speech-to-text, and embeddings. Self-hosted Ollama...
by Twin Geeks (@twinsgeeks)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign (high confidence)

Purpose & Capability
The name and description describe an Ollama fleet router, and the SKILL.md contains exactly the commands and API endpoints you'd expect for managing a local Ollama router (curl to localhost:11435, a pip-installable Python package, herd / herd-node commands). The required binaries (curl/wget, optional python3/pip/sqlite3) align with the documented workflows.
Instruction Scope
Instructions are scoped to interacting with a local Ollama Herd HTTP API (status, pull/delete models, settings, dashboard). The SKILL.md does not instruct reading unrelated system files or environment secrets, nor sending data to remote endpoints other than localhost. It does reference dashboard and DB/log paths in metadata but does not instruct broad file/system access.
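A quick way to confirm the scope claim yourself is to probe the documented local endpoint before letting the agent use it. The port (11435) is from the SKILL.md; the /status path here is an assumption for illustration only, so check the skill's docs for the real routes.

```shell
# Hypothetical status check against a local Ollama Herd instance.
# Port 11435 comes from the SKILL.md; the /status path is an assumed
# example route, not a documented one.
curl -sf http://localhost:11435/status \
  || echo "no Ollama Herd instance reachable on localhost:11435"
```

If the service is not running, the fallback message prints and nothing else is touched, which makes this a safe first check.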
Install Mechanism
There is no platform install spec in the registry (instruction-only), but the SKILL.md recommends pip install from PyPI (ollama-herd). Installing a third-party PyPI package is a normal way to obtain this tool, but it carries the usual supply-chain risk — users should verify the package/source (owner: geeks-accelerator on GitHub/PyPI) before installing.
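One way to act on that supply-chain caution is to fetch the package source without installing it, assuming the PyPI name ollama-herd from this listing, and read the code before any pip install:

```shell
# Download the ollama-herd source distribution into a review directory
# WITHOUT installing it or its dependencies, so the code can be
# inspected first. Package name "ollama-herd" is taken from the listing.
python3 -m pip download ollama-herd --no-deps --no-binary :all: -d ./ollama-herd-review
```

Only after comparing the downloaded sdist against the geeks-accelerator GitHub repo would you proceed to a normal install.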
Credentials
The skill does not request environment variables, credentials, or unrelated config paths. The metadata lists configPaths (~/.fleet-manager/latency.db and logs) which are consistent with a fleet manager but do not, by themselves, demand additional secrets or elevated access.
Persistence & Privilege
The always flag is false and the skill is user-invocable (normal). There is no indication that it modifies other skills or requests permanent system-level privileges. The primary risk is operational: the skill's documented API endpoints can manage models (pull/delete) and settings on any local Ollama Herd instance the agent can reach.
Assessment
This skill appears coherent with its stated purpose, but take these precautions before installing/using it:
- Verify the PyPI package and GitHub repo (geeks-accelerator/ollama-herd) to ensure the code is trustworthy before running pip install.
- The skill talks to http://localhost:11435 and includes endpoints that can pull or delete models and change settings — ensure that the Ollama Herd service is intended to be managed by this agent and that the service is bound to localhost or otherwise access-controlled. If the service is exposed externally without auth, the agent (or other processes) could modify your fleet.
- Confirm you want an AI agent to be able to invoke these management operations autonomously. While autonomous invocation is platform-default, permitting it means the skill could issue model pulls/deletes when the agent runs; restrict invocation if you want manual control.
- If you are cautious about supply-chain risk, manually inspect the PyPI package source code or run it in an isolated environment before giving it access to your production devices.
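The exposure concern in the precautions above can be checked directly by looking at which interface, if any, is listening on the Herd port. The `ss` invocation is Linux (iproute2); on macOS, `lsof -iTCP:11435 -sTCP:LISTEN` gives the same answer.

```shell
# Show listening TCP sockets on port 11435.
# A local address of 127.0.0.1 means localhost-only;
# 0.0.0.0 or :: means the service is exposed to the network.
ss -ltn | grep 11435 || echo "nothing listening on TCP 11435"
```

If the address column shows anything other than 127.0.0.1 (or ::1), add authentication or a firewall rule before letting an agent manage the fleet.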
Overall, the skill is internally consistent and does not request unrelated secrets or system access, but exercise standard caution around third-party package installation and local fleet management privileges.
Tags: apple-silicon · dashboard · deepseek · embeddings · fleet-management · fleet-routing · image-generation · inference-routing · latest · llama · local-ai · local-llm · mac-studio · mflux · mistral · model-management · model-router · monitoring · multimodal · multimodal-router · ollama · phi · qwen · qwen-asr · self-hosted · speech-to-text
Runtime requirements
Runtime: llama Clawdis
OS: macOS · Linux · Windows · Any
bin: curl, wget
