Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Ollama Herd
v1.5.3 · Ollama multimodal model router for Llama, Qwen, DeepSeek, Phi, and Mistral — plus mflux image generation, speech-to-text, and embeddings. Self-hosted Ollama...
⭐ 0 · 208 · 3 current · 3 all-time
by Twin Geeks (@twinsgeeks)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
Scanner: OpenClaw · Verdict: Suspicious (medium confidence)

Purpose & Capability
The name/description (an Ollama multimodal router/fleet manager) aligns with the SKILL.md content: it documents endpoints for fleet status, model pulls, health checks, etc. Requiring curl/wget and optionally python/pip/sqlite3 is reasonable for a CLI/HTTP-based local manager. However, the registry summary earlier listed no required config paths while the SKILL.md metadata declares configPaths (~/.fleet-manager/latency.db and ~/.fleet-manager/logs/herd.jsonl) — that discrepancy is unexplained and suggests the skill expects access to local files not declared in the registry.
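The configPaths discrepancy is easy to check locally before enabling the skill. A minimal sketch; the two paths are taken from the SKILL.md metadata quoted above, and simply reporting their presence tells you whether the skill would be reading pre-existing data or creating fresh files:

```shell
# Report whether a declared config path already exists on disk, so you know
# in advance what local state the skill could read or overwrite.
report_path() {
  if [ -e "$1" ]; then
    echo "exists: $1"
  else
    echo "absent: $1"
  fi
}

# Paths taken from the SKILL.md metadata described above.
report_path "$HOME/.fleet-manager/latency.db"
report_path "$HOME/.fleet-manager/logs/herd.jsonl"
```

If either path already exists, inspect its contents before installation so you know what data the skill could pick up.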
Instruction Scope
SKILL.md instructs the agent and user to pip install a PyPI package (ollama-herd), run herd and herd-node, and call many localhost:11435 endpoints (GET/POST) that can change router settings and manage models. Those actions are coherent with a fleet manager but are powerful: they install software, start services, and perform model pulls/deletes. The instructions reference local config/log paths in metadata although they never explicitly show reading them; this raises scope questions (will the agent read or modify those files?). Also the instructions assume an unauthenticated or local HTTP API at port 11435 — the security model for that API is not specified.
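Before letting an agent touch the state-changing endpoints, it is reasonable to probe the API with read-only GETs first. A sketch, assuming the router listens on 127.0.0.1:11435 as the instructions state; the /fleet/status path is an illustrative guess, not a documented route — substitute the routes actually listed in SKILL.md:

```shell
# Read-only probe: GET only, short timeout, never a state-changing POST.
probe() {
  curl -fsS --max-time 2 "$1" 2>/dev/null || echo "unreachable: $1"
}

# Endpoint path is hypothetical; replace with the routes documented in SKILL.md.
probe "http://127.0.0.1:11435/fleet/status"
```

If the probe succeeds without credentials, that confirms the API is unauthenticated, which makes the loopback-binding question in the checklist below more important.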
Install Mechanism
There is no formal install spec in the registry (instruction-only), but the SKILL.md instructs users/agents to run pip install ollama-herd from PyPI. Installing a third‑party PyPI package and running its daemons is a moderate-risk action — it executes upstream code on the host. Because the registry provides no pinned release, checksum, or local package bundle, you should verify the PyPI package and upstream source before installing.
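One way to act on that advice is to fetch the sdist without installing it and gate installation on a checksum. A sketch under stated assumptions: the package name comes from SKILL.md, the digest placeholder must come from the maintainer, and `sha256sum` is GNU coreutils (on macOS use `shasum -a 256`):

```shell
# Succeed only if the file's SHA-256 matches the expected digest.
verify_sha256() {
  [ "$(sha256sum "$1" | awk '{print $1}')" = "$2" ]
}

# Audit flow (requires network; commented out so nothing runs by accident):
#   pip download ollama-herd --no-deps --no-binary :all: -d /tmp/herd-audit
#   tar -tzf /tmp/herd-audit/*.tar.gz | head        # inspect the file list
#   verify_sha256 /tmp/herd-audit/*.tar.gz "<digest published by maintainer>" \
#     && pip install /tmp/herd-audit/*.tar.gz
```

Downloading the sdist rather than a wheel lets you read the source you would actually run, and installing from the verified local file closes the gap between what you reviewed and what pip installs.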
Credentials
The skill requests no environment variables or credentials, which is proportionate. It does, however, imply access to local ports and files (the metadata configPaths). The lack of declared required config paths in the registry vs. their presence in SKILL.md metadata is inconsistent and could hide file access expectations. No credentials are requested, but the API endpoints called can affect system state (pull/delete models), so local process permissions are effectively required.
Persistence & Privilege
The skill is not force-enabled (always: false) and permits normal autonomous invocation. It does not request to modify other skills or system-wide agent settings in the registry. The main persistence/privilege concern is operational: installing the package and running 'herd-node' will create long-running services and open local ports — appropriate for a fleet manager but something the user must opt into.
What to consider before installing
This skill appears to be a legitimate Ollama fleet manager, but verify the following before installing:

1) The SKILL.md directs 'pip install ollama-herd' and running daemon processes — review the PyPI package and the linked GitHub source (compare code, maintainer, recent activity, and release checksums) before installing.

2) Confirm the localhost API (port 11435) is intended to be unauthenticated or will be bound to loopback only; if it is exposed more widely, it could be abused.

3) The SKILL.md metadata lists local config files (~/.fleet-manager/latency.db and logs) even though the registry summary did not — clarify whether the skill will read/write those files and whether they may contain sensitive data.

4) Run initial tests in an isolated VM/container if possible, and back up any existing Ollama or fleet config.

5) If you need higher assurance, request a packaged install spec (pinned version, checksum) or a copy of the package contents so you or a reviewer can inspect what the installed code does (network calls, file I/O, subprocesses).

If the maintainer can confirm the security model for the HTTP API and publish a verifiable release, my confidence would increase.

Like a lobster shell, security has layers — review code before you run it.
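Item 2 can be checked mechanically once the daemons are running. A sketch using Linux `ss` from iproute2 (on macOS, `lsof -iTCP:11435 -sTCP:LISTEN` gives similar output); it warns if anything on the port listens beyond loopback:

```shell
# Print "ok" when no listener on the given port is bound to a non-loopback
# address; otherwise print a warning. Relies on Linux `ss` (iproute2).
check_loopback_only() {
  local port="$1"
  if ss -ltn 2>/dev/null | awk -v p=":$port" '$4 ~ (p "$") {print $4}' \
       | grep -qvE '^(127\.|\[::1\])'; then
    echo "warning: non-loopback listener on port $port"
  else
    echo "ok"
  fi
}

check_loopback_only 11435
```

Run this after starting 'herd-node'; a warning means the unauthenticated API is reachable from other hosts and should be rebound to 127.0.0.1 or firewalled.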
Runtime requirements
Agent: Clawdis
OS: macOS · Linux · Windows
Bin (any of): curl, wget
