Ollama Manager

v1.3.1

Manage Ollama models across your machines — see what's loaded, what's eating disk, what's never used, and what you should pull next. Get AI-powered recommend...

by Twin Geeks (@twinsgeeks)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: stale
OpenClaw: Benign (high confidence)
Purpose & Capability
The name/description (manage Ollama models across machines) matches the instructions: install the ollama-herd toolkit, run a herd router and herd-node, query router endpoints for model lists/disk usage, and query a local SQLite telemetry DB for usage/latency. The metadata's listed bins and configPaths (~/.fleet-manager/latency.db, ~/.fleet-manager/logs/herd.jsonl) are directly relevant to that purpose.
Instruction Scope
The SKILL.md tells the agent to: pip install a third-party package, run herd/herd-node daemons, curl localhost:11435 endpoints, and read/query ~/.fleet-manager/latency.db and logs. Those actions are within scope for fleet management, but they do give the skill access to local telemetry and allow it to trigger network activity (pull/delete models) via the router. The instructions do not request unrelated files, env vars, or remote endpoints beyond the documented local router and recommended PyPI package.
Install Mechanism
There is no formal install spec in the skill bundle (instruction-only). The SKILL.md recommends installing a PyPI package (pip install ollama-herd). Installing a package from PyPI is expected for this functionality but carries the usual risk that arbitrary code will be installed and run on the machine; this is proportionate to the stated purpose but worth vetting before install.
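One lightweight starting point for that vetting is PyPI's public JSON API (pypi.org/pypi/&lt;name&gt;/json). A sketch: the `info` and `releases` fields follow PyPI's JSON schema, while the summary keys below are my own naming, not anything the skill defines:

```python
import json
from urllib.request import urlopen

def pypi_metadata(package: str) -> dict:
    """Fetch a package's metadata from PyPI's JSON API."""
    with urlopen(f"https://pypi.org/pypi/{package}/json", timeout=10) as resp:
        return json.load(resp)

def provenance_summary(meta: dict) -> dict:
    """Pull out the fields most useful for a quick provenance check."""
    info = meta["info"]
    return {
        "name": info.get("name"),
        # PyPI packages populate author or maintainer inconsistently; take either.
        "author": info.get("author") or info.get("maintainer"),
        "project_urls": info.get("project_urls") or {},
        "release_count": len(meta.get("releases", {})),
    }

# e.g. provenance_summary(pypi_metadata("ollama-herd")) before running pip install
```

This only surfaces metadata; actually reading the sdist source (or the linked repo) is still the substantive review step.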
Credentials
The skill does not request environment variables, credentials, or unrelated config paths. The only declared config paths are local telemetry DB and logs used for usage/latency queries, which are appropriate for the stated diagnostics/recommendation features.
Persistence & Privilege
The skill is instruction-only and not marked always:true. It does not request persistent elevated platform privileges or access to other skills' configurations. Running the recommended herd daemon will create a persistent router process, which is normal for this functionality but is outside the skill bundle itself.
Assessment
This skill appears coherent for managing Ollama models, but it recommends installing and running a third-party PyPI package and a long-running 'herd' router that reads local telemetry and controls model pulls/deletes. Before installing or running it:
- Review the ollama-herd PyPI package and its GitHub repo for provenance and code behavior.
- Run the installation and the herd-node in an isolated/test environment (or container).
- Confirm the router's network binding and authentication (ensure it doesn't bind to 0.0.0.0 without auth).
- Inspect ~/.fleet-manager/latency.db and the logs in case they contain sensitive data.
- Avoid running installs as root.
If you cannot review the package, treat the pip install step as higher risk.
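For the network-binding check, a quick reachability probe is a useful complement to reading the router's config. A sketch: it only tells you whether a given interface accepts TCP connections on the port, not what authentication the router enforces once connected:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# If the router should be loopback-only, you'd expect
#   port_reachable("127.0.0.1", 11435)
# to succeed on the host itself, and the same probe against the host's
# LAN address (from another machine) to fail.
```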


Tags: cleanup, deepseek, delete, disk-usage, latest, llama, mistral, model-lifecycle, model-management, ollama, phi, pull, qwen, recommendations

