Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Home Lab AI

v1.0.1

Home lab AI — turn your spare Macs into a local AI home lab cluster. LLM inference, image generation, speech-to-text, and embeddings across Mac Studio, Mac M...

by Twin Geeks (@twinsgeeks)
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name/description (home-lab AI router) aligns with the instructions to run a router (herd) and nodes (herd-node) and to expose an OpenAI-compatible API on localhost:11435. Required binaries (curl/wget) and optional python3/pip are plausible. Minor inconsistency: the registry summary listed 'Required config paths: none', while the SKILL.md metadata declares configPaths (~/.fleet-manager/latency.db and logs/herd.jsonl). That mismatch should be clarified but does not on its face contradict the stated purpose.
Instruction Scope
SKILL.md tells the agent/operator to run 'pip install ollama-herd', start 'herd' and 'herd-node', and optionally run 'uv tool install ...'. All of that stays inside the advertised home-lab scope (local discovery via mDNS, local dashboard on :11435, model endpoints). The instructions do implicitly rely on network/mDNS discovery and will open local network ports; they do not attempt to read unrelated host files or request other credentials.
Install Mechanism
The skill is instruction-only (no install spec), but SKILL.md explicitly tells users to 'pip install ollama-herd' and to run 'uv tool install ...'. Those steps install third‑party packages on the host. Registry metadata did not declare an install mechanism; that mismatch means the skill could cause users to run installs that are not tracked by the registry. Pip installs (and unknown 'uv' tooling) can run arbitrary code during installation—verify package provenance (PyPI project page / GitHub repo / release signatures) before running.
Credentials
No environment variables or credentials are requested in the registry or SKILL.md. The service accepts any API key for its OpenAI-compatible endpoint (api_key 'not-needed'), which is coherent for a local-only router. Declared config paths are local (~/.fleet-manager/*) and consistent with storing service state; no unrelated credentials are requested.
Persistence & Privilege
The skill does not request 'always: true' and will not be force-included. It does instruct operators to run a long-running service ('herd') that listens on TCP port 11435 and performs mDNS discovery on the LAN. That is expected for a home-lab router but increases attack surface if the host's network is not isolated or firewall-protected (the API accepts any API key by default).
What to consider before installing
Before installing or running this skill:

1) Verify the pip package 'ollama-herd' source (PyPI project page and linked GitHub repo) and inspect the code or release artifacts if possible — pip installs can execute arbitrary code.
2) Run installs in a controlled environment (virtualenv or VM) and review what files are written (look for ~/.fleet-manager/*).
3) Understand that it starts a network service listening on port 11435 and uses mDNS for zero-config discovery — run it only on a trusted local network, and use host firewall rules if you don't want LAN access.
4) Investigate the 'uv tool' commands and where they pull models from before running them.
5) For higher assurance, clone and review the GitHub repo referenced in SKILL.md (https://github.com/geeks-accelerator/ollama-herd), or run the software in an isolated environment first.
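Point 3 above is easy to check up front. A minimal sketch (the function name is ours, not part of ollama-herd) that tests whether anything is already listening on the router port before you start 'herd':

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP service is accepting connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Before starting 'herd', confirm nothing else already holds the router port:
# is_port_open("127.0.0.1", 11435)
```

The same check, pointed at another machine on your LAN, also tells you whether the router is reachable from outside localhost — useful when deciding on firewall rules.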

Like a lobster shell, security has layers — review code before you run it.


Runtime requirements

Host: Clawdis
OS: macOS · Linux
Binaries (any): curl, wget

SKILL.md

Home Lab AI — Your Spare Macs Are a Cluster

You have Macs sitting around your home lab. A Mac Mini in the closet. A MacBook Pro on the desk. Maybe a Mac Studio doing light work. Together, your home lab has more compute than most cloud instances — you just need software that treats them as one home lab system.

Ollama Herd turns your home lab into a local AI cluster. One home lab endpoint, zero config, four model types.

What your home lab gets

Mac Mini (32GB)    ─┐
MacBook Pro (64GB)  ├──→  Home Lab Router (:11435)  ←──  Your apps / agents
Mac Studio (256GB) ─┘
  • Home lab LLM inference — Llama, Qwen, DeepSeek, Phi, Mistral, Gemma
  • Home lab image generation — Stable Diffusion 3, Flux, z-image-turbo
  • Home lab speech-to-text — Qwen3-ASR transcription
  • Home lab embeddings — nomic-embed-text, mxbai-embed for RAG

All routed to the best available home lab device automatically.

Home Lab Setup (5 minutes)

On every home lab Mac:

pip install ollama-herd    # Home lab AI router

Pick one home lab Mac as the router:

herd    # starts the home lab router

On every other home lab Mac:

herd-node    # joins the home lab fleet automatically

That's it. Home lab devices discover each other automatically on your local network. No IP addresses, no config files, no Docker, no Kubernetes.

Optional: add home lab image generation

uv tool install mflux           # Flux models (fastest for home labs)
uv tool install diffusionkit    # Stable Diffusion 3/3.5

Use Your Home Lab

Home lab LLM chat

from openai import OpenAI

# Home lab inference client
homelab_client = OpenAI(base_url="http://localhost:11435/v1", api_key="not-needed")
homelab_response = homelab_client.chat.completions.create(
    model="llama3.3:70b",
    messages=[{"role": "user", "content": "How do I set up a home lab NAS?"}],
    stream=True,
)
for chunk in homelab_response:
    print(chunk.choices[0].delta.content or "", end="")

Home lab image generation

curl -o homelab_output.png http://localhost:11435/api/generate-image \
  -H "Content-Type: application/json" \
  -d '{"model": "z-image-turbo", "prompt": "a cozy home lab with servers and RGB lighting", "width": 1024, "height": 1024}'

Home lab transcription

curl http://localhost:11435/api/transcribe -F "file=@homelab_standup.wav" -F "model=qwen3-asr"

Home lab knowledge base

curl http://localhost:11435/api/embed \
  -d '{"model": "nomic-embed-text", "input": "home lab networking and AI inference best practices"}'
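Once the embedding vectors come back, similarity search for RAG is plain math. A minimal sketch (cosine similarity and ranking — our helper code, not part of ollama-herd) for matching stored snippets against a query vector:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, corpus, k=3):
    """corpus: list of (text, vector) pairs. Returns the k most similar texts."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

Feed top_k the query embedding from /api/embed and your pre-embedded snippets, and you have a zero-dependency local knowledge base.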

How the Home Lab Routes Requests

The home lab router scores each device on 7 signals and picks the best one:

  • Thermal state: is the model already loaded (hot), or does it need cold-loading?
  • Memory fit: does the device have enough RAM for this model?
  • Queue depth: is the device already busy with other requests?
  • Wait time: how long has the request been waiting?
  • Role affinity: big models prefer big machines, small models prefer small ones
  • Availability trend: is this device reliably available at this time of day?
  • Context fit: does the loaded context window fit the request?

You don't manage any of this. The home lab router handles it.
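As an illustration only — the actual herd scoring weights are not documented here — a device score built from signals like these might look as follows (all weights invented):

```python
def score_device(device: dict, model: str, model_ram_gb: float) -> float:
    """Toy scoring sketch: higher is better. Weights are illustrative, not herd's."""
    if device.get("free_ram_gb", 0) < model_ram_gb:
        return float("-inf")                       # memory fit: cannot run at all
    score = 2.0                                    # base score for a viable device
    if model in device.get("loaded_models", []):
        score += 3.0                               # thermal state: model is hot
    score -= 0.5 * device.get("queue_depth", 0)    # queue depth: penalize busy devices
    return score

def pick_device(devices, model, model_ram_gb):
    """Route the request to the best-scoring device."""
    return max(devices, key=lambda d: score_device(d, model, model_ram_gb))
```

The real router folds in wait time, role affinity, availability trends, and context fit as well, but the shape is the same: score every node, send the request to the winner.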

The Home Lab Dashboard

Open http://localhost:11435/dashboard in your browser — your home lab command center:

  • Home Lab Fleet Overview — see every device, loaded models, queue depths, health
  • Trends — home lab requests per hour, latency, token throughput over 24h-7d
  • Health — 11 automated home lab checks with recommendations
  • Recommendations — optimal home lab model mix per device based on your hardware

Recommended Home Lab Models by Device

  • MacBook Air (8GB): phi4-mini, gemma3:1b
  • Mac Mini (16GB): phi4, gemma3:4b, nomic-embed-text
  • Mac Mini (32GB): qwen3:14b, deepseek-r1:14b
  • MacBook Pro (64GB): qwen3:32b, codestral, z-image-turbo
  • Mac Studio (128GB): llama3.3:70b, qwen3:72b
  • Mac Studio (256GB): gpt-oss:120b, sd3.5-large

The home lab router's model recommender suggests the optimal mix: GET /dashboard/api/recommendations.
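The table above reduces to a simple threshold lookup — a sketch of the idea (the recommendations endpoint presumably does something smarter with live hardware data):

```python
# RAM threshold (GB) -> starter models, taken from the table above.
RECOMMENDED = [
    (8, ["phi4-mini", "gemma3:1b"]),
    (16, ["phi4", "gemma3:4b", "nomic-embed-text"]),
    (32, ["qwen3:14b", "deepseek-r1:14b"]),
    (64, ["qwen3:32b", "codestral", "z-image-turbo"]),
    (128, ["llama3.3:70b", "qwen3:72b"]),
    (256, ["gpt-oss:120b", "sd3.5-large"]),
]

def starter_models(ram_gb: int) -> list:
    """Return the starter models for the largest RAM tier the device meets."""
    chosen = []
    for threshold, models in RECOMMENDED:
        if ram_gb >= threshold:
            chosen = models
    return chosen
```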

Works with Every Home Lab Tool

The home lab fleet exposes an OpenAI-compatible API. Any tool that works with OpenAI works with your home lab:

  • Open WebUI: set the Ollama URL to http://homelab-router:11435
  • Aider: aider --openai-api-base http://homelab-router:11435/v1
  • Continue.dev: base URL http://homelab-router:11435/v1
  • LangChain: ChatOpenAI(base_url="http://homelab-router:11435/v1")
  • CrewAI: set OPENAI_API_BASE=http://homelab-router:11435/v1
  • Any OpenAI SDK: base URL http://homelab-router:11435/v1, API key can be any string

Full documentation

Contribute

Ollama Herd is open source (MIT) and built by home lab enthusiasts for home lab enthusiasts:

  • Star on GitHub — help other home lab builders find us
  • Open an issue — share your home lab setup, report bugs
  • PRs welcome — from humans and AI agents. CLAUDE.md gives full context.
  • Built by twin brothers in Alaska who run their own home lab Mac fleet.

Home Lab Guardrails

  • No automatic downloads — home lab model pulls require explicit user confirmation. Some models are 70GB+.
  • Home lab model deletion requires explicit user confirmation.
  • All home lab requests stay local — no data leaves your home network.
  • Never delete or modify files in ~/.fleet-manager/ (home lab routing data and logs).
  • No cloud dependencies — your home lab works offline after initial model downloads.
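The "explicit confirmation" guardrail for model pulls can be sketched as a small gate. This is our illustration, not the actual herd implementation; injecting the ask callable keeps it testable:

```python
def confirm_pull(model: str, size_gb: float, ask=input) -> bool:
    """Require an explicit 'y' before a large model download proceeds."""
    answer = ask(f"Pull {model} ({size_gb:.0f} GB)? This may take a while. [y/N] ")
    return answer.strip().lower() == "y"

# In a real flow, only trigger the download when confirm_pull(...) returns True;
# anything other than an explicit 'y' (including just pressing Enter) declines.
```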
