Mac Mini AI — Mac Mini Local LLM, Image Gen, STT on Apple Silicon

v1.0.1

Mac Mini AI — run LLMs, image generation, speech-to-text, and embeddings on your Mac Mini. M4 (16-32GB) and M4 Pro (48-64GB) configurations make the Mac Mini...

by Twin Geeks (@twinsgeeks)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for twinsgeeks/mac-mini-ai.

Prompt preview: Install & Setup
Install the skill "Mac Mini AI — Mac Mini Local LLM, Image Gen, STT on Apple Silicon" (twinsgeeks/mac-mini-ai) from ClawHub.
Skill page: https://clawhub.ai/twinsgeeks/mac-mini-ai
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install mac-mini-ai

ClawHub CLI


npx clawhub@latest install mac-mini-ai
Security Scan

  • VirusTotal: Pending
  • OpenClaw: Benign (medium confidence)
Purpose & Capability
The skill is an instruction-only guide for running a local fleet (herd/herd-node) and references installing a PyPI package (ollama-herd), running local HTTP endpoints, and using local model tooling — which matches the stated purpose. However, metadata marks python3/pip as optional while SKILL.md explicitly instructs 'pip install ollama-herd', so pip should be required; this mismatch is a minor coherence issue.
Instruction Scope
Instructions stick to setting up a local service, using local HTTP endpoints (localhost:11435), and interacting with models. They do not instruct reading unrelated system files or exporting data. Small scope ambiguities: the fleet 'automatic discovery' mechanism is not described (it may perform LAN discovery/multicast or open ports), and the 'uv tool install mflux' command is not explained (the 'uv' tool is undefined here). Also several examples call python3 utilities even though python3/pip were listed optional.
Install Mechanism
This is an instruction-only skill (no install spec), so it doesn't install code itself. It instructs the user to pip install a package from PyPI, which is a normal way to install CLI tools but does execute third-party code on install — users should inspect the package/repo before running. No arbitrary download URLs or archive extraction are embedded in the skill.
Credentials
The skill does not request environment variables or secret credentials and only references local config paths (~/.fleet-manager/*) which are relevant to the fleet. That proportionality is appropriate for the described functionality.
Persistence & Privilege
The skill does not request always:true, does not request system-wide changes in its instructions, and is user-invocable only. The skill starts local services (herd/herd-node) which is expected for this purpose but carries the usual runtime privilege of any local server process.
Assessment
This skill is largely coherent with its purpose, but check a few things before installing:

  • The SKILL.md tells you to run 'pip install ollama-herd'. Inspect the PyPI package and its GitHub repo (https://github.com/geeks-accelerator/ollama-herd) to ensure the code matches expectations.
  • Confirm how the fleet 'discovery' works (it may open LAN ports or use multicast); if you need to limit network exposure, run in an isolated network or firewall the service.
  • The metadata lists python3/pip as optional even though examples use them; ensure you have a safe Python environment (virtualenv) before installing.
  • The 'uv tool install mflux' command is not explained here. Verify what 'uv' is and where that tool comes from.
  • Running the herd will start local servers on port 11435. Review the service config and keep model downloads/installs manual as recommended.

If these checks look good, the skill appears to do what it claims; otherwise treat it cautiously and inspect the code before installing.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Computer: Clawdis
OS: macOS
Binaries (any): curl, wget

Tags: affordable-ai, apple-silicon, embedding, fleet, homelab, latest, llm, local-ai, m4, m4-pro, mac-cluster, mac-mini, ollama, phi4, self-hosted
104 downloads · 2 stars · 2 versions · updated 4w ago
v1.0.1 · MIT-0 · macOS

Mac Mini AI — The $599 AI Node

The Mac Mini is the most cost-effective hardware for local AI. Starting at $599 with 16GB of unified memory, it runs 7B-14B models comfortably. Stack three Mac Minis for the cost of one month of cloud GPU rental — and they run forever with zero ongoing costs.

This skill turns one Mac Mini into an AI server and multiple Mac Minis into a fleet.

Mac Mini configurations for AI

| Config | Chip | Unified Memory | Price | LLM Sweet Spot |
| --- | --- | --- | --- | --- |
| Mac Mini M4 (16GB) | M4 | 16GB | $599 | 3B-7B models (phi4-mini, llama3.2:3b) |
| Mac Mini M4 (24GB) | M4 | 24GB | $799 | 7B-14B models (phi4, gemma3:12b) |
| Mac Mini M4 (32GB) | M4 | 32GB | $999 | 14B-22B models (qwen3:14b, codestral) |
| Mac Mini M4 Pro (48GB) | M4 Pro | 48GB | $1,399 | 22B-32B models (qwen3:32b) |
| Mac Mini M4 Pro (64GB) | M4 Pro | 64GB | $1,799 | 32B-70B models (llama3.3:70b quantized) |
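The sweet spots above follow a simple memory rule of thumb. This sketch makes it explicit; the 0.6 GB-per-billion-parameters factor for Q4 quantization and the reserve values are assumptions, not measurements, and real usage varies with quant type and context length:

```python
def q4_footprint_gb(params_b: float, overhead_gb: float = 2.0) -> float:
    """Very rough memory footprint of a Q4-quantized model.

    ~0.6 GB per billion parameters is an assumed rule of thumb,
    plus a couple of GB for KV cache and runtime overhead.
    """
    return params_b * 0.6 + overhead_gb

def fits_in_ram(params_b: float, ram_gb: int, os_reserve_gb: int = 6) -> bool:
    """True if the model plausibly fits after reserving memory for macOS."""
    return q4_footprint_gb(params_b) <= ram_gb - os_reserve_gb
```

For example, a 14B model needs roughly 10-11GB by this estimate, which is why 24GB is the single-model sweet spot, while a quantized 70B model only becomes plausible at 64GB.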

The Mac Mini fleet strategy

Three Mac Minis (32GB each) for $3,000 give you:

  • 96GB total unified memory across the fleet
  • Each runs a different model simultaneously
  • The router picks the best device for every request
  • $0/month after purchase — no cloud API costs
Mac Mini #1 (32GB) — llama3.3:70b (quantized)  ─┐
Mac Mini #2 (32GB) — codestral + phi4            ├──→  Router  ←──  Your apps
Mac Mini #3 (32GB) — qwen3:14b + embeddings     ─┘
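The router's device selection can be pictured with a small sketch. This is illustrative only, not ollama-herd's actual algorithm: prefer nodes that already have the requested model in memory, then break ties by queue depth.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    loaded: set = field(default_factory=set)  # models currently in memory
    queue: int = 0                            # pending requests

def pick_node(nodes: list, model: str) -> Node:
    """Prefer nodes with the model already loaded; among those, least queued."""
    warm = [n for n in nodes if model in n.loaded]
    return min(warm or nodes, key=lambda n: n.queue)
```

With the three-mini fleet above, a phi4 request would land on whichever of minis #2 and #3 has phi4 warm and the shortest queue, avoiding a cold model load on the others.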

Setup

pip install ollama-herd    # PyPI: https://pypi.org/project/ollama-herd/

On one Mac Mini (the router):

herd

On every other Mac Mini:

herd-node

Devices discover each other automatically. No IP configuration, no Docker, no Kubernetes.

Use your Mac Mini

Chat with an LLM

from openai import OpenAI

client = OpenAI(base_url="http://localhost:11435/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="phi4",
    messages=[{"role": "user", "content": "Write a Python web scraper"}],
    stream=True,
)
for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")

Ollama API

curl http://localhost:11435/api/chat -d '{
  "model": "gemma3:12b",
  "messages": [{"role": "user", "content": "Explain recursion simply"}],
  "stream": false
}'
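The same chat call works from Python's standard library, with no openai package needed. The payload mirrors the curl example above; the response parsing assumes the herd returns Ollama's standard non-streaming shape ("message" → "content"), which you should verify against the herd docs:

```python
import json
import urllib.request

def build_chat_payload(prompt: str, model: str = "gemma3:12b") -> dict:
    """Payload matching the /api/chat curl example."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def ollama_chat(prompt: str, model: str = "gemma3:12b",
                base: str = "http://localhost:11435") -> str:
    """POST a non-streaming chat request and return the reply text."""
    req = urllib.request.Request(
        f"{base}/api/chat",
        data=json.dumps(build_chat_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```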

Image generation (optional)

uv tool install mflux    # Install on any Mac Mini
curl -o art.png http://localhost:11435/api/generate-image \
  -H "Content-Type: application/json" \
  -d '{"model": "z-image-turbo", "prompt": "a stack of Mac Minis glowing", "width": 512, "height": 512}'

Speech-to-text

curl http://localhost:11435/api/transcribe -F "file=@meeting.wav" -F "model=qwen3-asr"

Embeddings for RAG

curl http://localhost:11435/api/embed \
  -d '{"model": "nomic-embed-text", "input": "Mac Mini home server local AI"}'

Best models for Mac Mini

| RAM | Best models | Why |
| --- | --- | --- |
| 16GB | phi4-mini (3.8B), gemma3:4b, nomic-embed-text | Small but capable, leaves room for OS |
| 24GB | phi4 (14B), gemma3:12b, codestral | Sweet spot for single-model use |
| 32GB | qwen3:14b, deepseek-r1:14b, codestral + phi4-mini | Two models simultaneously |
| 48GB | qwen3:32b, deepseek-r1:32b | Larger models, great quality |
| 64GB | llama3.3:70b (quantized) | Near-frontier quality on a Mac Mini |

Monitor your Mac Mini fleet

Dashboard at http://localhost:11435/dashboard — see every Mac Mini's status, loaded models, and queue depths.

# Fleet overview
curl -s http://localhost:11435/fleet/status | python3 -m json.tool

# Model recommendations for your hardware
curl -s http://localhost:11435/dashboard/api/recommendations | python3 -m json.tool
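If you want the fleet overview in a script rather than curl, you can fetch and condense the status JSON. The node fields used here ('name', 'models', 'queue') are assumed for illustration; check the actual /fleet/status response shape before relying on them:

```python
import json
import urllib.request

def fetch_fleet_status(base: str = "http://localhost:11435") -> dict:
    """GET the fleet status JSON from the router."""
    with urllib.request.urlopen(f"{base}/fleet/status") as resp:
        return json.loads(resp.read())

def summarize_fleet(status: dict) -> str:
    """One line per node. Field names are assumed, not documented."""
    lines = []
    for node in status.get("nodes", []):
        models = ", ".join(node.get("models", [])) or "idle"
        lines.append(f"{node['name']}: {models} (queue={node.get('queue', 0)})")
    return "\n".join(lines)
```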

Works with any OpenAI-compatible tool

| Tool | Connection |
| --- | --- |
| Open WebUI | Ollama URL: http://mac-mini-ip:11435 |
| Aider | aider --openai-api-base http://mac-mini-ip:11435/v1 |
| Continue.dev | Base URL: http://mac-mini-ip:11435/v1 |
| LangChain | ChatOpenAI(base_url="http://mac-mini-ip:11435/v1") |

Full documentation

Contribute

Ollama Herd is open source (MIT). Built for the Mac Mini fleet community:

  • Star on GitHub — help other Mac Mini owners find us
  • Open an issue — share your Mac Mini fleet setup
  • PRs welcome from humans and AI agents. CLAUDE.md gives full context.
  • Running a Mac Mini cluster? We'd love to hear about it.

Guardrails

  • No automatic downloads — model pulls require explicit user confirmation.
  • Model deletion requires explicit user confirmation.
  • All requests stay local — no data leaves your network.
  • Never delete or modify files in ~/.fleet-manager/.
