Stable Diffusion Sd3

v1.0.2

Stable Diffusion 3 and SD3.5 Large on Apple Silicon — generate Stable Diffusion images locally with DiffusionKit's MLX-native backend. SD3 Medium for fast St...

by Twin Geeks (@twinsgeeks)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for twinsgeeks/stable-diffusion-sd3.

Prompt preview (Install & Setup):
Install the skill "Stable Diffusion Sd3" (twinsgeeks/stable-diffusion-sd3) from ClawHub.
Skill page: https://clawhub.ai/twinsgeeks/stable-diffusion-sd3
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install stable-diffusion-sd3

ClawHub CLI

Package manager switcher

npx clawhub@latest install stable-diffusion-sd3
Security Scan
VirusTotal: Pending
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description (local Stable Diffusion on Apple Silicon, fleet routing) align with the instructions: examples use a local router on http://localhost:11435, recommend installing ollama-herd, diffusionkit, and mflux. Declared required binaries (curl/wget, optional python3/pip) match the documented commands. The metadata's configPaths (~/.fleet-manager/...) are consistent with a fleet router but are not surprising for this purpose.
Instruction Scope
SKILL.md stays within purpose: it instructs installing a fleet router (herd/herd-node), installing backends (DiffusionKit, mflux), and calling local HTTP endpoints for image generation/monitoring. It does not ask to read unrelated user data or external secrets. Note: the instructions require downloading model weights (HuggingFace) and running install/patch scripts—these are expected for model usage but involve substantial network I/O and running third‑party code.
Install Mechanism
The skill is instruction-only (no install spec). However, the guide tells users to run pip install ollama-herd and uv tool install diffusionkit, which will fetch and execute third‑party packages/binaries at install time. This is normal for such tooling but increases the attack surface compared to a purely local-only script; users should verify the provenance of those packages and scripts.
Credentials
The skill requests no environment variables or credentials. All runtime interactions are local (localhost) or involve downloading model weights from known model hosts (HuggingFace) as part of normal operation. There are no unrelated secret requests.
Persistence & Privilege
always:false and no special privileges requested. The skill does not instruct modifying other skills or system-wide configurations beyond installing tools for the router and node components; autonomous invocation is allowed but that is the platform default.
Assessment
This skill appears internally consistent with its goal of running Stable Diffusion locally, but before installing or running anything: (1) review the PyPI package 'ollama-herd' source and any 'uv tool' provider to ensure you trust them; (2) inspect any provided patch scripts (e.g., patch-diffusionkit-macos26.sh) before executing; (3) expect large downloads (2–8GB model weights) and significant RAM usage—run on an isolated or well-backed-up machine if concerned; (4) the router opens a local HTTP port (11435) — confirm it is bound only to localhost or properly firewalled if you do not want other LAN hosts to access it; (5) if you use private HuggingFace assets, verify whether authentication is needed and handle tokens separately; and (6) consider running installs in a virtualenv or dedicated environment to limit accidental system-wide changes.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

OS: macOS
Binaries: curl, wget
Latest: vk97bh8xr1yq5wvfqpjycb168b1845s85
202 downloads · 2 stars · 3 versions
Updated 3w ago
v1.0.2 · MIT-0

Stable Diffusion 3 — Local Image Generation on Your Fleet

Run Stable Diffusion 3 Medium and Stable Diffusion 3.5 Large (SD3.5) on your own Apple Silicon hardware. DiffusionKit provides MLX-native Stable Diffusion inference — no CUDA, no cloud, no per-image costs. The fleet router picks the best device for every Stable Diffusion generation request.

Stable Diffusion Supported Models

| Model | Backend | Speed (M3 Ultra) | Peak RAM | Quality |
|---|---|---|---|---|
| SD3 Medium | DiffusionKit | ~9s (512px) | 3.5GB | Good — fast Stable Diffusion iterations |
| SD3.5 Large | DiffusionKit | ~67s (512px) | 11.6GB | Highest — Stable Diffusion with T5 encoder |
| z-image-turbo | mflux | ~7s (512px) | 4GB | Good — fastest option |
| flux-dev | mflux | ~30s (1024px) | 6GB | High — detailed output |
| x/z-image-turbo | Ollama native | ~19s (1024px) | 12GB | Good — experimental |
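As a rough illustration of the trade-offs in the table above, a helper that picks the largest model fitting a RAM budget might look like this. The RAM figures are copied from this page; the helper itself is a hypothetical sketch (treating higher peak RAM as a proxy for quality), not part of the router's API:

```python
# Peak RAM figures (GB) taken from the model table above; illustrative only.
MODEL_PEAK_RAM_GB = {
    "sd3-medium": 3.5,
    "sd3.5-large": 11.6,
    "z-image-turbo": 4.0,
    "flux-dev": 6.0,
}

def pick_model(ram_budget_gb):
    """Return the heaviest (roughly highest-quality) model that fits the budget."""
    candidates = [m for m, ram in MODEL_PEAK_RAM_GB.items() if ram <= ram_budget_gb]
    if not candidates:
        raise ValueError(f"No model fits within {ram_budget_gb}GB")
    return max(candidates, key=MODEL_PEAK_RAM_GB.get)

print(pick_model(8))    # flux-dev fits in 8GB; sd3.5-large does not
print(pick_model(16))   # sd3.5-large
```

On a 16GB machine, leaving headroom for the OS and any resident LLMs is advisable, since the peak RAM numbers above are for generation alone.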

Stable Diffusion Setup

pip install ollama-herd    # Stable Diffusion fleet router from PyPI
herd                       # start the Stable Diffusion router (port 11435)
herd-node                  # run on each device — finds the router for Stable Diffusion routing

Install DiffusionKit for Stable Diffusion models

uv tool install diffusionkit    # Stable Diffusion 3 and SD3.5 backend

macOS 26 users: Apply a one-time patch for Stable Diffusion compatibility:

./scripts/patch-diffusionkit-macos26.sh

First Stable Diffusion run downloads model weights from HuggingFace (~2-8GB depending on SD3 model). No models are downloaded during installation — all Stable Diffusion pulls are user-initiated.

Install mflux for Flux models (optional, recommended alongside Stable Diffusion)

uv tool install mflux

The router prefers mflux over Ollama native for shared models to avoid evicting LLMs from memory during Stable Diffusion workloads.

Generate Stable Diffusion Images

Stable Diffusion 3 Medium (fast SD3 generation)

curl -o sd3_cityscape.png http://localhost:11435/api/generate-image \
  -H "Content-Type: application/json" \
  -d '{"model": "sd3-medium", "prompt": "Stable Diffusion rendering a futuristic cityscape at dusk", "width": 1024, "height": 1024, "steps": 20}'

Stable Diffusion 3.5 Large (highest quality SD3)

curl -o sd3_portrait.png http://localhost:11435/api/generate-image \
  -H "Content-Type: application/json" \
  -d '{"model": "sd3.5-large", "prompt": "Stable Diffusion oil painting portrait, dramatic lighting", "width": 1024, "height": 1024, "steps": 30}'

Stable Diffusion Python Integration

import httpx

def generate_stable_diffusion(prompt, model="sd3-medium", width=1024, height=1024):
    """Generate an image using Stable Diffusion SD3 via the fleet router."""
    sd3_response = httpx.post(
        "http://localhost:11435/api/generate-image",
        json={"model": model, "prompt": prompt, "width": width, "height": height, "steps": 20},
        timeout=180.0,
    )
    sd3_response.raise_for_status()
    return sd3_response.content  # Stable Diffusion PNG bytes

# Quick Stable Diffusion iteration with SD3 Medium
sd3_png = generate_stable_diffusion("a robot painting a sunset in Stable Diffusion style")
with open("stable_diffusion_output.png", "wb") as f:
    f.write(sd3_png)
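When iterating over many prompts and seeds, a small helper to derive output filenames from prompts keeps results organized. This is purely a convenience sketch, not part of the skill or the router:

```python
import re

def prompt_to_filename(prompt, seed=None, ext="png"):
    """Turn a prompt into a filesystem-safe filename, optionally tagged with the seed."""
    slug = re.sub(r"[^a-z0-9]+", "_", prompt.lower()).strip("_")[:60]
    suffix = f"_seed{seed}" if seed is not None else ""
    return f"{slug}{suffix}.{ext}"

print(prompt_to_filename("A robot painting a sunset", seed=42))
# → a_robot_painting_a_sunset_seed42.png
```

Tagging filenames with the seed pairs naturally with the `seed` parameter below, so a good result can be regenerated at a different size or step count.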

Stable Diffusion Parameters

| Parameter | Default | Description |
|---|---|---|
| model | (required) | sd3-medium, sd3.5-large, z-image-turbo, flux-dev, flux-schnell |
| prompt | (required) | Text description of the image |
| width | 1024 | Image width in pixels |
| height | 1024 | Image height in pixels |
| steps | 4 | Inference steps (20-30 recommended for SD3) |
| guidance | (model default) | Guidance scale |
| seed | (random) | Seed for reproducible output |
| negative_prompt | "" | What to avoid in the generated image |
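Putting the parameters together, a request body for reproducible output can be assembled like this. Field names follow the table above; the builder function itself is a sketch, not part of the router's API:

```python
def build_payload(model, prompt, width=1024, height=1024, steps=20,
                  seed=None, guidance=None, negative_prompt=None):
    """Assemble a generate-image request body, omitting unset optional fields."""
    payload = {"model": model, "prompt": prompt,
               "width": width, "height": height, "steps": steps}
    if seed is not None:
        payload["seed"] = seed            # fixed seed -> reproducible output
    if guidance is not None:
        payload["guidance"] = guidance
    if negative_prompt:
        payload["negative_prompt"] = negative_prompt
    return payload

payload = build_payload("sd3.5-large", "oil painting portrait, dramatic lighting",
                        steps=30, seed=1234)
print(payload)
```

POST the result as JSON to http://localhost:11435/api/generate-image, exactly as in the curl examples above.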

Monitor Stable Diffusion Generation

# Stable Diffusion generation stats (last 24h)
curl -s http://localhost:11435/dashboard/api/image-stats | python3 -m json.tool

# Which nodes have Stable Diffusion models
curl -s http://localhost:11435/fleet/status | python3 -c "
import sys, json
# Stable Diffusion node inspection
for n in json.load(sys.stdin).get('nodes', []):
    img = n.get('image', {})
    if img:
        sd3_models = [m['name'] for m in img.get('models_available', [])]
        print(f'{n[\"node_id\"]}: {sd3_models}')
"
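The same node inspection can be factored into a reusable function. The sample response below is hypothetical but mirrors the /fleet/status shape assumed by the inline script above:

```python
def nodes_with_image_models(status):
    """Map node_id -> available image model names from a /fleet/status response."""
    result = {}
    for n in status.get("nodes", []):
        img = n.get("image", {})
        if img:
            result[n["node_id"]] = [m["name"] for m in img.get("models_available", [])]
    return result

# Hypothetical sample mirroring the documented response shape.
sample = {"nodes": [
    {"node_id": "mac-studio", "image": {"models_available": [{"name": "sd3-medium"}]}},
    {"node_id": "macbook", "image": {}},
]}
print(nodes_with_image_models(sample))  # {'mac-studio': ['sd3-medium']}
```

Nodes without an `image` section (like the second entry) are skipped, so the output lists only nodes that can serve image requests.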

Web dashboard at http://localhost:11435/dashboard — Stable Diffusion queues show with [IMAGE] badge alongside LLM queues.

Also Available on This Fleet

LLM inference alongside Stable Diffusion

Llama 3.3, Qwen 3.5, DeepSeek-V3, DeepSeek-R1 — any Ollama model through the same router that handles Stable Diffusion.

Speech-to-text

curl http://localhost:11435/api/transcribe -F "file=@recording.wav" -F "model=qwen3-asr"

Embeddings

curl http://localhost:11435/api/embed \
  -d '{"model": "nomic-embed-text", "input": "Stable Diffusion 3 image generation on Apple Silicon"}'

Full Stable Diffusion Documentation

Contribute

Ollama Herd is open source (MIT). We welcome contributions from both humans and AI agents:

  • GitHub — star the repo, open issues, submit PRs
  • 444 tests, fully async Python, Pydantic v2 models
  • CLAUDE.md provides full context for AI agents

Stable Diffusion Guardrails

  • No automatic downloads — Stable Diffusion model weights are downloaded on first use, not during installation. All SD3 pulls require user confirmation.
  • Stable Diffusion model deletion requires explicit user confirmation.
  • Never delete or modify files in ~/.fleet-manager/ (contains Stable Diffusion routing data).
  • All Stable Diffusion requests stay local — no data leaves your network.
