MFlux Skill for OpenClaw

v0.1.0

Local image generation using Apple MLX via mflux — FLUX.2 Klein 4B (fast, Apache 2.0) and Z-Image Turbo (quality) models

by Pankaj Jain (@pjain)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for pjain/mflux.

Prompt Preview: Install & Setup
Install the skill "MFlux Skill for OpenClaw" (pjain/mflux) from ClawHub.
Skill page: https://clawhub.ai/pjain/mflux
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install pjain/mflux

ClawHub CLI


npx clawhub@latest install mflux
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (local image generation on Apple Silicon) match the instructions: install mflux (via pip or uv) and run the mflux CLI or Python API. The stated platform and RAM requirements are appropriate for the claimed capabilities.
Instruction Scope
SKILL.md stays on-topic (installing and using mflux, model selection, quantization, LoRA, and image-to-image workflows). It implicitly requires network access to download models, which are cached in ~/.cache/huggingface/hub/, and may require a Hugging Face token for private models; this is expected but worth noting.
Install Mechanism
No install spec in the registry; SKILL.md recommends installing via uv (a third‑party tool) or pip. No direct downloads from unknown URLs or archive extraction are included in the skill files themselves.
Credentials
The skill declares no required environment variables or credentials. The instructions do not request secrets. A possible real‑world need for HF credentials (only for private Hugging Face models) is noted in the SKILL.md behavior and is proportionate to model downloads.
Persistence & Privilege
Skill does not request always:true, does not claim persistent elevated privileges, and is user‑invocable. It does instruct installing third‑party software (mflux) which will persist as any pip/uv install would.
Assessment
This skill appears coherent, but before installing:

  1. Verify the origin and integrity of the mflux package (PyPI project page or official repo) rather than blindly running installs.
  2. Confirm what uv is and prefer a controlled environment (virtualenv) when using pip.
  3. Be prepared for large model downloads and ~8–32GB+ of storage usage (models are cached in ~/.cache/huggingface/hub/).
  4. Some models or LoRA files may be third-party or require a Hugging Face token for private repos; only provide tokens if you trust the model source.
  5. Review the licenses of the models you plan to use (SKILL.md lists some as "Custom").
  6. Avoid LoRA/safetensors files from untrusted sources.

If you want higher assurance, ask the publisher for a source repository or official homepage and inspect the mflux project directly before installation.

Like a lobster shell, security has layers — review code before you run it.

Latest: vk97ekpy5mn3scd7mq9997w47bn82a0rr
374 downloads · 0 stars · 1 version
Updated 1mo ago · v0.1.0 · MIT-0

SKILL.md — mflux

Name

mflux

Description

Generate images locally using Apple Silicon via the mflux MLX implementation. Supports FLUX.2-klein-4B (default, fastest 4-step generation, Apache 2.0 licensed) and Z-Image-Turbo (6B, highest quality). All processing is on-device — no cloud, no API keys, no data leaving your Mac.

Requirements

  • Apple Silicon Mac (M1 or later)
  • Python 3.10+
  • macOS 13.5+
  • Recommended: 16GB+ RAM (8GB works with quantization)
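A quick way to sanity-check the platform requirements above before installing is a short script. This is purely illustrative (not part of mflux); it checks only what the standard library can see, so verify macOS version and RAM manually:

```python
import platform
import sys

def check_requirements() -> dict:
    """Report whether this machine looks like it meets the mflux requirements."""
    return {
        # Apple Silicon Macs report system 'Darwin' and machine 'arm64'
        "apple_silicon": platform.system() == "Darwin"
        and platform.machine() == "arm64",
        # Python 3.10+
        "python_ok": sys.version_info >= (3, 10),
    }

if __name__ == "__main__":
    for name, ok in check_requirements().items():
        print(f"{name}: {'OK' if ok else 'check manually'}")
```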

Installation

1. Install mflux (via uv - recommended)

uv tool install --upgrade mflux --prerelease=allow

With faster downloads (optional):

uv tool install --upgrade mflux --with hf_transfer --prerelease=allow

2. Alternative: Install via pip

pip install -U mflux

3. Verify installation

mflux-generate --help
mflux-generate-z-image-turbo --help
mflux-generate-flux2 --help

Python API Usage

Quick Start — FLUX.2 Klein 4B (Default, Fastest)

from mflux.models.flux2.variants import Flux2Klein
from mflux.models.common.config import ModelConfig

model = Flux2Klein(model_config=ModelConfig.flux2_klein_4b())
image = model.generate_image(
    prompt="A serene Japanese garden with cherry blossoms, golden afternoon light",
    num_inference_steps=4,  # Only 4 steps needed!
    width=1024,
    height=768,
    seed=42,
)
image.save("garden.png")

Z-Image Turbo (Highest Quality)

from mflux.models.z_image import ZImage
from mflux.models.common.config import ModelConfig

model = ZImage(
    model_config=ModelConfig.z_image_turbo(),
    model_path="filipstrand/Z-Image-Turbo-mflux-4bit",  # 4-bit quantized
)
image = model.generate_image(
    prompt="A majestic eagle soaring over snow-capped mountains at sunset",
    num_inference_steps=9,
    width=1280,
    height=720,
    seed=42,
)
image.save("eagle.png")

With Quantization (Lower RAM)

from mflux.models.flux2.variants import Flux2Klein
from mflux.models.common.config import ModelConfig

model = Flux2Klein(
    model_config=ModelConfig.flux2_klein_4b(),
    quantize=8,  # 8-bit quantization
)
# ... generate image

Image-to-Image

from PIL import Image
from mflux.models.flux2.variants import Flux2Klein
from mflux.models.common.config import ModelConfig

model = Flux2Klein(model_config=ModelConfig.flux2_klein_4b())
image = model.generate_image(
    prompt="Transform into a watercolor painting",
    num_inference_steps=4,
    init_image=Image.open("source.jpg"),
    init_image_strength=0.3,  # 0.0-1.0, higher = more change
)
image.save("watercolor.png")

FLUX.2 Image Editing

from mflux.models.flux2.variants import Flux2KleinEdit
from mflux.models.common.config import ModelConfig

model = Flux2KleinEdit(model_config=ModelConfig.flux2_klein_4b())
image = model.generate_image(
    prompt="Make the person wear sunglasses",
    image_paths=["person.jpg", "sunglasses.jpg"],
    num_inference_steps=4,
    seed=42,
)
image.save("edited.png")

LoRA Support

from mflux.models.flux2.variants import Flux2Klein
from mflux.models.common.config import ModelConfig

model = Flux2Klein(
    model_config=ModelConfig.flux2_klein_4b(),
    lora_paths=["path/to/lora.safetensors"],
    lora_scales=[0.8],
)
# ... generate image

Supported Models

| Model | CLI Command | Size | Steps | Speed | Quality | License |
|---|---|---|---|---|---|---|
| FLUX.2-klein-4b (default) | mflux-generate-flux2 | 4B | 4 | ⚡ Fastest | ⭐⭐⭐⭐ | Apache 2.0 |
| FLUX.2-klein-9b | mflux-generate-flux2 | 9B | 4 | ⚡ Fast | ⭐⭐⭐⭐⭐ | Apache 2.0 |
| Z-Image-Turbo | mflux-generate-z-image-turbo | 6B | 9 | ⚡ Fast | ⭐⭐⭐⭐⭐ | Custom |
| Z-Image (base) | mflux-generate-z-image | 6B | 50 | 🐢 Slow | ⭐⭐⭐⭐⭐ | Custom |
| FLUX.2-klein-base-4b | mflux-generate-flux2 | 4B | 50 | 🐢 Slowest | ⭐⭐⭐⭐⭐ | Apache 2.0 |
| Qwen-Image | mflux-generate-qwen | 20B | 20 | 🐢 Slow | ⭐⭐⭐⭐⭐⭐ | Custom |

CLI Reference

Generate with FLUX.2 Klein

# Default 4B model, 4 steps
mflux-generate-flux2 \
  --prompt "A photorealistic portrait of a wise old sailor" \
  --width 1024 \
  --height 768 \
  --steps 4 \
  --seed 42

# 9B model for higher quality
mflux-generate-flux2 \
  --model flux2-klein-9b \
  --prompt "A cyberpunk cityscape with neon lights" \
  --steps 4

# Base model (non-distilled, more steps)
mflux-generate-flux2 \
  --model flux2-klein-base-4b \
  --prompt "A detailed oil painting of a forest" \
  --steps 50 \
  --guidance 1.5

Generate with Z-Image Turbo

mflux-generate-z-image-turbo \
  --prompt "A minimalist logo design for a coffee shop" \
  --width 1280 \
  --height 720 \
  --steps 9 \
  --seed 42

# With LoRA
mflux-generate-z-image-turbo \
  --prompt "Art nouveau style portrait of a woman" \
  --steps 9 \
  --lora-paths "renderartist/Art-Nouveau-Style" \
  --lora-scales 0.7

With Quantization

mflux-generate-flux2 \
  --prompt "A serene landscape" \
  --quantize 8  # 8-bit quantization (reduces RAM)

# Or 4-bit for lowest RAM
mflux-generate-flux2 \
  --prompt "A serene landscape" \
  --quantize 4

Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| prompt | str | required | Text description of the image |
| width | int | 1024 | Image width in pixels |
| height | int | 768 | Image height in pixels |
| num_inference_steps | int | 4 (Klein), 9 (Z-Image) | Number of denoising steps |
| seed | int | random | Random seed for reproducibility |
| quantize | int | None | Quantization level (4 or 8) |
| guidance | float | 1.0 (Klein) / 4.0 (Z-Image) | Guidance scale (base models only) |
| lora_paths | list | None | List of LoRA file paths |
| lora_scales | list | None | LoRA blending scales |
| init_image | PIL.Image | None | Source image for img2img |
| init_image_strength | float | 0.3 | Strength of transformation |

Aspect Ratios (Recommended Sizes)

| Aspect Ratio | Dimensions | Use Case |
|---|---|---|
| 1:1 | 1024×1024 | Profile photos, icons |
| 4:3 | 1024×768 | Photo standard |
| 16:9 | 1024×576 or 1280×720 | Landscape, video |
| 3:4 | 768×1024 | Portrait orientation |
| 9:16 | 720×1280 | Mobile vertical |
| 21:9 | 1280×550 | Cinematic widescreen |
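Dimensions like these can also be derived programmatically. Below is a small helper (not part of mflux, just a sketch): it scales an aspect ratio so the long side hits a target size, then rounds both sides down to a multiple of 16, a common convention for diffusion models (mflux may accept other sizes, as the 1280×550 entry suggests):

```python
def dimensions_for_ratio(
    ratio_w: int, ratio_h: int, long_side: int = 1024, multiple: int = 16
) -> tuple[int, int]:
    """Scale an aspect ratio so the longer side is ~long_side,
    rounding each dimension down to a multiple of `multiple`."""

    def snap(x: float) -> int:
        return max(multiple, int(x) // multiple * multiple)

    if ratio_w >= ratio_h:
        w, h = long_side, long_side * ratio_h / ratio_w
    else:
        w, h = long_side * ratio_w / ratio_h, long_side
    return snap(w), snap(h)

print(dimensions_for_ratio(16, 9))  # (1024, 576)
print(dimensions_for_ratio(3, 4))   # (768, 1024)
```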

Performance & RAM Guide

| Configuration | RAM | Speed | Best For |
|---|---|---|---|
| FLUX.2-klein-4b, q=8 | ~5 GB | ~8 sec | 8GB Macs |
| FLUX.2-klein-4b, q=4 | ~4 GB | ~5 sec | Low RAM |
| FLUX.2-klein-4b, q=None | ~8 GB | ~15 sec | Quality on 16GB |
| FLUX.2-klein-9b, q=8 | ~12 GB | ~20 sec | Best quality on 16GB |
| Z-Image-Turbo, q=4 | ~5 GB | ~12 sec | All-around 8GB |

Model Weights

Models are downloaded automatically on first use:

  • FLUX.2-klein-4b: ~15GB
  • FLUX.2-klein-9b: ~32GB
  • Z-Image-Turbo quantized: ~8GB

Cache location: ~/.cache/huggingface/
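Given these sizes, it is worth checking how much disk the cache is using. A minimal sketch (the path is the Hugging Face default; adjust it if you set HF_HOME):

```python
from pathlib import Path

def cache_size_gb(cache_dir: str = "~/.cache/huggingface") -> float:
    """Sum the sizes of all files under the Hugging Face cache, in GB."""
    root = Path(cache_dir).expanduser()
    if not root.exists():
        return 0.0
    total = sum(f.stat().st_size for f in root.rglob("*") if f.is_file())
    return total / 1e9

if __name__ == "__main__":
    print(f"Cache size: {cache_size_gb():.1f} GB")
```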

Comparison: When to Use Which

Choose FLUX.2-klein-4b when:

  • Speed is priority (4 steps, ~5-8 sec)
  • Apache 2.0 license needed (commercial use)
  • Generating many images fast
  • 8GB+ RAM available

Choose Z-Image-Turbo when:

  • Quality is priority
  • Realism matters most
  • You have 16GB+ RAM
  • Time per image acceptable

Choose FLUX.2-klein-9b when:

  • Best quality from Apache-licensed model
  • 16GB+ RAM available
  • Commercial use required
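The decision points above can be condensed into a small helper. This is purely illustrative (the model names mirror the table earlier; the thresholds come from the bullets above):

```python
def pick_model(ram_gb: int, commercial: bool = False, prefer_quality: bool = False) -> str:
    """Suggest a model per the guidance above. Heuristic, not exhaustive."""
    if commercial or not prefer_quality:
        # Apache 2.0 licensed Klein models; the 9B variant wants 16GB+ RAM
        return "flux2-klein-9b" if (prefer_quality and ram_gb >= 16) else "flux2-klein-4b"
    # Quality-first and license permitting: Z-Image-Turbo wants 16GB+ RAM
    return "z-image-turbo" if ram_gb >= 16 else "flux2-klein-4b"

print(pick_model(ram_gb=8))                                        # flux2-klein-4b
print(pick_model(ram_gb=16, prefer_quality=True))                  # z-image-turbo
print(pick_model(ram_gb=16, commercial=True, prefer_quality=True)) # flux2-klein-9b
```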

Error Handling

| Error | Cause | Fix |
|---|---|---|
| OutOfMemoryError | Not enough RAM | Use quantization (q=8 or q=4) |
| ValueError: Model not found | First run / cache issue | |
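The quantization fallback for out-of-memory errors can be automated with a retry wrapper. A sketch under assumptions: `generate` is any callable you supply that accepts a `quantize=` argument (a stand-in for your own mflux generation code, not an mflux API), and the caught exception may need to be the framework-specific OOM error rather than Python's MemoryError:

```python
def generate_with_fallback(generate, quant_levels=(None, 8, 4)):
    """Retry generation at decreasing precision until an attempt fits in RAM."""
    last_error = None
    for q in quant_levels:
        try:
            return generate(quantize=q)
        except MemoryError as e:  # adjust to the actual OOM exception mflux raises
            last_error = e
    raise RuntimeError("Out of memory even at 4-bit quantization") from last_error

# Usage with a stub that only succeeds once quantization is enabled:
def fake_generate(quantize=None):
    if quantize is None:
        raise MemoryError("not enough RAM")
    return f"image@q{quantize}"

print(generate_with_fallback(fake_generate))  # image@q8
```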
