Lora Finetune

v1.0.0

LoRA fine-tuning pipeline for Stable Diffusion on Apple Silicon — dataset prep, training, evaluation with LLM-as-judge scoring. Use when fine-tuning image ge...

0 · 393 · 1 version · Updated 10h ago · MIT-0
by Nissan Dookeran (@nissan)

Install

openclaw skills install lora-finetune

LoRA Fine-Tuning (Apple Silicon)

Train custom LoRA adapters for Stable Diffusion 1.5 on Mac hardware. Tested on M4 24GB — produces 3.1MB weight files in ~15 minutes at 500 steps.
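The small weight files follow from the LoRA math: the base weight W stays frozen and only a rank-r update BA is trained. A minimal numpy sketch of the idea (the dimensions below are illustrative, not the actual SD 1.5 layer shapes):

```python
import numpy as np

# Hypothetical dimensions for one projection layer; real SD 1.5 layers vary.
d_in, d_out, rank, alpha = 320, 320, 4, 4

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))        # frozen base weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection (init 0)

# LoRA forward: the adapter adds a low-rank update scaled by alpha/rank.
W_adapted = W + (alpha / rank) * (B @ A)

full_params = d_in * d_out            # what full fine-tuning would update
lora_params = rank * (d_in + d_out)   # what LoRA actually trains
print(full_params, lora_params)       # 102400 vs 2560 for this layer
```

Because B starts at zero, the adapter is a no-op at step 0 and training only has to learn the low-rank delta, which is why rank-4 checkpoints stay in the low-megabyte range.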

Hardware Requirements

| Config | Model | Resolution | Status |
|---|---|---|---|
| M4 24GB | SD 1.5 | 512×512 | ✅ Works |
| M4 24GB | SDXL | 512×512 | ⚠️ Tight, may OOM |
| M4 24GB | FLUX.1-schnell | Any | ❌ OOMs |
| M4 Pro 48GB | SDXL | 1024×1024 | ✅ Estimated |

Training Pipeline

  1. Prepare dataset: 15-25 images in consistent style, 512×512, with text captions
  2. Train LoRA: 500 steps, learning rate 1e-4, rank 4
  3. Evaluate: Generate test images, compare base vs LoRA vs reference (Gemini/DALL-E)
  4. Score: LLM-as-judge rates each on style consistency, quality, prompt adherence
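Step 1 is easy to get subtly wrong (an image without a caption, or too few images). A small stdlib-only sanity check for the layout shown in Quick Start — this is a hypothetical helper, not part of the shipped scripts:

```python
from pathlib import Path

def check_dataset(data_dir: str, min_images: int = 15, max_images: int = 25):
    """Sanity-check a caption-paired dataset before training.

    Assumes the Quick Start layout: image_001.png next to image_001.txt.
    The 15-25 bound mirrors the sweet spot noted in Key Lessons.
    Returns a list of problems; an empty list means the dataset looks ready.
    """
    data = Path(data_dir)
    images = sorted(data.glob("*.png"))
    missing = [p.name for p in images if not p.with_suffix(".txt").exists()]
    problems = []
    if missing:
        problems.append(f"images without captions: {missing}")
    if not min_images <= len(images) <= max_images:
        problems.append(
            f"expected {min_images}-{max_images} images, found {len(images)}"
        )
    return problems
```

Run it on `training_data/` before kicking off a 15-minute training job; it does not verify that images are actually 512×512, only that every image has a caption and the count is in range.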

Quick Start

# Prepare training images in a folder
ls training_data/
# image_001.png  image_001.txt  image_002.png  image_002.txt ...

# Train (see scripts/train_lora.py for full options)
python3 scripts/train_lora.py \
  --data_dir ./training_data \
  --output_dir ./lora_weights \
  --steps 500 \
  --lr 1e-4 \
  --rank 4

Evaluation with LLM-as-Judge

# Compare base model vs LoRA vs commercial (Gemini/DALL-E)
# Pixtral Large scores each image 1-10 on:
# - Style consistency with training data
# - Image quality and coherence
# - Prompt adherence

# Our results: Base 6.8 → LoRA 9.0 → Gemini 9.5
# Lesson: Gemini wins without training, but LoRA closes the gap significantly
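One straightforward way to collapse the three rubric criteria into a single 1-10 number is a plain average. A sketch under that assumption (the exact aggregation in `scripts/compare_models.py` isn't shown here, and the per-criterion scores below are invented for illustration):

```python
from statistics import mean

# Hypothetical judge output: one 1-10 score per rubric criterion.
scores = {
    "base": {"style": 6.5, "quality": 7.0, "adherence": 7.0},
    "lora": {"style": 9.0, "quality": 9.0, "adherence": 9.0},
}

def overall(model_scores: dict) -> float:
    """Collapse the rubric into a single score by simple averaging."""
    return round(mean(model_scores.values()), 1)

# Rank models by their averaged judge score, best first.
ranked = sorted(scores, key=lambda m: overall(scores[m]), reverse=True)
```

Averaging weights all three criteria equally; if style consistency is what the LoRA is for, a weighted mean may rank models differently.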

Key Lessons

  • float32 required on MPS — float16 silently produces NaN on Apple Silicon for SD pipelines
  • mflux is faster than PyTorch MPS for FLUX (~105s vs ~90min) but doesn't support LoRA training
  • SD 1.5 is the ceiling for 24GB — FLUX LoRA OOMs even with gradient checkpointing
  • 15-25 images is the sweet spot — fewer undertrain, more doesn't help proportionally
  • Gemini (Imagen 4.0) beats fine-tuned SD 1.5 with zero training — use commercial APIs for production, LoRA for experimentation and offline use
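The float32-on-MPS lesson is easy to encode once rather than remember. A hypothetical helper (not part of the shipped scripts) that picks the dtype name by device:

```python
def pick_dtype(device: str) -> str:
    """Return the torch dtype name to request for a given device.

    Encodes the lesson above: float16 silently produces NaNs in SD
    pipelines on Apple Silicon's MPS backend, so force float32 there
    and keep the usual float16 default elsewhere.
    """
    return "float32" if device == "mps" else "float16"
```

When building a pipeline this could feed `torch_dtype=getattr(torch, pick_dtype(device))`, keeping the MPS special case in one place.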

Files

  • scripts/train_lora.py — Training script with Apple Silicon MPS support
  • scripts/compare_models.py — LLM-as-judge evaluation comparing base vs LoRA vs reference

Version tags

latest: vk97djg8p5tr7j22ve833sqmkq182284e

Runtime requirements

🎨 Clawdis
Bins: python3
Env: HF_TOKEN
Primary env: HF_TOKEN