Lora Finetune

v1.0.0

LoRA fine-tuning pipeline for Stable Diffusion on Apple Silicon — dataset prep, training, evaluation with LLM-as-judge scoring. Use when fine-tuning image ge...

by Nissan Dookeran (@nissan)
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (LoRA fine-tuning for Stable Diffusion on Apple Silicon) match the included training and comparison scripts and the declared need for HF_TOKEN to download models. One minor inconsistency: SKILL.md describes an "LLM-as-judge" scoring step (Pixtral/Gemini comparisons), but the included scripts implement no LLM calls or automated scoring; only image generation and side-by-side comparison are implemented.
Instruction Scope
SKILL.md instructs running the provided Python scripts against a local training_data folder; the scripts read only image and .txt caption files from that folder, load models from the Hub, train and save LoRA weights locally, and write logs/images to local output dirs. There are no instructions to read unrelated host files, send training data to external endpoints, or access system configuration beyond standard file I/O.
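The file-pairing convention described above (images matched with same-stem .txt captions in a local training_data folder) can be sketched as a small helper. This is an illustrative sketch, not code from the skill; the function name `load_caption_pairs` is hypothetical:

```python
from pathlib import Path

def load_caption_pairs(folder):
    """Pair each image in `folder` with its same-stem .txt caption.

    Images without a caption file fall back to an empty caption, mirroring
    the common diffusers/LoRA dataset convention the review describes.
    """
    pairs = []
    for img in sorted(Path(folder).glob("*")):
        if img.suffix.lower() not in {".png", ".jpg", ".jpeg"}:
            continue
        cap = img.with_suffix(".txt")
        text = cap.read_text(encoding="utf-8").strip() if cap.exists() else ""
        pairs.append((img, text))
    return pairs
```

Note that a loader like this touches only the given folder, which is consistent with the review's finding that the scripts perform standard file I/O and nothing beyond it.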
Install Mechanism
Instruction-only install (no install spec). This is low-risk because nothing is auto-downloaded or written by an installer; however, the runtime requires Python packages (torch, diffusers, peft, and PIL, which is installed as Pillow) that must be installed by the user. The skill does not provide an automated install script for those dependencies.
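Since the skill ships no installer, a preflight check for the packages it imports can save a confusing mid-run failure. A minimal sketch (the name `missing_packages` and the mapping are illustrative; note that the PIL import name corresponds to the Pillow distribution):

```python
import importlib.util

# Import name -> pip distribution name (PIL ships as Pillow).
REQUIRED = {"torch": "torch", "diffusers": "diffusers", "peft": "peft", "PIL": "Pillow"}

def missing_packages(required=REQUIRED):
    """Return the pip names of required packages not importable here."""
    return [pip for mod, pip in required.items()
            if importlib.util.find_spec(mod) is None]

missing = missing_packages()
if missing:
    print("Install before running: pip install " + " ".join(missing))
```

On Apple Silicon, remember that torch may need a build with MPS support; the check above only confirms importability, not backend availability.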
Credentials
The only required environment credential is HF_TOKEN (declared as primaryEnv). This is appropriate and expected for downloading models from Hugging Face Hub; the scripts do not attempt to read other environment variables or unrelated credentials. Note: the scripts rely on diffusers.from_pretrained which will use HF_TOKEN to access the Hub, so providing HF_TOKEN is necessary for private or gated models.
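The HF_TOKEN flow described above can be made explicit with a small guard that fails early when the token is missing. This is a hedged sketch, not the skill's code; the helper name `get_hf_token` is hypothetical, and the commented model ID is illustrative (recent diffusers releases accept a `token=` keyword, while older ones used `use_auth_token=`):

```python
import os

def get_hf_token():
    """Read HF_TOKEN from the environment, failing early with a clear
    error rather than deep inside a model download."""
    token = os.environ.get("HF_TOKEN")
    if not token:
        raise RuntimeError(
            "HF_TOKEN is not set; export a (preferably read-only) "
            "Hugging Face token before running."
        )
    return token

# With diffusers installed, the token is passed to the Hub download
# (requires network; model ID shown here is illustrative):
# from diffusers import StableDiffusionPipeline
# pipe = StableDiffusionPipeline.from_pretrained(
#     "stable-diffusion-v1-5/stable-diffusion-v1-5", token=get_hf_token()
# )
```

A read-only token is sufficient for downloading public, gated, or private models and limits the blast radius if the token leaks.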
Persistence & Privilege
The skill is not always-enabled and does not request any elevated or persistent platform privileges. It only writes its own output files (weights, images, logs) to local directories specified by the user.
Assessment
This skill appears coherent for local LoRA fine-tuning. Things to consider before installing/running:

- HF_TOKEN: provide a Hugging Face token scoped appropriately (prefer read-only if possible). The token is used to download models from the Hub.
- Dependencies: you must install torch, diffusers, peft, and PIL (and any Apple Silicon-specific builds) yourself; the skill does not include an automated installer.
- LLM scoring mismatch: the README mentions an "LLM-as-judge" evaluation, but the provided scripts do not perform any LLM calls; if you expect automated scoring, you will need additional code or instructions.
- Model provenance & licensing: verify the model IDs (e.g., FLUX.1-schnell, stable-diffusion-v1-5) and their licenses before downloading or fine-tuning, especially if you will redistribute results.
- Resource use: training may be memory- and time-intensive; test with small steps and small datasets first and monitor MPS/CPU memory.
- Safety: the scripts operate on local files only and save outputs locally, but always inspect third-party model IDs and avoid supplying sensitive data in training captions.

Overall, the skill looks internally consistent and appropriate for its stated purpose, with moderate non-security caveats about missing LLM scoring and explicit dependency installation instructions.
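The resource-use caveat about Apple Silicon can be made concrete with a tiny device-selection sketch. The helper name `pick_device_name` is hypothetical; the commented-out torch calls are the standard check for the Metal (MPS) backend:

```python
def pick_device_name(mps_available: bool) -> str:
    """Map the MPS availability flag to a torch device string:
    Metal on Apple Silicon when available, otherwise CPU."""
    return "mps" if mps_available else "cpu"

# With torch installed, the flag comes from the standard backend check:
# import torch
# device = torch.device(pick_device_name(torch.backends.mps.is_available()))
```

Starting a short training run on the selected device with a handful of steps is a cheap way to surface memory problems before committing to a full run.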

Like a lobster shell, security has layers — review code before you run it.

Latest version: vk97djg8p5tr7j22ve833sqmkq182284e

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🎨 Clawdis
Bins: python3
Env: HF_TOKEN
Primary env: HF_TOKEN
