Skill v1.0.0

ClawScan security

Lora Finetune · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Benign · Mar 1, 2026, 7:29 PM
Verdict
benign
Confidence
high
Model
gpt-5-mini
Summary
The skill's code, requirements, and network access are consistent with a local LoRA fine-tuning pipeline that downloads models from the Hugging Face Hub and runs training locally; nothing in the files indicates credential exfiltration or unrelated system access.
Guidance
This skill appears coherent for local LoRA fine-tuning. Things to consider before installing/running:

- HF_TOKEN: provide a Hugging Face token scoped appropriately (prefer read-only if possible). The token is used to download models from the Hub.
- Dependencies: you must install torch, diffusers, peft, and PIL (and any Apple Silicon-specific builds) yourself; the skill does not include an automated installer.
- LLM scoring mismatch: the README mentions an "LLM-as-judge" evaluation, but the provided scripts do not perform any LLM calls; if you expect automated scoring, you will need additional code or instructions.
- Model provenance & licensing: verify the model IDs (e.g., FLUX.1-schnell, stable-diffusion-v1-5) and their licenses before downloading or fine-tuning, especially if you will redistribute results.
- Resource use: training may be memory- and time-intensive; test with small step counts and small datasets first, and monitor MPS/CPU memory.
- Safety: the scripts operate on local files only and save outputs locally, but always inspect third-party model IDs and avoid supplying sensitive data in training captions.

Overall, the skill looks internally consistent and appropriate for its stated purpose, with moderate non-security caveats: the LLM scoring described in the README is not implemented, and dependencies must be installed manually.
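The prerequisite checks above can be sketched as a small preflight script. This is an illustrative sketch, not part of the skill: the `preflight` function name is hypothetical, and the package list is taken from the dependencies named in this review.

```python
import importlib.util
import os

# Packages the skill's scripts need; "PIL" is the import name for Pillow.
REQUIRED = ["torch", "diffusers", "peft", "PIL"]

def preflight():
    """Return a list of problems found; an empty list means ready to run."""
    problems = []
    if not os.environ.get("HF_TOKEN"):
        problems.append("HF_TOKEN is not set (needed for private/gated models)")
    for mod in REQUIRED:
        # find_spec() checks importability without actually importing the package
        if importlib.util.find_spec(mod) is None:
            problems.append("missing dependency: " + mod)
    return problems

if __name__ == "__main__":
    for p in preflight():
        print("WARNING:", p)
```

Running this once before a long training job surfaces missing dependencies and a missing token early, instead of partway through a model download.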

Review Dimensions

Purpose & Capability
note · Name/description (LoRA fine-tuning for Stable Diffusion on Apple Silicon) match the included training and comparison scripts and the declared need for HF_TOKEN to download models. Minor inconsistency: SKILL.md discusses an "LLM-as-judge" scoring step (Pixtral/Gemini comparisons), but the included scripts do not implement any LLM calls or automated scoring; only image generation and side-by-side comparison are implemented.
Instruction Scope
ok · SKILL.md instructs running the provided Python scripts against a local training_data folder; the scripts read only image and .txt caption files from that folder, load models from the Hub, train and save LoRA weights locally, and write logs/images to local output dirs. There are no instructions to read unrelated host files, send training data to external endpoints, or access system configuration beyond standard file I/O.
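The file-access pattern described above (same-stem image and .txt caption pairs read from a single folder) can be sketched as follows; the function name and extension list are illustrative, not taken from the skill's scripts:

```python
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def load_caption_pairs(data_dir):
    """Pair each image in data_dir with its same-stem .txt caption.

    Images without a caption file are skipped, and nothing outside
    data_dir is ever read -- matching the scoped file I/O noted above.
    """
    pairs = []
    for img in sorted(Path(data_dir).iterdir()):
        if img.suffix.lower() not in IMAGE_EXTS:
            continue
        caption_file = img.with_suffix(".txt")
        if caption_file.exists():
            pairs.append((img, caption_file.read_text().strip()))
    return pairs
```

Keeping all reads confined to one user-supplied directory like this is what makes the instruction scope easy to audit.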
Install Mechanism
ok · Instruction-only install (no install spec). This is low-risk because nothing is auto-downloaded or written by an installer; however, the runtime requires Python packages (torch, diffusers, peft, PIL) which must be installed by the user. The skill does not provide an automated install script for those dependencies.
Credentials
ok · The only required environment credential is HF_TOKEN (declared as primaryEnv). This is appropriate and expected for downloading models from Hugging Face Hub; the scripts do not attempt to read other environment variables or unrelated credentials. Note: the scripts rely on diffusers.from_pretrained, which will use HF_TOKEN to access the Hub, so providing HF_TOKEN is necessary for private or gated models.
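A sketch of how HF_TOKEN typically flows into a diffusers download. The helper name is hypothetical, the model ID is only an example (check the skill's scripts for the actual IDs), and the from_pretrained call is shown commented out since it requires the diffusers package:

```python
import os

def resolve_hf_token():
    """Read HF_TOKEN from the environment.

    Returning None falls back to anonymous Hub access, which works
    for public, non-gated models.
    """
    return os.environ.get("HF_TOKEN")

# Hypothetical usage (not run here; requires diffusers):
# from diffusers import DiffusionPipeline
# pipe = DiffusionPipeline.from_pretrained(
#     "runwayml/stable-diffusion-v1-5",  # example ID only
#     token=resolve_hf_token(),
# )
```

Scoping the token to read-only, as the guidance above suggests, limits the blast radius if the environment leaks.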
Persistence & Privilege
ok · The skill is not always-enabled and does not request any elevated or persistent platform privileges. It only writes its own output files (weights, images, logs) to local directories specified by the user.