Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Lora Pipeline

v1.0.0

Manages end-to-end LoRA training: collects and verifies photos, scrapes datasets, applies quality checks, captions, and trains the LoRA model locally.

0 stars · 245 downloads · 1 version (current) · 1 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for iskwang/lora-pipeline.

Prompt preview: Install & Setup
Install the skill "Lora Pipeline" (iskwang/lora-pipeline) from ClawHub.
Skill page: https://clawhub.ai/iskwang/lora-pipeline
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install lora-pipeline

ClawHub CLI

Package manager switcher

npx clawhub@latest install lora-pipeline
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
⚠ Purpose & Capability
The skill's description (end-to-end LoRA pipeline) matches the instructions and included scripts. However, the registry metadata declares no required binaries or env vars, while the SKILL.md explicitly depends on runpodctl, ssh/scp, unzip, Python plus many Python packages (deepface, opencv, onnxruntime, pandas, PIL), and local ONNX/WD14 tagger models. That mismatch (no declared dependencies vs. a heavy required toolchain) is incoherent and will cause failures or implicit network activity to fetch models and tools.
⚠ Instruction Scope
Runtime instructions include web scraping (browser JS snippets and instructions to bypass SNS login via mirrors), extensive filesystem operations, spawning sub-agents, scp/ssh upload to remote RunPod pods, and automated remote training. The SKILL.md's 'NO DATA INSPECTION/NO CLOUD UPLOAD' guidance is contradictory in places (e.g., it forbids sending images to cloud APIs for verification but instructs uploading datasets to remote pods for training). The agent is instructed to perform network transfers (scp/ssh) and spawn long-running sub-agents which are beyond simple local helper behavior — these are appropriate for training but require clear declared permissions and user consent.
Install Mechanism
There is no install spec (instruction-only), which lowers install risk. But included scripts assume many preinstalled binaries and libraries (accelerate path '/venv/bin/accelerate', runpodctl, system Python packages) and expect model files to exist locally. No mechanism is provided to install or verify those dependencies; this is an operational risk (failures or implicit downloads at runtime).
⚠ Credentials
The skill requests no declared environment variables or credentials, yet the workflow requires access to the user's SSH key, runpodctl configuration, and possibly local model directories (e.g., tag_batch.py hardcodes '/Users/mini/.openclaw/...'). Hardcoded absolute paths and implicit reliance on SSH keys / known_hosts files are disproportionate to a clean, portable skill design and risk accidental use of personal files or keys. The skill also requires RunPod credits / account access (implied) but doesn't declare or request credentials explicitly.
Persistence & Privilege
The skill is not force-installed (always:false) and follows the normal model-invocation defaults. It uses sub-agents and sessions_spawn as part of its design; this autonomous behavior is expected for long-running training tasks. Nothing in the package attempts to modify other skills or grant itself permanent system-wide privileges.
What to consider before installing
This skill implements a full LoRA training pipeline but is sloppy: it doesn't declare the system tools and Python libraries it needs, contains hardcoded paths (e.g., /Users/mini/...), and assumes you have runpodctl, SSH keys, and local model files. Before installing or running:

  1. Do not run it blindly: inspect and fix absolute paths in tag_batch.py and the other scripts.
  2. Ensure you understand and consent to uploading datasets to remote RunPod pods, and that you control the SSH keys used.
  3. Verify that the required Python packages and ONNX/WD14 models are installed in known locations, or change the scripts to use configurable paths.
  4. Confirm you have permission to scrape and use the images (privacy and legal risk).
  5. If you expect a small, local-only helper, this skill is overprivileged; if you intend cloud training, validate the runpodctl configuration and review the scp/ssh commands carefully.

If you want, provide the missing dependency list and replace the hardcoded paths, and I'll re-evaluate.
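One way to address point 3 above is to replace the hardcoded model path with an environment-variable lookup. This is a minimal sketch, not part of the skill's actual scripts; the variable name LORA_PIPELINE_MODEL_DIR and the default location are assumptions.

```python
import os
from pathlib import Path

# Hypothetical replacement for the hardcoded '/Users/mini/.openclaw/...'
# path in tag_batch.py: resolve the WD14 model directory from an env var,
# falling back to a per-user default so the script stays portable.
def resolve_model_dir(env_var: str = "LORA_PIPELINE_MODEL_DIR") -> Path:
    default = Path.home() / ".openclaw" / "models" / "wd14"
    return Path(os.environ.get(env_var, str(default))).expanduser()
```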


latest: vk97e59pnxqhcv48am7v5vc07q9834rvj
245 downloads · 0 stars · 1 version
Updated 22h ago · v1.0.0 · MIT-0

LoRA Pipeline

Orchestrates the full LoRA dataset-to-model pipeline. Each phase is self-contained and can be delegated to a sub-agent independently.


Pipeline Overview

Phase 1: Collect reference photos → collect 3–6 reference face photos
Phase 2: Confirm faces correct    → user confirms refs; deepface cross-check
Phase 3: Collect datasets         → scrape web sources guided by face features
Phase 4: Confirm photos correct   → face verify + dedup + quality filter + crop
Phase 5: Start captioning         → WD14 local tagging + trigger word
Phase 6: LoRA training            → RunPod Kohya training → retrieve outputs

Phase Index

Phase                      File                    Can Sub-Agent  Model                     Est. Time
01 — Reference Collection  phases/01-reference.md  Yes            Haiku (Worker)            5–10 min
02 — Scraping              phases/02-scraping.md   Yes            Haiku (Worker)            10–30 min
03 — Verify & Clean        phases/03-verify.md     Yes            Haiku (Worker)            2–5 min
04 — Caption               phases/04-caption.md    Yes            Haiku (Worker)            1–3 min
05 — Training              phases/05-training.md   Yes            Haiku (Worker) + Sentry   15–30 min

To load a specific phase: read skills/lora-pipeline/phases/<phase-file> — each file is independently readable.


Directory Structure

~/.openclaw/workspace/
└── datasets/
    ├── face_references/
    │   └── <lora_name>/          # Phase 1–2: Gold standard refs (3–6 photos)
    │       ├── ref_01.jpg
    │       └── ...
    ├── <lora_name>_raw/          # Phase 3: Raw scraped images (pre-verification)
    │   └── ...
    └── <lora_name>/              # Phase 4–5: Verified + captioned training set
        ├── image001.png
        ├── image001.txt
        └── ...
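The layout above can be created up front so each phase finds its directories in place. This is a sketch only; the helper name `make_dataset_dirs` is an assumption, while the directory names follow the tree shown.

```python
from pathlib import Path

# Create the three dataset directories the pipeline expects for one LoRA.
def make_dataset_dirs(workspace: Path, lora_name: str) -> dict:
    dirs = {
        "refs": workspace / "datasets" / "face_references" / lora_name,  # Phases 1-2
        "raw": workspace / "datasets" / f"{lora_name}_raw",              # Phase 3
        "final": workspace / "datasets" / lora_name,                     # Phases 4-5
    }
    for path in dirs.values():
        path.mkdir(parents=True, exist_ok=True)
    return dirs
```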

Privacy Rules (CRITICAL — All Phases)

  • NO DATA INSPECTION: Do NOT cat, read, or analyze image file contents or .txt caption files.
  • NO CLOUD UPLOAD: All face verification (DeepFace) must run locally. Never send images to cloud APIs.
  • NO DATA LEAKAGE: Do not describe dataset details (person names, attributes) to the LLM unnecessarily.
  • Treat datasets as opaque binary blobs except when running local scripts.

Quality Standards (SDXL)

  • Resolution: 1024×1024 minimum after crop
  • Format: Convert all to PNG before training
  • No black borders: Run autocrop before final save
  • Dataset diversity: ≥30% clothed/natural skin shots
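The first three standards above can be expressed as a simple pass/fail gate. The sketch below works on image metadata only (width, height, format) rather than reading file contents, in keeping with the privacy rules; the function name and return shape are assumptions, not part of the skill's scripts.

```python
# Minimum shorter-side resolution for SDXL training, per the standards above.
MIN_SIDE = 1024

def passes_quality_gate(width: int, height: int, fmt: str) -> tuple:
    """Return (ok, reason) for a candidate training image."""
    if min(width, height) < MIN_SIDE:
        return False, f"shorter side under {MIN_SIDE}px after crop"
    if fmt.upper() != "PNG":
        return False, "convert to PNG before training"
    return True, "ok"
```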

Scripts

Script               Location                                          Purpose
tag_batch.py         skills/lora-pipeline/scripts/tag_batch.py         Local WD14 ONNX tagger for a directory
smart_crop.py        skills/lora-pipeline/scripts/smart_crop.py        Interactive or automated single-subject cropping
batch_lora_train.py  skills/lora-pipeline/scripts/batch_lora_train.py  Kohya batch training runner for RunPod

Sub-Agent Protocol

Each phase file contains:

  1. Input Contract — what must already exist before this phase starts
  2. Output Contract — what this phase produces
  3. Completion Signal — how to report back (sessions_send + status file fallback)
  4. Error Escalation — sub-agent reports to parent, never self-escalates model tier
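Item 3's status-file fallback could look like the following sketch: if `sessions_send` fails, the sub-agent drops a JSON file that the parent can poll. The file name and schema here are assumptions, not defined by the skill.

```python
import json
import time
from pathlib import Path

# Write a per-phase status file as a fallback completion signal.
def write_status(phase_dir: Path, phase: str, status: str, detail: str = "") -> Path:
    path = phase_dir / f"{phase}.status.json"
    path.write_text(json.dumps({
        "phase": phase,
        "status": status,   # e.g. "done" or "error"
        "detail": detail,
        "ts": time.time(),  # timestamp so the parent can detect stale files
    }))
    return path
```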
