Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
LoRA Toolkit
v1.0.0 · Configure, estimate, and generate LoRA fine-tuning scripts for LLMs. Input: base model name, dataset size, GPU spec. Output: training config, PEFT script, co...
⭐ 0 · 11 · 0 current · 0 all-time
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign
medium confidence
Purpose & Capability
Name/description align with the provided shell script and SKILL.md: the tool generates configs, estimates cost/VRAM, validates dataset format, and emits a Python training script. Required resources (none declared) and included files are consistent with this purpose.
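The scan does not reproduce the toolkit's estimator, but a cost/VRAM estimate of this kind typically reduces to a small arithmetic model. As a rough illustration only — the fractions and overhead figures below are assumptions, not the toolkit's actual logic:

```python
def estimate_lora_vram_gb(base_params_b: float, lora_frac: float = 0.01,
                          bits_per_weight: int = 16) -> float:
    """Rough VRAM estimate for LoRA fine-tuning (illustrative only).

    base_params_b: base model size in billions of parameters.
    lora_frac: fraction of parameters trained as LoRA adapters (assumed 1%).
    bits_per_weight: precision of the frozen base weights (16 = fp16/bf16).
    """
    # Frozen base weights held in GPU memory.
    base_gb = base_params_b * 1e9 * bits_per_weight / 8 / 1e9
    # Trainable adapter weights in fp32 (4 bytes) plus two Adam optimizer
    # states per parameter, hence the factor (1 + 2).
    adapter_gb = base_params_b * 1e9 * lora_frac * 4 * (1 + 2) / 1e9
    # Flat allowance for activations, CUDA context, and fragmentation.
    overhead_gb = 2.0
    return round(base_gb + adapter_gb + overhead_gb, 1)
```

For a 7B model in 16-bit this yields roughly 17 GB, which is in the right ballpark for a single-GPU LoRA run; the real toolkit may weight batch size and sequence length as well.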
Instruction Scope
Instructions are narrowly scoped to generating configs, validating datasets, and producing a training script. However, the generated Python sets trust_remote_code=True and, when executed, will load pretrained models from external model repositories; a malicious model repo can therefore run arbitrary code on your machine. The shell script also creates a directory under $HOME/.local/share/bytesagain-lora-toolkit and writes files both there and into the working directory (train.py), which you should review before running.
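One way to act on this finding before executing anything is to scan the emitted files for the risky flag. The helper below is a hypothetical illustration, not part of the skill:

```python
import re
from pathlib import Path


def find_risky_flags(script_path: str) -> list:
    """Return (line_number, line) pairs where trust_remote_code is enabled.

    A simple textual check: it flags `trust_remote_code=True` (with optional
    spaces around `=`) in the generated script so you can review each site.
    """
    hits = []
    for n, line in enumerate(Path(script_path).read_text().splitlines(), 1):
        if re.search(r"trust_remote_code\s*=\s*True", line):
            hits.append((n, line.strip()))
    return hits
```

Running it against the generated train.py gives you the exact lines to review before deciding whether the model source justifies remote code execution.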
Install Mechanism
No install spec; the skill is instruction-only plus an included shell script. The README suggests pip-installing well-known ML packages (transformers, peft, trl, bitsandbytes, etc.), which is expected for this use case. The skill itself downloads nothing from obscure URLs.
Credentials
The skill does not request environment variables, credentials, or access to unrelated configuration paths. It stores data under the user's home directory (local, per-user) which is proportionate for a CLI helper.
Persistence & Privilege
always:false and no special persistence requested. The script creates and uses a per-user data directory only for its own files; it does not modify other skills or global agent settings.
Assessment
This skill appears coherent for generating LoRA configs and training scripts, but take the following precautions before running anything it produces:
1) Review the generated train.py and any other files the script writes. The training script sets trust_remote_code=True when loading pretrained models, which means code fetched from a model repository can run on your machine; only load models from sources you trust.
2) Verify that dataset contents contain no secrets before using them in training.
3) Be aware of the resource and cost implications of running the training (GPU hours, cloud billing).
4) For stricter safety, remove or change trust_remote_code in the generated script and pin models to trusted repos and commits.
If you need more assurance, request the full contents of the truncated validate function, or a provenance/hosting URL for the skill's author and homepage.
Like a lobster shell, security has layers: review code before you run it.
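Precaution 4 can be partly automated. The hypothetical helper below flips the flag in a generated script; it is a plain text substitution, so review the result, and note that models whose architectures require remote code will then fail to load (which is the safe default). Pinning to a trusted commit is a separate step, typically a revision argument in the script itself:

```python
from pathlib import Path


def harden_training_script(path: str) -> int:
    """Replace trust_remote_code=True with False in a generated script.

    Returns the number of replacements made so you can confirm the edit
    actually touched something. Purely textual: it does not parse Python,
    so inspect the resulting file before running it.
    """
    p = Path(path)
    text = p.read_text()
    count = text.count("trust_remote_code=True")
    if count:
        p.write_text(text.replace("trust_remote_code=True",
                                  "trust_remote_code=False"))
    return count
```

A return value of 0 means the flag was absent (or written with different spacing), which is itself worth checking by hand.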
latest: vk9718znee9771yr860qdxfnm51849har
