Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Ollama Model Tuner
v1.0.0 · Locally fine-tune Ollama models, prompts, and LoRAs using custom datasets and evaluation metrics, without requiring cloud resources.
⭐ 0 · 744 · 2 current · 2 all-time
by Goroni@gblockchainnetwork
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
high confidence
Purpose & Capability
The README/description claims fine-tuning, LoRA tuning, and benchmarking, but the provided script (scripts/tune.py) only runs inference (ollama.chat) over at most 10 samples. The promised capabilities (training, LoRA application, evaluation metrics) are not implemented. SKILL.md references prompts/system.md and a CLI name (!ollama-model-tuner), neither of which exists in the manifest.
Instruction Scope
Runtime instructions point users to run a CLI (!ollama-model-tuner) and reference files that aren't included. SKILL.md neither provides that executable nor explains how to install it, so following the instructions will likely fail. The instructions do not request sensitive system files or extra credentials (no evidence of exfiltration), but their vagueness grants broad discretion and is inconsistent with the actual code.
Install Mechanism
There is no install spec (instruction-only), which is low risk, but the included Python script depends on the 'ollama' Python package and a running Ollama service; no dependency or install guidance is provided. This mismatch means the skill may not run as described and could lead users to run ad-hoc install commands from unknown sources.
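Since the skill ships no dependency guidance, a user could run a small preflight check before trusting it. The sketch below is an assumption, not part of the skill: it only verifies that the 'ollama' Python client is importable and that a local daemon answers on Ollama's default REST port (11434).

```python
# Preflight check (hypothetical -- not shipped with the skill): confirm the
# two undeclared runtime dependencies before running scripts/tune.py.
import importlib.util
import urllib.request


def have_ollama_client():
    """True if the 'ollama' Python package is importable."""
    return importlib.util.find_spec("ollama") is not None


def daemon_reachable(url="http://localhost:11434/api/tags", timeout=2):
    """True if a local Ollama daemon answers on its default REST port."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except OSError:
        return False
```

If either check fails, the script will raise an ImportError or a connection error rather than running as described.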
Credentials
The skill requests no environment variables, credentials, or config paths. The only runtime dependency is the 'ollama' Python client which typically communicates with a local Ollama daemon; this is proportionate to the described local usage.
Persistence & Privilege
The skill is not always-on and does not request elevated or persistent system privileges. It does not modify other skills or system-wide settings in the provided files.
What to consider before installing
This package is misleading and incomplete: it advertises fine-tuning and LoRA support but contains only a small inference script. Before installing or running it:
1. Ask the author for the missing files (prompts/system.md) and a real CLI or entrypoint.
2. Verify or obtain clear install instructions for the 'ollama' Python client and any other dependencies.
3. Review any additional code the author provides to confirm actual training steps.
4. Run the code in an isolated environment (container/VM) and only with non-sensitive data.
5. Prefer skills with a verifiable source/homepage or a published install mechanism.

If you need true local fine-tuning, use a vetted tool or the official Ollama documentation rather than this incomplete skill.

Like a lobster shell, security has layers: review code before you run it.
latest: vk976s54v510231a9qpn6b3xskx81hxae
