Fine-Tuning
v1.0.0 · Fine-tune LLMs with data preparation, provider selection, cost estimation, evaluation, and compliance checks.
by Iván (@ivangdavila)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw verdict: Suspicious (medium confidence)

Purpose & Capability
The name/description match the provided files: the SKILL.md and supporting docs are a thorough fine-tuning playbook (data prep, providers, costs, evaluation, compliance). The requested capabilities are appropriate for a fine-tuning helper.
Instruction Scope
The runtime instructions and example code assume provider SDKs and CLIs (the OpenAI client/CLI, Bedrock, huggingface-cli, and pip installs or transfers for air-gapped systems) as well as local dataset files. The SKILL metadata declares no required environment variables or credentials, yet the instructions require API keys, CLI authentication, access to local dataset files, and network access to cloud providers: a scope mismatch. The docs also show uploading datasets and calling model endpoints, which would transmit potentially sensitive data to third-party providers if the user follows the examples.
Install Mechanism
This skill is instruction-only (no install spec), so nothing is automatically written or executed at installation. However, the docs instruct users to run package installs, pip downloads, Hugging Face and OpenAI CLI commands, and to pre-download model artifacts for air-gapped setups, all of which require manual action and network access. The absence of an automatic installer lowers supply-chain risk but does not remove runtime risk if users execute the suggested commands.
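For the air-gapped workflow the docs describe, a sensible supply-chain precaution is to record and re-verify checksums of pre-downloaded artifacts before moving them onto the offline host. A minimal sketch (the filenames and expected digests are illustrative assumptions, not values from the skill):

```python
import hashlib

def sha256_file(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(expected):
    """expected maps file path -> expected hex digest.

    Returns the paths whose actual digest does not match, i.e. the
    artifacts that should not be transferred to the air-gapped host.
    """
    return [path for path, want in expected.items()
            if sha256_file(path) != want]
```

Digests would be recorded on the connected machine that performs the downloads and re-checked on the offline side before any install step.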
Credentials
The registry metadata lists no required environment variables or primary credential, yet multiple code snippets and CLI commands clearly require provider credentials (OpenAI API keys, AWS credentials for Bedrock, Hugging Face auth) and may encourage enabling 'data sharing' discounts. Asking users to supply these without declaring them is a transparency gap. The compliance guidance also emphasizes scanning for and removing PII, but example eval/generation snippets call out to remote providers (client.chat.completions.create), which sends dataset content off-host unless an on-prem option is used.
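Because the credentials are undeclared, a user can close the gap themselves with a preflight check before running any of the skill's provider commands. A hedged sketch (the variable names are the conventional ones for these providers, assumed rather than taken from the skill's metadata):

```python
import os

# Credentials the skill's examples appear to rely on but do not declare.
REQUIRED_VARS = {
    "OPENAI_API_KEY": "OpenAI fine-tuning API",
    "AWS_ACCESS_KEY_ID": "Amazon Bedrock",
    "AWS_SECRET_ACCESS_KEY": "Amazon Bedrock",
    "HF_TOKEN": "Hugging Face model downloads",
}

def missing_credentials(env=None):
    """Return {var: purpose} for each required variable that is unset."""
    if env is None:
        env = os.environ
    return {var: use for var, use in REQUIRED_VARS.items() if not env.get(var)}

if __name__ == "__main__":
    for var, use in missing_credentials().items():
        print(f"missing {var} (needed for {use})")
```

Running this first makes the scope mismatch visible instead of discovering it mid-command, and it documents exactly which third parties will see your data.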
Persistence & Privilege
The skill is not always-enabled and does not request special persistence or system-wide changes. It does not modify other skills or claim elevated privileges. Autonomous invocation is allowed (platform default) but is not combined with other major red flags here.
What to consider before installing
This skill is a comprehensive fine-tuning guide, but it does not declare its operational needs explicitly. Before installing or using it:

1. Expect to need provider credentials (an OpenAI key, AWS credentials for Bedrock, a Hugging Face token), and provide them only in a controlled environment; the skill metadata does not list them.
2. Be careful with real datasets: the examples upload local data to remote APIs and could send PII to third parties. Follow the compliance.md PII-remediation steps and test on non-sensitive samples first.
3. The skill is instruction-only (no install spec), so it won't auto-run code, but following its examples will run CLI/SDK commands and pip installs. Review commands before executing and prefer isolated or sandboxed environments.
4. If you require strict privacy, prefer the on-prem instructions in compliance.md and verify how to authenticate and download models offline.
5. Because the source/homepage is unknown, exercise extra caution: verify code snippets and provider commands against official provider docs before use.

Like a lobster shell, security has layers: review code before you run it.
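Point 2 above (scan for PII before any example command uploads a dataset) can be sketched as a lightweight pre-upload check. This is an illustrative simplification, not the skill's compliance.md logic; the regexes cover only obvious emails and US-style phone numbers, and a real remediation pass needs broader patterns:

```python
import re

# Assumed, deliberately narrow PII patterns for demonstration.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scan_record(text):
    """Return the PII categories detected in one training record."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

def flag_dataset(records):
    """Map record index -> detected PII categories, skipping clean records."""
    hits = {}
    for i, record in enumerate(records):
        found = scan_record(record)
        if found:
            hits[i] = found
    return hits
```

Running a pass like this locally, before invoking any upload command from the skill's examples, keeps the decision about what leaves the host in the user's hands.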
latest · vk97dndr557h2vpax3g5pvkq9j9812ap8
