Install

```
openclaw skills install fine-tuning
```

Fine-tune LLMs with data preparation, provider selection, cost estimation, evaluation, and compliance checks.

Use when the user wants to fine-tune a language model, evaluate whether fine-tuning is worth it, or debug training issues.
Detailed guidance lives in topic files:

| Topic | File |
|---|---|
| Provider comparison & pricing | providers.md |
| Data preparation & validation | data-prep.md |
| Training configuration | training.md |
| Evaluation & debugging | evaluation.md |
| Cost estimation & ROI | costs.md |
| Compliance & security | compliance.md |
Before recommending fine-tuning, ask:
| Signal | Recommendation |
|---|---|
| Format/style inconsistency | Fine-tune ✓ |
| Missing domain knowledge | RAG first, then fine-tune if needed |
| High inference volume (>100K/mo) | Fine-tune for cost savings |
| Requirements change frequently | Stick with prompting |
| <50 quality examples | Prompting + few-shot |
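The decision table above can be sketched as a simple lookup heuristic. The signal names and the helper are hypothetical, not part of the skill's API; they only mirror the table for illustration:

```python
def should_finetune(signal: str) -> str:
    """Map an observed signal to a recommendation (hypothetical helper
    mirroring the decision table above)."""
    recommendations = {
        "format_inconsistency": "fine-tune",
        "missing_domain_knowledge": "RAG first, then fine-tune if needed",
        "high_inference_volume": "fine-tune for cost savings",
        "frequent_requirement_changes": "stick with prompting",
        "few_examples": "prompting + few-shot",
    }
    # Default to the cheapest option when the signal is ambiguous.
    return recommendations.get(signal, "start with prompting")
```

In practice several signals co-occur, so treat this as a starting point for the conversation rather than a verdict.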
Common mistakes and their fixes:

| Mistake | Fix |
|---|---|
| Training on inconsistent data | Manual review of 100+ samples before training |
| Learning rate too high | Start with 2e-4 for SFT, 5e-6 for RLHF |
| Expecting new knowledge | Fine-tuning adjusts behavior, not knowledge — use RAG |
| No baseline comparison | Always test base model on same eval set |
| Ignoring forgetting | Mix 20% general data to preserve capabilities |
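The last fix above, mixing general data into the training set to limit catastrophic forgetting, can be sketched as below. The function name and the 20% default are illustrative assumptions, not a fixed rule:

```python
import random

def mix_datasets(domain_examples, general_examples,
                 general_fraction=0.2, seed=0):
    """Blend general-purpose examples into a domain SFT set so that
    general_fraction of the final mix is general data (hypothetical
    helper; 0.2 is a common starting point, not a prescription)."""
    # Number of general samples needed so they make up general_fraction
    # of the combined set: n_g = n_d * f / (1 - f).
    n_general = int(len(domain_examples) * general_fraction / (1 - general_fraction))
    rng = random.Random(seed)
    sampled = rng.sample(general_examples, min(n_general, len(general_examples)))
    mixed = domain_examples + sampled
    rng.shuffle(mixed)  # avoid ordering effects during training
    return mixed
```

For example, 80 domain examples mixed at the default fraction pull in 20 general examples, giving a 100-example set that is 20% general data.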