DL Transformer Finetune

v0.1.0

Build transformer fine-tuning run plans with task settings, hyperparameters, and model-card outputs. Use for repeatable Hugging Face or PyTorch finetuning wo...

0 · 508 · 1 version · 1 current · 2 all-time · Updated 22h ago · MIT-0
by Muhammad Mazhar Saeed (@0x-professor)

Install

openclaw skills install dl-transformer-finetune

DL Transformer Finetune

Overview

Generate reproducible fine-tuning run plans for transformer models and downstream tasks.

Workflow

  1. Define base model, task type, and dataset.
  2. Set training hyperparameters and evaluation cadence.
  3. Produce run plan plus model card skeleton.
  4. Export configuration-ready artifacts for training pipelines.
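The four steps above can be sketched as a single plan-building function. The names here (`FinetunePlan`, `build_plan`) are hypothetical illustrations, not part of the bundled scripts:

```python
from dataclasses import dataclass, asdict

@dataclass
class FinetunePlan:
    """Steps 1-2: base model, task, dataset, and training hyperparameters."""
    base_model: str
    task: str
    dataset: str
    learning_rate: float = 2e-5
    epochs: int = 3
    eval_every_steps: int = 500  # evaluation cadence
    seed: int = 42
    output_dir: str = "runs/default"

def build_plan(plan: FinetunePlan) -> dict:
    """Steps 3-4: produce a config-ready dict plus a model-card skeleton."""
    config = asdict(plan)
    model_card = {
        "model_name": f"{plan.base_model}-{plan.task}",
        "base_model": plan.base_model,
        "dataset": plan.dataset,
        "hyperparameters": {
            "learning_rate": plan.learning_rate,
            "epochs": plan.epochs,
            "seed": plan.seed,
        },
    }
    return {"config": config, "model_card": model_card}

plan = build_plan(FinetunePlan("bert-base-uncased", "text-classification", "imdb"))
print(plan["model_card"]["model_name"])  # bert-base-uncased-text-classification
```

Keeping the plan as a plain dataclass means the same object can be serialized for a Hugging Face `TrainingArguments` config or a hand-rolled PyTorch loop.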

Use Bundled Resources

  • Run scripts/build_finetune_plan.py for deterministic plan output.
  • Read references/finetune-guide.md for hyperparameter baseline guidance.
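One way "deterministic plan output" can be achieved is to hash the canonicalized configuration into a stable plan ID, so identical inputs always name identical artifacts. This is a sketch of that idea under stated assumptions, not the actual logic of scripts/build_finetune_plan.py:

```python
import hashlib
import json

def plan_id(config: dict) -> str:
    """Derive a stable 12-character ID: identical configs hash identically."""
    canonical = json.dumps(config, sort_keys=True)  # key order must not matter
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

a = plan_id({"model": "bert-base-uncased", "seed": 42})
b = plan_id({"seed": 42, "model": "bert-base-uncased"})
print(a == b)  # True: key order does not affect the ID
```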

Guardrails

  • Keep run plans reproducible with explicit seeds and output directories.
  • Include evaluation and rollback criteria.
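The explicit-seed guardrail can be sketched as a single helper; the framework-specific calls are left as comments since the skill may target either Hugging Face or plain PyTorch pipelines:

```python
import os
import random

def set_seed(seed: int) -> None:
    """Seed the run's RNGs so two runs with the same plan are comparable."""
    random.seed(seed)
    # Recorded for any child processes; hash randomization of the current
    # interpreter is fixed at startup and is not changed by this line.
    os.environ["PYTHONHASHSEED"] = str(seed)
    # In a real run, also seed the framework RNGs, e.g.:
    # numpy.random.seed(seed); torch.manual_seed(seed)

set_seed(42)
first = random.random()
set_seed(42)
assert random.random() == first  # same seed, same draw
```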

Version tags

latest: vk97cc06yfx84j5pb3195j1b48n81ws15