DL Transformer Finetune
v0.1.0
Build transformer fine-tuning run plans with task settings, hyperparameters, and model-card outputs. Use for repeatable Hugging Face or PyTorch finetuning wo...
Security Scan
OpenClaw
Benign (high confidence)

Purpose & Capability
The name and description match the included artifacts: SKILL.md, a finetune guidance doc, and a Python script that builds run plans and model-card skeletons. No unrelated binaries, environment variables, or services are requested.
Instruction Scope
SKILL.md instructs the agent to run the bundled script and consult the reference guide; the script only reads an optional JSON input and writes an output file (JSON, MD, or CSV). This stays within the stated purpose, but note that the script will create or overwrite files at the provided output path and can load a user-specified input file (size-limited).
Install Mechanism
No install spec; this is instruction-only with a small included script. Nothing is downloaded or extracted from external URLs.
Credentials
No environment variables, credentials, or config paths are requested. The script does not access other system credentials or external services.
Persistence & Privilege
The skill sets `always: false` and makes no modifications to other skills or system-wide settings. It can be invoked autonomously per platform defaults, but it does not request elevated or persistent privileges.
Assessment
This skill appears to do what it says: generate finetuning run plans. Before installing or running it, consider: (1) review the bundled script yourself — it writes files to whatever output path you provide and can overwrite existing files, so avoid privileged/system paths; (2) prefer running with --dry-run first to inspect output without side effects; (3) do not pass secrets or credentials in the optional input JSON; (4) validate any datasets or metrics referenced (license/risk notes are included but not enforced); (5) because the platform allows autonomous invocation, restrict when/where the agent can run this skill if you want to avoid unexpected file writes. Overall the package is coherent and self-contained.

Like a lobster shell, security has layers — review code before you run it.
DL Transformer Finetune
Overview
Generate reproducible fine-tuning run plans for transformer models and downstream tasks.
Workflow
- Define base model, task type, and dataset.
- Set training hyperparameters and evaluation cadence.
- Produce run plan plus model card skeleton.
- Export configuration-ready artifacts for training pipelines.
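The workflow above can be sketched as a minimal plan builder. All field names, defaults, and the model-card shape here are illustrative assumptions, not the actual schema used by the bundled script:

```python
import json

def build_finetune_plan(base_model, task, dataset, seed=42,
                        lr=2e-5, epochs=3, eval_every_steps=500):
    """Assemble a run-plan dict plus a model-card skeleton.

    Hypothetical schema for illustration only; the bundled
    scripts/build_finetune_plan.py defines its own format.
    """
    plan = {
        "base_model": base_model,          # e.g. a Hugging Face model id
        "task": task,                      # e.g. "text-classification"
        "dataset": dataset,
        "seed": seed,                      # explicit seed for reproducibility
        "hyperparameters": {
            "learning_rate": lr,
            "epochs": epochs,
        },
        "evaluation": {"every_steps": eval_every_steps},
    }
    model_card = {
        "model_name": f"{base_model}-finetuned-{task}",
        "intended_use": "",                # to be filled in by the author
        "training_data": dataset,
    }
    return {"plan": plan, "model_card": model_card}

artifact = build_finetune_plan("bert-base-uncased", "text-classification", "imdb")
print(json.dumps(artifact, indent=2))
```

Exporting the returned dict as JSON gives a configuration-ready artifact that a training pipeline can consume.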
Use Bundled Resources
- Run `scripts/build_finetune_plan.py` for deterministic plan output.
- Read `references/finetune-guide.md` for hyperparameter baseline guidance.
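The security scan notes that the script's optional JSON input is size-limited. A sketch of what such a guarded read can look like (the byte cap and function name are assumptions; the bundled script sets its own limit):

```python
import json
import os

MAX_INPUT_BYTES = 1_000_000  # assumed cap; not the bundled script's actual value

def load_run_spec(path):
    """Load an optional JSON input spec, refusing oversized files.

    Illustrative only; not the bundled script's implementation.
    """
    if os.path.getsize(path) > MAX_INPUT_BYTES:
        raise ValueError(f"input file exceeds {MAX_INPUT_BYTES} bytes")
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)
```

Checking the size before parsing keeps a malformed or oversized input from being read into memory wholesale.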
Guardrails
- Keep run plans reproducible with explicit seeds and output directories.
- Include evaluation and rollback criteria.
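The guardrails above can be applied up front in a small setup helper. A minimal sketch, assuming plain-Python seeding (real training code would also seed numpy/torch if installed), with a hypothetical rollback note included in the returned plan:

```python
import os
import random

def prepare_run(output_dir, seed):
    """Apply the guardrails: fix a seed and ensure an explicit output directory."""
    random.seed(seed)                       # explicit seed for reproducibility
    os.makedirs(output_dir, exist_ok=True)  # explicit, pre-created output dir
    return {
        "output_dir": output_dir,
        "seed": seed,
        # Example rollback criterion; define your own per run plan.
        "rollback_criteria": "restore previous checkpoint if eval metric regresses",
    }
```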
