ML Experiment Tracker
v0.1.0
Plan reproducible ML experiment runs with explicit parameters, metrics, and artifacts. Use before model training to standardize tracking-ready experiment definitions.
Security Scan
OpenClaw
Benign
high confidence
Purpose & Capability
Name/description match the provided script and docs: the skill generates structured experiment plans and suggests logging to trackers. There are no unexpected required binaries, env vars, or services.
Instruction Scope
SKILL.md directs the agent to run the bundled script and read the local tracking guide. The script only reads an optional JSON input, validates size, and writes an output file in json/md/csv formats — it does not access external endpoints or arbitrary system credentials.
Install Mechanism
No install spec — instruction-only plus a small Python script. This is low-risk; the script uses only Python stdlib and writes local output files.
Credentials
The skill requests no environment variables or secrets. Recommendations to log to MLflow are advisory only and do not require embedded credentials in the skill.
Persistence & Privilege
Skill is not forced always-on and does not modify agent/system configurations. It is user-invocable and may be invoked by the model (normal behavior).
Assessment
This skill appears coherent and low-risk: it only generates experiment plans and writes them to a user-specified output path. Before running:
- Choose a safe output path so you do not overwrite important files.
- Inspect the included script if you have concerns; it is short and uses only standard libraries.
- If you plan to integrate the plan with an external tracker (e.g., MLflow), provide credentials to that tracker separately and review any integration code before giving it secrets.
- Run the script in a sandbox or CI environment if you are processing untrusted inputs.
Overall, the skill requests no unrelated credentials and makes no network calls.
ML Experiment Tracker
Overview
Generate structured experiment plans that can be logged consistently in experiment tracking systems.
Workflow
- Define dataset, target task, model family, and parameter search space.
- Define metrics and acceptance thresholds before training.
- Produce run plan with version and artifact expectations.
- Export the run plan for execution in tracking tools.
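The workflow above can be sketched as a small plan-building helper. The field names and example values below are hypothetical, chosen to show one plausible tracking-ready structure rather than the schema the bundled script actually emits.

```python
def make_run_plan(dataset, task, model_family, search_space,
                  metrics, thresholds, code_version, artifacts):
    """Assemble a tracking-ready run plan dict; all field names are illustrative."""
    return {
        "dataset": dataset,
        "task": task,
        "model_family": model_family,
        "search_space": search_space,
        "metrics": metrics,
        "acceptance_thresholds": thresholds,  # defined before training starts
        "code_version": code_version,         # pin the code that produces the run
        "expected_artifacts": artifacts,      # what the run must leave behind
    }


# Hypothetical usage: a binary classifier with a small search space.
plan = make_run_plan(
    dataset="reviews-v3",
    task="binary-classification",
    model_family="gradient-boosted-trees",
    search_space={"learning_rate": [0.05, 0.1], "max_depth": [4, 6]},
    metrics=["auroc", "f1"],
    thresholds={"auroc": 0.85},
    code_version="git:abc1234",
    artifacts=["model.pkl", "metrics.json"],
)
```

Because the plan is a plain dict, it can be serialized to JSON and logged as-is to whatever tracker executes the run.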
Use Bundled Resources
- Run `scripts/build_experiment_plan.py` to generate consistent run plans.
- Read `references/tracking-guide.md` for a reproducibility checklist.
Guardrails
- Keep inputs explicit and machine-readable.
- Always include metrics and baseline criteria.
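A guardrail like "always include metrics and baseline criteria" is easy to enforce mechanically. This is a sketch under the assumption of two hypothetical required keys; the real script may check different fields.

```python
REQUIRED_KEYS = {"metrics", "baseline"}  # hypothetical required fields


def validate_plan_input(plan: dict) -> list:
    """Return a list of guardrail violations; an empty list means the plan passes."""
    problems = []
    missing = REQUIRED_KEYS - plan.keys()
    if missing:
        problems.append(f"missing required keys: {sorted(missing)}")
    if not plan.get("metrics"):
        problems.append("metrics must be a non-empty list")
    return problems
```

Running such a check before training keeps inputs explicit and machine-readable, and rejects plans that defer metric selection until after results are in.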