Pilot ML Training Pipeline Setup

v1.0.0

Deploy an end-to-end ML training pipeline with 4 agents. Use this skill when: (1) the user wants to set up a machine learning training pipeline; (2) the user is configu...

by Calin Teodor (@teoslayer)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for teoslayer/pilot-ml-training-pipeline-setup.

Prompt Preview: Install & Setup
Install the skill "Pilot ML Training Pipeline Setup" (teoslayer/pilot-ml-training-pipeline-setup) from ClawHub.
Skill page: https://clawhub.ai/teoslayer/pilot-ml-training-pipeline-setup
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: pilotctl, clawhub
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install pilot-ml-training-pipeline-setup

ClawHub CLI


npx clawhub@latest install pilot-ml-training-pipeline-setup
Security Scan

Capability signals: Crypto

These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the runtime instructions: the SKILL.md walks through installing agent skills (via clawhub), setting hostnames (pilotctl), writing a JSON manifest, and performing handshakes — all expected for deploying a multi-agent ML pipeline.
Instruction Scope
Instructions are scoped to pipeline setup: ask role/prefix, run clawhub install for role-specific skills, call pilotctl to set hostname/handshake, and write a role manifest to ~/.pilot/setups/ml-training-pipeline.json. The only file/path the skill asks to write is a role-specific config in the user's Pilot config directory, which is appropriate for this purpose.
Install Mechanism
This is instruction-only (no install spec). It instructs the user to run clawhub to install other pilot-* skills; that is reasonable for a meta-setup skill. There are no downloads or archive extractions in this skill itself.
Credentials
The skill requests no environment variables or credentials and only requires the binaries pilotctl and clawhub. Those requirements align with the documented commands; no unrelated secrets are requested.
Persistence & Privilege
always:false and normal model invocation; the skill writes only its own manifest to ~/.pilot/setups/, and does not modify other skills or system-wide settings. No elevated persistence or cross-skill config changes are requested.
Assessment
This skill appears to do what it says: it sets up 4 agents by installing role-specific pilot-* skills and configuring hostnames/handshakes. Before running it, verify you trust the pilotctl and clawhub binaries (install sources and checksums), review the pilot-* skills that will be installed (they may request credentials or network access), and only perform handshakes with intended hostnames: handshakes create mutual trust and enable file/metric transfer between agents, so confirm network endpoints and confidentiality requirements for your datasets and models.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Bins: pilotctl, clawhub
latest: vk97123sgmcqyp9t0w3v91znd2h85c9z3
68 downloads
0 stars
1 version
Updated 5d ago
v1.0.0
MIT-0

ML Training Pipeline Setup

Deploy 4 agents spanning data prep, training, evaluation, and serving.

Roles

Role      | Hostname           | Skills                                                                                    | Purpose
data-prep | <prefix>-data-prep | pilot-dataset, pilot-share, pilot-task-chain                                              | Cleans and transforms datasets
trainer   | <prefix>-trainer   | pilot-dataset, pilot-model-share, pilot-metrics, pilot-task-chain                         | Trains models, tracks metrics
evaluator | <prefix>-evaluator | pilot-model-share, pilot-metrics, pilot-review, pilot-task-chain                          | Evaluates and gates promotion
serving   | <prefix>-serving   | pilot-model-share, pilot-health, pilot-webhook-bridge, pilot-load-balancer, pilot-metrics | Serves inference requests

Setup Procedure

Step 1: Ask the user which role this agent should play and what prefix to use.

Step 2: Install the skills for the chosen role:

# For data-prep:
clawhub install pilot-dataset pilot-share pilot-task-chain
# For trainer:
clawhub install pilot-dataset pilot-model-share pilot-metrics pilot-task-chain
# For evaluator:
clawhub install pilot-model-share pilot-metrics pilot-review pilot-task-chain
# For serving:
clawhub install pilot-model-share pilot-health pilot-webhook-bridge pilot-load-balancer pilot-metrics

Step 3: Set the hostname:

pilotctl --json set-hostname <prefix>-<role>

Step 4: Write the role-specific JSON manifest to ~/.pilot/setups/ml-training-pipeline.json.

Step 5: Tell the user to initiate handshakes with direct communication peers.
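Steps 2 and 3 above can be scripted: given a role and prefix, the install and hostname commands are fully determined. A minimal sketch in Python that only builds the command strings (it does not execute them); the role-to-skills mapping is copied from Step 2:

```python
# Skills per role, copied from the Step 2 install commands above.
SKILLS_BY_ROLE = {
    "data-prep": ["pilot-dataset", "pilot-share", "pilot-task-chain"],
    "trainer": ["pilot-dataset", "pilot-model-share", "pilot-metrics", "pilot-task-chain"],
    "evaluator": ["pilot-model-share", "pilot-metrics", "pilot-review", "pilot-task-chain"],
    "serving": ["pilot-model-share", "pilot-health", "pilot-webhook-bridge",
                "pilot-load-balancer", "pilot-metrics"],
}

def setup_commands(prefix: str, role: str) -> list[str]:
    """Return the clawhub install and pilotctl set-hostname commands for a role."""
    if role not in SKILLS_BY_ROLE:
        raise ValueError(f"unknown role: {role}")
    return [
        "clawhub install " + " ".join(SKILLS_BY_ROLE[role]),
        f"pilotctl --json set-hostname {prefix}-{role}",
    ]
```

Printing the returned strings (rather than running them) keeps every step visible, matching the manual CLI path described earlier.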

Manifest Templates Per Role

data-prep

{
  "setup": "ml-training-pipeline", "role": "data-prep", "role_name": "Data Preparation",
  "hostname": "<prefix>-data-prep",
  "description": "Cleans, validates, and transforms raw datasets. Shares processed data with the trainer.",
  "skills": {
    "pilot-dataset": "Exchange structured datasets with schema negotiation.",
    "pilot-share": "Send cleaned dataset files to <prefix>-trainer.",
    "pilot-task-chain": "Chain data prep steps into sequential pipeline."
  },
  "peers": [{ "role": "trainer", "hostname": "<prefix>-trainer", "description": "Receives prepared datasets" }],
  "data_flows": [{ "direction": "send", "peer": "<prefix>-trainer", "port": 1001, "topic": "dataset-ready", "description": "Cleaned datasets" }],
  "handshakes_needed": ["<prefix>-trainer"]
}

trainer

{
  "setup": "ml-training-pipeline", "role": "trainer", "role_name": "Model Trainer",
  "hostname": "<prefix>-trainer",
  "description": "Receives prepared datasets, runs training jobs, tracks metrics, and shares trained model artifacts.",
  "skills": {
    "pilot-dataset": "Receive prepared datasets from data-prep.",
    "pilot-model-share": "Send trained model checkpoints to evaluator.",
    "pilot-metrics": "Track and publish training loss, accuracy, epochs.",
    "pilot-task-chain": "Chain training steps sequentially."
  },
  "peers": [
    { "role": "data-prep", "hostname": "<prefix>-data-prep", "description": "Sends prepared datasets" },
    { "role": "evaluator", "hostname": "<prefix>-evaluator", "description": "Receives trained models" }
  ],
  "data_flows": [
    { "direction": "receive", "peer": "<prefix>-data-prep", "port": 1001, "topic": "dataset-ready", "description": "Cleaned datasets" },
    { "direction": "send", "peer": "<prefix>-evaluator", "port": 1001, "topic": "training-complete", "description": "Model checkpoints and metrics" }
  ],
  "handshakes_needed": ["<prefix>-data-prep", "<prefix>-evaluator"]
}

evaluator

{
  "setup": "ml-training-pipeline", "role": "evaluator", "role_name": "Model Evaluator",
  "hostname": "<prefix>-evaluator",
  "description": "Scores trained models against benchmarks and gates promotion to serving.",
  "skills": {
    "pilot-model-share": "Receive models from trainer, promote approved models to serving.",
    "pilot-metrics": "Compare benchmarks, detect drift.",
    "pilot-review": "Gate model promotion with approval workflow.",
    "pilot-task-chain": "Chain evaluation steps."
  },
  "peers": [
    { "role": "trainer", "hostname": "<prefix>-trainer", "description": "Sends trained models" },
    { "role": "serving", "hostname": "<prefix>-serving", "description": "Receives approved models" }
  ],
  "data_flows": [
    { "direction": "receive", "peer": "<prefix>-trainer", "port": 1001, "topic": "training-complete", "description": "Model checkpoints" },
    { "direction": "send", "peer": "<prefix>-serving", "port": 1001, "topic": "model-approved", "description": "Approved models" },
    { "direction": "receive", "peer": "<prefix>-serving", "port": 1002, "topic": "inference-metrics", "description": "Drift detection data" }
  ],
  "handshakes_needed": ["<prefix>-trainer", "<prefix>-serving"]
}

serving

{
  "setup": "ml-training-pipeline", "role": "serving", "role_name": "Model Server",
  "hostname": "<prefix>-serving",
  "description": "Loads approved models, serves inference, monitors health, and load-balances.",
  "skills": {
    "pilot-model-share": "Receive approved models from evaluator.",
    "pilot-health": "Monitor inference endpoint health and latency.",
    "pilot-webhook-bridge": "Trigger external alerts on serving failures.",
    "pilot-load-balancer": "Distribute inference requests across replicas.",
    "pilot-metrics": "Report QPS, latency, drift metrics to evaluator."
  },
  "peers": [{ "role": "evaluator", "hostname": "<prefix>-evaluator", "description": "Sends approved models, receives metrics" }],
  "data_flows": [
    { "direction": "receive", "peer": "<prefix>-evaluator", "port": 1001, "topic": "model-approved", "description": "Approved models" },
    { "direction": "send", "peer": "<prefix>-evaluator", "port": 1002, "topic": "inference-metrics", "description": "Inference metrics for drift" }
  ],
  "handshakes_needed": ["<prefix>-evaluator"]
}
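Step 4 writes one of these templates to ~/.pilot/setups/ml-training-pipeline.json after replacing the <prefix> placeholder. A minimal sketch of that substitution, assuming the template is held as a Python dict and the placeholder appears only inside string values:

```python
import json

def fill_prefix(node, prefix):
    """Recursively replace the <prefix> placeholder in all string values."""
    if isinstance(node, str):
        return node.replace("<prefix>", prefix)
    if isinstance(node, dict):
        return {k: fill_prefix(v, prefix) for k, v in node.items()}
    if isinstance(node, list):
        return [fill_prefix(v, prefix) for v in node]
    return node

# Abbreviated data-prep template, filled for an example prefix "mlp".
template = {"hostname": "<prefix>-data-prep", "handshakes_needed": ["<prefix>-trainer"]}
manifest = fill_prefix(template, "mlp")
```

The filled dict can then be written with `json.dump` to the path named in Step 4.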

Data Flows

  • data-prep → trainer : cleaned datasets (port 1001)
  • trainer → evaluator : model checkpoints and metrics (port 1001)
  • evaluator → serving : approved models (port 1001)
  • serving → evaluator : inference metrics for drift detection (port 1002)
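Every flow above pairs one sender with one receiver, and no receiver listens for two different topics on the same port. A quick consistency check, sketched in Python with the flows written as (sender, receiver, port, topic) tuples:

```python
# Each flow from the list above as (sender, receiver, port, topic).
FLOWS = [
    ("data-prep", "trainer",   1001, "dataset-ready"),
    ("trainer",   "evaluator", 1001, "training-complete"),
    ("evaluator", "serving",   1001, "model-approved"),
    ("serving",   "evaluator", 1002, "inference-metrics"),
]

def check_unique_channels(flows):
    """Fail if any receiver would see two different topics on the same port."""
    seen = {}
    for sender, receiver, port, topic in flows:
        key = (receiver, port)
        if key in seen and seen[key] != topic:
            raise ValueError(f"port clash on {receiver}:{port}")
        seen[key] = topic
    return True
```

Running this against the four flows listed here passes; it would catch, for example, a copy-paste error that reused port 1001 on the evaluator for inference metrics.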

Workflow Example

# On data-prep:
pilotctl --json send-file <prefix>-trainer ./datasets/training-v5.parquet
pilotctl --json publish <prefix>-trainer dataset-ready '{"name":"training-v5","rows":150000}'
# On trainer:
pilotctl --json send-file <prefix>-evaluator ./models/resnet-v5.pt
pilotctl --json publish <prefix>-evaluator training-complete '{"model":"resnet-v5","accuracy":0.967}'
# On evaluator:
pilotctl --json send-file <prefix>-serving ./models/resnet-v5.pt
pilotctl --json publish <prefix>-serving model-approved '{"model":"resnet-v5","benchmark":0.971}'
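The publish payloads above are inline JSON, which is easy to mangle with shell quoting. A small helper (hypothetical, not part of pilotctl itself) that serializes a payload and quotes it safely before assembling the command line:

```python
import json
import shlex

def publish_cmd(peer: str, topic: str, payload: dict) -> str:
    """Build a `pilotctl --json publish` command with a safely quoted JSON payload."""
    return f"pilotctl --json publish {peer} {topic} {shlex.quote(json.dumps(payload))}"

# Example matching the data-prep step of the workflow above.
cmd = publish_cmd("mlp-trainer", "dataset-ready", {"name": "training-v5", "rows": 150000})
```

Building payloads as dicts and serializing with `json.dumps` guarantees valid JSON regardless of quotes or special characters in field values.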

Dependencies

Requires the pilot-protocol skill, the pilotctl and clawhub binaries, and a running daemon.
