Install

```bash
openclaw skills install qualia-skill
```

Fine-tune robot foundation models on cloud GPUs — π0.5, π0, GR00T, SmolVLA, ACT, and more. These are Vision-Language-Action (VLA) models for robotics.

Set your API key before using the CLI:

```bash
export QUALIA_API_KEY="your-api-key"
```
They probably won't give you everything upfront. Here's what you need and how to get it:

- A dataset ID (e.g. `your-org/your-dataset`). Run `dataset-keys` on their dataset to discover its image keys.
- A VLA type and base model. Run `models` and help them choose; it also shows the required camera slots. Run `dataset-keys` on their dataset, then `models` to see the required slots, and map them automatically. Confirm with the user before launching.
- A project. If the user already has a project, use it. Otherwise create one.
| Symptom | Likely cause | Fix |
|---|---|---|
| Job stuck at `credit_validation` | Insufficient credits | Run `credits`, tell user to top up |
| Fails at `dataset_preprocessing` | Bad camera mapping or invalid dataset | Re-check `dataset-keys` output, verify mapping |
| Fails at `instance_booting` | GPU capacity issue | Try a different instance type or region |
| Job failed with no clear error | Check phase events | Run `status <job_id>` and read the event messages |

Always run `status <job_id>` and share the full phase history with the user when debugging.
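The triage table above can be sketched as a small lookup helper. The phase names come from the job pipeline this skill reports; the advice strings are illustrative summaries of the table, not actual CLI output.

```python
# Map a failed phase to the suggested next step from the table above.
TRIAGE = {
    "credit_validation": "Insufficient credits: run `credits` and ask the user to top up.",
    "dataset_preprocessing": "Bad camera mapping or invalid dataset: re-check `dataset-keys` output and verify the mapping.",
    "instance_booting": "GPU capacity issue: try a different instance type or region.",
}

def triage(phase: str) -> str:
    # Fall back to reading the phase events when no known cause matches.
    return TRIAGE.get(
        phase,
        "No known cause: run `status <job_id>` and read the event messages.",
    )
```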
```bash
# See what models are available (always check — new ones are added regularly)
python3 {baseDir}/scripts/qualia.py models

# Check GPU options and pricing
python3 {baseDir}/scripts/qualia.py instances

# Check your credit balance
python3 {baseDir}/scripts/qualia.py credits
```
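Cost estimation from `instances` output is just credits/hr times expected hours. A minimal sketch — the dict shape and the `credits_per_hour` field name here are assumptions for illustration, not the actual `instances` output format:

```python
# Rough cost estimate: credits/hr from `instances` times expected hours.
def estimate_cost(instance: dict, hours: float) -> float:
    return round(instance["credits_per_hour"] * hours, 2)

a100 = {"id": "a100-80gb", "credits_per_hour": 4.5}  # hypothetical entry
print(estimate_cost(a100, 6))  # → 27.0
```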
```bash
# 1. Discover image keys in your dataset
python3 {baseDir}/scripts/qualia.py dataset-keys your-org/your-dataset

# 2. Create a project
python3 {baseDir}/scripts/qualia.py project-create "My Robot"

# 3. Launch training
python3 {baseDir}/scripts/qualia.py finetune <project_id> <vla_type> your-org/your-dataset 4 \
  '{"cam_1": "observation.images.top"}' \
  --model <base_model_id> \
  --name "My run"

# 4. Monitor
python3 {baseDir}/scripts/qualia.py status <job_id>
```
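Monitoring usually means polling `status <job_id>` until a terminal phase. A minimal polling sketch — `get_phase` is any callable returning the current phase string, e.g. a wrapper that shells out to `qualia.py status` and parses it (parsing left as an assumption):

```python
import time

TERMINAL = {"completed", "failed", "cancelled"}

def wait_for_job(get_phase, interval: float = 30.0, sleep=time.sleep):
    # Poll until the job reaches a terminal phase, then return it.
    while True:
        phase = get_phase()
        if phase in TERMINAL:
            return phase
        sleep(interval)
```

Injecting `sleep` keeps the helper testable without waiting out the interval.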
Notes:

- Run `models` first to see which VLA types require `--model` and which don't.
- Map camera slots (from `models`) to dataset image keys (from `dataset-keys`). Slots are generic (`cam_1`, `cam_2`, `cam_3`), but the underlying models expect a specific input order. Map semantically using these known orders:
  - `cam_1` = base/overview camera, `cam_2` = left wrist/arm, `cam_3` = right wrist/arm (used by several model families)
  - Otherwise: `cam_1` = primary camera, `cam_2`/`cam_3` = secondary views
  - Typical key matches: `context_camera` or `base_0` → `cam_1`; `left_wrist` ≈ `left_arm` → `cam_2`; `right_wrist` ≈ `right_arm` → `cam_3`
- Omit `--model` for types that don't support custom models.
- To estimate cost, run `instances` to get credits/hr and multiply by expected hours. Tell the user the estimated cost before confirming.

```bash
python3 {baseDir}/scripts/qualia.py projects                     # List projects and jobs
python3 {baseDir}/scripts/qualia.py status <job_id>              # Job status and phase history
python3 {baseDir}/scripts/qualia.py cancel <job_id>              # Cancel a running job
python3 {baseDir}/scripts/qualia.py project-delete <project_id>  # Delete a project
```
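The semantic camera mapping described in the notes can be sketched as a name-hint heuristic. This is an illustrative helper, not part of the CLI — always confirm the resulting mapping with the user before launching:

```python
# Match dataset image keys (from `dataset-keys`) to generic slots
# cam_1..cam_3 using substring hints: base/overview -> cam_1,
# left wrist/arm -> cam_2, right wrist/arm -> cam_3.
def map_cameras(image_keys):
    def pick(*hints):
        for key in image_keys:
            if any(h in key.lower() for h in hints):
                return key
        return None
    mapping = {
        "cam_1": pick("context", "base", "top", "overview"),
        "cam_2": pick("left"),
        "cam_3": pick("right"),
    }
    # Drop slots with no matching key rather than sending nulls.
    return {slot: key for slot, key in mapping.items() if key}

print(map_cameras(["observation.images.top",
                   "observation.images.left_wrist",
                   "observation.images.right_wrist"]))
```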
```bash
# Get defaults
python3 {baseDir}/scripts/qualia.py hyperparams <vla_type> [model_id]

# Validate overrides
python3 {baseDir}/scripts/qualia.py hyperparams-validate <vla_type> '{"learning_rate": 1e-4}'

# Use in training
python3 {baseDir}/scripts/qualia.py finetune ... --hyper-spec '{"learning_rate": 1e-4, "num_epochs": 50}'
```
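Building a `--hyper-spec` value amounts to merging user overrides into the defaults and rejecting unknown keys (server-side validation is what `hyperparams-validate` does; this sketches the same check locally). The default values below are placeholders, not the real output of `hyperparams <vla_type>`:

```python
import json

def build_hyper_spec(defaults: dict, overrides: dict) -> str:
    # Reject keys the model doesn't know, then merge overrides over defaults.
    unknown = set(overrides) - set(defaults)
    if unknown:
        raise ValueError(f"unknown hyperparameters: {sorted(unknown)}")
    return json.dumps({**defaults, **overrides})

defaults = {"learning_rate": 2.5e-5, "num_epochs": 100, "batch_size": 32}
spec = build_hyper_spec(defaults, {"learning_rate": 1e-4, "num_epochs": 50})
print(spec)
```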
| Flag | Description |
|---|---|
| `--model <id>` | Base model ID (required for some VLA types) |
| `--name <str>` | Job display name |
| `--instance <id>` | GPU instance type |
| `--region <name>` | Cloud region |
| `--batch-size <n>` | Batch size (1–512, default 32) |
| `--hyper-spec '<json>'` | Custom hyperparameters |
| `--rabc <model_path>` | Enable RA-BC with SARM reward model (HF path) |
| `--rabc-image-key <k>` | Image key for reward annotations |
| `--rabc-head-mode <m>` | RA-BC head mode (e.g. `sparse`) |
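Assembling a `finetune` invocation from the flag table can be sketched as building an argv list: positionals first (mirroring the quick-start example), then optional flags appended only when provided. Illustrative only — it builds the command, it does not run it, and the script path is written literally:

```python
import json

def finetune_argv(project_id, vla_type, dataset, gpus, cam_map,
                  model=None, name=None, batch_size=None, hyper_spec=None):
    argv = ["python3", "scripts/qualia.py", "finetune",
            project_id, vla_type, dataset, str(gpus), json.dumps(cam_map)]
    # Optional flags from the table above, skipped when not supplied.
    for flag, value in [("--model", model), ("--name", name),
                        ("--batch-size", batch_size),
                        ("--hyper-spec", hyper_spec)]:
        if value is not None:
            argv += [flag, str(value)]
    return argv

argv = finetune_argv("proj_1", "pi0", "your-org/your-dataset", 4,
                     {"cam_1": "observation.images.top"},
                     model="lerobot/pi0", batch_size=32)
```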
Use a trained SARM reward model to weight training samples. Supported on `smolvla`, `pi0`, `pi05`.

```bash
python3 {baseDir}/scripts/qualia.py finetune \
  <project_id> pi0 your-org/your-dataset 4 \
  '{"cam_1": "observation.images.top"}' \
  --model lerobot/pi0 \
  --rabc your-org/sarm-reward-model \
  --rabc-image-key observation.images.top \
  --rabc-head-mode sparse
```
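Conceptually, reward-weighted behavior cloning scales each sample's loss by a reward-derived weight so low-quality demonstrations contribute less. The sketch below illustrates that idea only; it is not the actual RA-BC/SARM implementation, and the min-max normalization is an assumption:

```python
def weighted_bc_loss(per_sample_losses, rewards):
    # Normalize rewards to [0, 1] weights; constant rewards -> uniform.
    lo, hi = min(rewards), max(rewards)
    if hi == lo:
        weights = [1.0] * len(rewards)
    else:
        weights = [(r - lo) / (hi - lo) for r in rewards]
    total_w = sum(weights) or 1.0
    # Weighted mean of per-sample losses.
    return sum(w * l for w, l in zip(weights, per_sample_losses)) / total_w
```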
`queuing → credit_validation → instance_booting → instance_activation → instance_setup → dataset_preprocessing → training_running → model_uploading → completed`

Terminal phases: `completed`, `failed`, `cancelled`.
For the latest models, endpoints, and capabilities, always check the live documentation. API requests are authenticated with your key in the `X-API-Key` header.