Install
openclaw skills install databricks-helper

Query, inspect, and control your Databricks workspace from plain text. Check job status, rerun/cancel runs, inspect logs, explore Unity Catalog, and run read-only SQL without opening the UI.
Configuration

DATABRICKS_HOST — workspace URL, e.g. https://adb-1234567890.12.azuredatabricks.net
DATABRICKS_TOKEN — personal access token
DATABRICKS_SQL_WAREHOUSE_ID — required for catalog preview + SQL
DATABRICKS_SLA_MINUTES — threshold for SLA alerts (default 60)
DATABRICKS_MAX_ROWS — row cap for SQL output (default 200)
DATABRICKS_SQL_TIMEOUT_SEC — SQL wait timeout (default 60)
DATABRICKS_ALLOW_WRITE_SQL — set to true only if DDL/DML should be allowed

Alternative install:

npx clawhub@latest install databricks-helper

Use this skill when the user says things like:
Check recent jobs
"check my databricks jobs"
Lists the last 10 job runs with status, duration, and run URLs.
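Under the hood this corresponds to the Databricks Jobs 2.1 list-runs endpoint. A minimal sketch of that call, assuming the DATABRICKS_HOST and DATABRICKS_TOKEN variables from the configuration above (format_run is a hypothetical helper, not part of the skill):

```python
import json
import os
import urllib.parse
import urllib.request

def format_run(run):
    """One-line summary of a run dict as returned by the Jobs 2.1 API."""
    state = run.get("state", {})
    status = state.get("result_state") or state.get("life_cycle_state", "UNKNOWN")
    # start_time / end_time are epoch milliseconds in the API response.
    dur_s = (run.get("end_time", 0) - run.get("start_time", 0)) // 1000
    return f"{run.get('run_name', '?')}: {status} ({dur_s}s) {run.get('run_page_url', '')}"

def list_recent_runs(limit=10):
    """Fetch the most recent job runs and render them as status lines."""
    host = os.environ["DATABRICKS_HOST"].rstrip("/")
    qs = urllib.parse.urlencode({"limit": limit})
    req = urllib.request.Request(
        f"{host}/api/2.1/jobs/runs/list?{qs}",
        headers={"Authorization": "Bearer " + os.environ["DATABRICKS_TOKEN"]},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        data = json.load(resp)
    return [format_run(r) for r in data.get("runs", [])]
```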
Find failures
"what failed in databricks today"
Filters runs from the last 24 hours and prints failed ones with error snippets.
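The time-window filtering can also be done client-side once runs are fetched (the Jobs API additionally supports server-side start_time_from / start_time_to query parameters). A sketch of the client-side filter, with failed_runs as a hypothetical helper name:

```python
import time

def failed_runs(runs, hours=24, now_ms=None):
    """Keep runs that failed and started within the last `hours` hours.

    `runs` is a list of run dicts from the Jobs API; start_time is epoch ms.
    """
    now_ms = now_ms if now_ms is not None else int(time.time() * 1000)
    cutoff = now_ms - hours * 3600 * 1000
    return [
        r for r in runs
        if r.get("start_time", 0) >= cutoff
        and r.get("state", {}).get("result_state") == "FAILED"
    ]
```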
Trigger or retry pipelines
"run pipeline customer_ingestion" "retry databricks run 123"
Starts a new run or reruns failed tasks via the Jobs Repair API.
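These map to two Jobs 2.1 endpoints: run-now starts a fresh run, while runs/repair reruns tasks of an existing run. A sketch of the request bodies (helper names are hypothetical; the skill's actual script may build them differently):

```python
def run_now_payload(job_id, params=None):
    """Body for POST /api/2.1/jobs/run-now (starts a fresh run)."""
    body = {"job_id": job_id}
    if params:
        body["job_parameters"] = params
    return body

def repair_payload(run_id, task_keys=None):
    """Body for POST /api/2.1/jobs/runs/repair.

    With no task keys, ask the API to rerun every failed task;
    otherwise rerun only the named tasks.
    """
    body = {"run_id": run_id}
    if task_keys:
        body["rerun_tasks"] = list(task_keys)
    else:
        body["rerun_all_failed_tasks"] = True
    return body
```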
Cancel a run
"cancel databricks run 123"
Calls jobs/runs/cancel with safety checks and prints confirmation.
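A sketch of what such a safety check plus cancel call could look like; the set of "active" life-cycle states and the helper names are assumptions, not the skill's exact logic:

```python
import json
import os
import urllib.request

# Life-cycle states in which a cancel request is meaningful (an assumption;
# the skill's actual safety check may differ).
ACTIVE_STATES = {"QUEUED", "PENDING", "RUNNING", "BLOCKED", "WAITING_FOR_RETRY"}

def is_cancelable(run):
    """True if the run is still active, so canceling it makes sense."""
    return run.get("state", {}).get("life_cycle_state") in ACTIVE_STATES

def cancel_run(run_id):
    """POST to the Jobs 2.1 cancel endpoint."""
    host = os.environ["DATABRICKS_HOST"].rstrip("/")
    req = urllib.request.Request(
        f"{host}/api/2.1/jobs/runs/cancel",
        data=json.dumps({"run_id": run_id}).encode(),
        headers={
            "Authorization": "Bearer " + os.environ["DATABRICKS_TOKEN"],
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.status
```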
Live monitoring + analytics
"what's running now" "databricks sla watch" "databricks success summary"
Shows active runs with elapsed time, highlights SLA breaches, and prints 24h/7d success/failure counts plus top failing jobs (with adjustable time ranges).
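The SLA-breach check itself reduces to comparing elapsed run time against the configured threshold. A sketch of that computation, using the DATABRICKS_SLA_MINUTES default of 60 from the configuration above (sla_breaches is a hypothetical helper):

```python
import time

def sla_breaches(active_runs, sla_minutes=60, now_ms=None):
    """Return (run, elapsed_minutes) pairs for active runs over the SLA.

    `active_runs` are run dicts with start_time in epoch milliseconds.
    """
    now_ms = now_ms if now_ms is not None else int(time.time() * 1000)
    out = []
    for r in active_runs:
        elapsed_min = (now_ms - r.get("start_time", now_ms)) / 60000
        if elapsed_min > sla_minutes:
            out.append((r, round(elapsed_min, 1)))
    return out
```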
Catalog + SQL exploration
"list catalogs" "list tables in main bronze" "preview table main.bronze.events" "run sql select * from main.bronze.events"
Uses the Unity Catalog API for discovery and runs read-only SQL through the configured warehouse with enforced row limits.
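SQL through a warehouse is typically a POST to the SQL Statement Execution API, which accepts the warehouse id, the statement, a wait timeout, and a server-side row limit. A sketch of the request body, wired to the env vars described above (statement_body is a hypothetical helper):

```python
def statement_body(query, warehouse_id, limit=200, timeout_sec=50):
    """Body for POST /api/2.0/sql/statements.

    row_limit caps results server-side; wait_timeout is how long the
    API call blocks waiting for the statement to finish.
    """
    return {
        "warehouse_id": warehouse_id,
        "statement": query,
        "wait_timeout": f"{timeout_sec}s",
        "row_limit": limit,
    }
```

The limit and timeout defaults mirror DATABRICKS_MAX_ROWS and DATABRICKS_SQL_TIMEOUT_SEC from the configuration section.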
python scripts/databricks_helper.py list-runs
python scripts/databricks_helper.py failures --hours 24
python scripts/databricks_helper.py run-job "job name"
python scripts/databricks_helper.py retry-run 123
python scripts/databricks_helper.py cancel-run 123
python scripts/databricks_helper.py run-details 123
python scripts/databricks_helper.py running-jobs --pattern nightly
python scripts/databricks_helper.py jobs --tag env=prod
python scripts/databricks_helper.py sla-watch --minutes 90
python scripts/databricks_helper.py summary
python scripts/databricks_helper.py top-failures --hours 48
python scripts/databricks_helper.py list-catalogs
python scripts/databricks_helper.py list-schemas --catalog main
python scripts/databricks_helper.py list-tables --catalog main --schema bronze
python scripts/databricks_helper.py preview-table main.bronze.events --limit 20
python scripts/databricks_helper.py run-sql --query "SELECT * FROM main.bronze.events" --limit 50
Output is plain text. Each run line shows job name, status (SUCCESS/FAILED/RUNNING), start/end times, duration, SLA status, and an error snippet if the run failed. Catalog and SQL commands return textual lists or tabular results.
Write SQL (DDL/DML) is refused unless DATABRICKS_ALLOW_WRITE_SQL=true. Row limits and timeouts are applied to avoid runaway scans. SLA thresholds come from DATABRICKS_SLA_MINUTES or per-command flags.
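A read-only guard can be as simple as allowlisting the statement's first keyword. This is a crude sketch of one possible check, not the skill's actual parser:

```python
# Statement types considered read-only (an assumption; a real guard
# might parse the statement properly instead of checking one keyword).
READ_ONLY_KEYWORDS = {"select", "show", "describe", "explain", "with"}

def is_read_only(query):
    """True if the statement starts with an allowlisted read-only keyword."""
    stripped = query.strip()
    if not stripped:
        return False
    first = stripped.split(None, 1)[0].lower()
    return first in READ_ONLY_KEYWORDS
```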