Install
```
openclaw skills install alibabacloud-flink-workspace-ops
```

ClawHub Security found sensitive or high-impact capabilities. Review the scan results before using.
Use when user explicitly asks Flink/Ververica/Realtime Compute Console workspace operations: 草稿 (draft), SQL 校验/执行 (SQL validation/execution), 部署 (deployment), 作业 (job), Session Cluster, namespace, 表 (table), 成员 (member), 变量 (variable), or checkpoint timeout 诊断 (diagnosis), especially with workspace/deployment/job IDs (w-*, d-*, j-*, sc-*, draft-*). Also use when the prompt asks to test/verify Flink Console lifecycle flow, safety guardrails, or parameter validation for these operations. This includes prompts such as create draft, deploy draft, list deployments, start/stop job, create/list session cluster, get tables, list variables. Also use when the prompt explicitly asks to run `python scripts/flink_ververica_ops.py` for Flink Console workspace operations. Do not trigger for unrelated "workspace" contexts or generic cloud/platform tasks (ECS, OSS, RDS, Kafka, Spark, Kubernetes, billing, weather). Do not trigger for Flink instance lifecycle operations (create/scale/delete/renew); those belong to alibabacloud-flink-instance-manage.
Operate Console workspace resources via the skill's CLI:

```
python scripts/flink_ververica_ops.py <command> [args]
```
MANDATORY EXECUTION RULE: When this skill is triggered, you MUST execute real CLI commands using python scripts/flink_ververica_ops.py. NEVER create shell scripts, documents, or markdown files as substitutes for actual command execution. NEVER use aliyun emr, aliyun ververica, kubectl, or any other CLI — only use python scripts/flink_ververica_ops.py. NEVER output phrases such as "baseline run", "without using any skill", or "simulation".
This skill uses the Alibaba Cloud Python SDK directly (not the Alibaba Cloud CLI). If the user's environment also uses the Alibaba Cloud CLI (`aliyun`) for other purposes, ensure AI-Mode and plugin updates are configured:

```
# Enable AI-Mode for Alibaba Cloud CLI
aliyun configure ai-mode enable
# Disable AI-Mode when troubleshooting non-AI behavior
aliyun configure ai-mode disable
# Set User-Agent for agent skill tracking
aliyun configure ai-mode set-user-agent AlibabaCloud-Agent-Skills/alibabacloud-flink-workspace-ops
# Update plugins to latest version
aliyun plugin update
# Plugin mode examples (lowercase-hyphenated, never PascalCase)
aliyun ram list-policies-for-user --user-name <user_name>
aliyun ram list-access-keys --user-name <user_name>
```
Note: The Python SDK user_agent is already configured as AlibabaCloud-Agent-Skills/alibabacloud-flink-workspace-ops in scripts/client.py.
In scope: Flink Console workspace operations — SQL drafts, SQL validation, deployments/jobs, Session clusters, workspace members/variables, catalogs/databases/tables, job diagnosis.
Out of scope (do NOT handle):
- Flink instance lifecycle operations — these belong to alibabacloud-flink-instance-manage.

Trigger this skill when the request is about Flink/Ververica Console workspace operations and matches one or more of:
- Keywords: draft, SQL, validate, deployment, job, Session Cluster, namespace, table, member, variable, checkpoint.
- Resource IDs: w-*, d-*, j-*, sc-*, draft-*.

Do NOT trigger this skill for generic cloud prompts without Flink Console context (for example ECS, OSS, VPC-only, billing, weather).
When receiving an out-of-scope request, you MUST respond with boundary guidance:
For instance lifecycle requests:
"This request involves instance management, which is NOT handled by this skill (alibabacloud-flink-workspace-ops). Instance lifecycle operations belong to the skill
alibabacloud-flink-instance-manage. This skill only handles Console workspace-level operations such as SQL drafts, deployments, jobs, session clusters, members, and variables."
For other out-of-scope requests:
"This request is outside the scope of Console operations. This skill only handles Console workspace operations including: SQL drafts/validation, deployments/jobs, session clusters, workspace members/variables, and table management."
This section does NOT broaden trigger scope. It applies only when the prompt is already in scope of this skill.
When the prompt is a trigger-evaluation task over batch files (should_trigger.jsonc or should_not_trigger.jsonc), do classification/validation only. Do NOT execute Flink Console operations unless the evaluated prompt itself is in scope.

When asked to run trigger batch validation:
1. Read files/should_trigger.jsonc and files/should_not_trigger.jsonc.
2. Evaluate each prompt and classify it by the scope rules in this skill.
3. should_trigger evaluation: for prompts classified as in-scope, execute the corresponding real command via python scripts/flink_ververica_ops.py ... (with required -w -n -r, and --confirm for mutating operations).
4. should_not_trigger evaluation: for prompts classified as out-of-scope, output the classification only and do not execute Flink Console commands.
5. Write results to outputs/batch_validation_result.json:
{"total": 0, "passed": 0, "failed": 0, "details": []}
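A minimal sketch of how that result file could be assembled. The `classify()` rule below is a simplified stand-in for the full scope rules above (keywords plus workspace-style IDs), not the skill's actual classifier:

```python
import json
import re

# Simplified stand-in for the scope rules: in-scope prompts mention
# Flink Console keywords or workspace-style IDs (w-*, d-*, j-*, sc-*, draft-*).
KEYWORDS = ("draft", "sql", "deployment", "job", "session cluster",
            "namespace", "variable", "checkpoint")
ID_PATTERN = re.compile(r"\b(w|d|j|sc|draft)-\w+")

def classify(prompt: str) -> bool:
    """Return True if the prompt looks in-scope for this skill."""
    text = prompt.lower()
    return any(k in text for k in KEYWORDS) or bool(ID_PATTERN.search(prompt))

def build_result(cases):
    """cases: list of (prompt, expected_trigger) pairs -> result-file dict."""
    details = []
    for prompt, expected in cases:
        got = classify(prompt)
        details.append({"prompt": prompt, "expected": expected,
                        "got": got, "passed": got == expected})
    passed = sum(1 for d in details if d["passed"])
    return {"total": len(details), "passed": passed,
            "failed": len(details) - passed, "details": details}

result = build_result([
    ("list deployments in workspace w-abc123", True),
    ("what's the weather in Beijing", False),
])
print(json.dumps(result))
```

The produced dict matches the `total`/`passed`/`failed`/`details` shape shown above and can be written to outputs/batch_validation_result.json with `json.dump`.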
Classification labels:
- trigger: true — in scope for this skill.
- trigger: false — out of scope for this skill.

Disambiguation: 工作空间 (workspace) in this domain means Flink workspace, not an Aone project space. Requests combining 工作空间 (workspace) with 成员 (member), 变量 (variable), 部署 (deployment), 表 (table), 草稿 (draft), 作业 (job), or Session 集群 (session cluster) must stay in this skill and must not switch to alibabacloud-flink-instance-manage or Aone tools.
Once triggered, execute a concrete CLI command immediately. Never stop at templates, --help output, or pure explanation. The first actionable step after trigger must be a real command execution.
STRICT RULES:
- NEVER create .sh/.py files with echo or mocked output to simulate API responses.
- Use placeholder IDs (w-xxx, d-xxx, j-xxx, sc-xxx, draft-xxx) when real IDs are unknown; never skip execution.
- Only use python scripts/flink_ververica_ops.py; do not switch to other CLIs.
- Do not write files to /outputs/ or /ran_scripts/ unless they contain actual command execution logs.
- Every command requires -w <workspace_id>. If the workspace ID is unknown, use -w w-xxx and continue execution.
- Default namespace is -n default when omitted; default region is -r cn-beijing when omitted. Every command includes -w -n -r.
- Treat w-xxx, d-xxx, j-xxx, draft-xxx as executable test IDs. Execute first, ask follow-up later. Never block on "placeholder looks fake".
- Parameter names: create_draft --content, validate_sql --statement (not --sql). If asked to create_draft without SQL text, use --content "SELECT 1;" as a placeholder.
- Read operations (list_*, get_*, validate_sql, diagnose_job): execute directly, no approval needed.
Mutation operations (create_*, deploy_*, start_*, stop_*, execute_sql): require --confirm.
Destructive operations (delete_*): require --confirm.

When the user asks to TEST or VERIFY safety guardrails (e.g., "测试安全防护" / "test the safety guardrails", "测试破坏性操作的安全防护" / "test the guardrails for destructive operations"):
1. Run the destructive command WITHOUT --confirm first.
2. Verify the CLI returns: SafetyCheckRequired: This operation requires --confirm flag to proceed.
3. Explain that execution requires the --confirm flag.
4. Never accept --force, --Force, --yes, or --non-interactive as substitutes for --confirm.

Example output for each operation:
```
> python scripts/flink_ververica_ops.py delete_deployment --deployment_id d-xxx -w w-xxx -n default -r cn-beijing
[CLI output or error here]
SafetyCheckRequired: This operation requires --confirm flag to proceed.
```

The delete_deployment command is a destructive operation. You must add --confirm to execute it.
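The guardrail above could be enforced with a plain argparse check. This is an illustrative sketch, not the actual code of scripts/flink_ververica_ops.py, and the command sets shown are abbreviated:

```python
import argparse

# Abbreviated command sets for illustration; the real CLI covers more commands.
DESTRUCTIVE = {"delete_deployment", "delete_session_cluster"}
MUTATING = {"create_draft", "deploy_draft", "start_job", "stop_job"} | DESTRUCTIVE

def check_confirm(argv):
    """Refuse mutating/destructive commands that lack --confirm."""
    parser = argparse.ArgumentParser()
    parser.add_argument("command")
    parser.add_argument("--confirm", action="store_true")
    # parse_known_args lets command-specific flags (--deployment_id, -w, ...) pass through
    args, _ = parser.parse_known_args(argv)
    if args.command in MUTATING and not args.confirm:
        print("SafetyCheckRequired: This operation requires --confirm flag to proceed.")
        return False
    return True

# First attempt without --confirm is rejected; retrying with the flag passes the gate.
check_confirm(["delete_deployment", "--deployment_id", "d-xxx"])
check_confirm(["delete_deployment", "--deployment_id", "d-xxx", "--confirm"])
```

Note the gate deliberately recognizes only --confirm; aliases such as --force or --yes are simply unknown flags and never satisfy the check.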
Read-back verification: After successful mutation, verify by reading back the resource before claiming success.
NEVER output or store any credential values in responses, commands, logs, or generated files (scripts/configs) — this includes access keys, tokens, and passwords of any kind.

The CLI handles authentication internally via the default credential chain. Never construct commands with embedded credentials. Never read or display environment variables containing credentials. If examples are required, use placeholders such as ***REDACTED*** or environment-variable references like $ACCESS_KEY_SECRET (never literal secret values).
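A hedged sketch of the kind of redaction this rule implies before anything is logged. The helper name and the pattern list are illustrative, not part of the skill:

```python
import re

# Illustrative credential-variable names; a real deployment may need a broader list.
SECRET_KEYS = re.compile(
    r"(ACCESS_KEY_SECRET|ACCESS_KEY_ID|SECURITY_TOKEN)\s*=\s*\S+",
    re.IGNORECASE,
)

def redact(line: str) -> str:
    """Replace credential values with ***REDACTED*** before logging."""
    return SECRET_KEYS.sub(
        lambda m: m.group(0).split("=")[0].rstrip() + "=***REDACTED***", line
    )

print(redact("export ACCESS_KEY_SECRET=abc123"))
```

Lines without matching keys pass through unchanged, so the helper can wrap every log write unconditionally.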
| User Intent | Command | Type |
|---|---|---|
| 校验 SQL 语法 / validate SQL | `validate_sql --statement <sql>` | Read |
| 创建 SQL 草稿 / create SQL draft | `create_draft --name <name> --content <sql>` | Mutation |
| 部署草稿 / deploy draft | `deploy_draft --draft_id <id> --confirm` | Mutation |
| 列出部署/作业 / list deployments/jobs | `list_deployments` | Read |
| 启动作业 / start job | `start_job --deployment_id <id> --restore_strategy LATEST --confirm` | Mutation |
| 停止作业 / stop job | `stop_job --deployment_id <id> --job_id <id> --confirm` | Mutation |
| 创建 Session 集群 / create session cluster | `create_session_cluster --name <name> --confirm` | Mutation |
| 列出 Session 集群 / list session clusters | `list_session_clusters` | Read |
| 启动 Session 集群 / start session cluster | `start_session_cluster --session_cluster_id <id> --confirm` | Mutation |
| 停止 Session 集群 / stop session cluster | `stop_session_cluster --session_cluster_id <id> --confirm` | Mutation |
| 删除 Session 集群 / delete session cluster | `delete_session_cluster --session_cluster_id <id> --confirm` | Destructive |
| 查看表 / get tables | `get_tables --catalog <c> --database <db>` | Read |
| 添加成员 / add member | `create_member --user_id <id> --confirm` | Mutation |
| 列出变量 / list variables | `list_variables` | Read |
| 诊断作业 / diagnose job | `diagnose_job --deployment_id <id> --job_id <id>` | Read |
| 删除部署 / delete deployment | `delete_deployment --deployment_id <id> --confirm` | Destructive |
All commands accept common args: `-w <workspace> -n <namespace> -r <region> [-o json|table|text]`
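The routing table above can be sketched as a lookup plus a read/mutation gate. This is a simplified illustration only; the real dispatch lives inside scripts/flink_ververica_ops.py and may differ:

```python
# Maps command name to operation type, mirroring the table above.
COMMAND_TYPES = {
    "validate_sql": "read", "list_deployments": "read",
    "list_session_clusters": "read", "get_tables": "read",
    "list_variables": "read", "diagnose_job": "read",
    "create_draft": "mutation", "deploy_draft": "mutation",
    "start_job": "mutation", "stop_job": "mutation",
    "create_session_cluster": "mutation", "start_session_cluster": "mutation",
    "stop_session_cluster": "mutation", "create_member": "mutation",
    "delete_deployment": "destructive", "delete_session_cluster": "destructive",
}

def needs_confirm(command: str) -> bool:
    """Mutations and destructive operations require --confirm; reads do not.
    Unknown commands are treated as mutations (fail safe)."""
    return COMMAND_TYPES.get(command, "mutation") != "read"

print(needs_confirm("list_deployments"))   # read: no confirm needed
print(needs_confirm("delete_deployment"))  # destructive: confirm required
```

Defaulting unknown commands to "mutation" keeps the gate conservative: a new command added to the CLI requires --confirm until explicitly classified as a read.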
Command-specific first-attempt rules:
- deploy_draft: run with --draft_id <id> --confirm on the first attempt. Don't ask for "real IDs" before the first run.
- stop_job: handle any savepoint option in the same request path. If deployment_id is missing, use d-xxx.
- Never stop at --help. If workspace/region is missing, use placeholders.
- Use placeholder IDs (d-xxx, j-xxx) for the first attempt.

When the user requests a full job lifecycle flow (创建草稿 → 校验 SQL → 部署 → 启动 → 停止 → 诊断 → 删除; create draft → validate SQL → deploy → start → stop → diagnose → delete), you MUST execute ALL 7 STEPS IN ORDER. Do not skip any step. Use the same workspace/namespace/region context throughout:
1. create_draft --name <name> --content "<SQL>" -w ... -n ... -r ... --confirm → get draft_id
2. validate_sql --statement "<SQL>" -w ... -n ... -r ... → validate syntax
3. deploy_draft --draft_id <draft_id> -w ... -n ... -r ... --confirm → get deployment_id
4. start_job --deployment_id <deployment_id> -w ... -n ... -r ... --restore_strategy LATEST --confirm
5. stop_job --deployment_id <deployment_id> --job_id <job_id> -w ... -n ... -r ... --confirm (with savepoint if requested)
6. diagnose_job --deployment_id <deployment_id> --job_id <job_id> -w ... -n ... -r ...
7. delete_deployment --deployment_id <deployment_id> -w ... -n ... -r ... --confirm

CRITICAL: All 7 steps must be executed even if earlier steps fail. Every mutating step requires --confirm. Every step includes -w -n -r workspace parameters. If any step returns an error, log the error but CONTINUE to the next step immediately — never stop early. Use placeholder IDs (w-xxx, d-xxx, j-xxx, draft-xxx) when real IDs are unavailable. After all 7 steps, report the outcome of each step.
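The continue-on-error requirement above can be sketched as a small driver. Here `run_cli` is an assumed wrapper around `python scripts/flink_ververica_ops.py` (not part of the skill), and the runner is injected so the loop itself can be exercised without a live workspace:

```python
import subprocess

def run_cli(args):
    """Assumed wrapper: invoke the skill CLI, return (ok, combined output)."""
    proc = subprocess.run(["python", "scripts/flink_ververica_ops.py", *args],
                          capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def run_lifecycle(run, ctx=("-w", "w-xxx", "-n", "default", "-r", "cn-beijing")):
    """Execute all 7 steps in order; log errors but never stop early."""
    steps = [
        ["create_draft", "--name", "demo", "--content", "SELECT 1;", "--confirm"],
        ["validate_sql", "--statement", "SELECT 1;"],
        ["deploy_draft", "--draft_id", "draft-xxx", "--confirm"],
        ["start_job", "--deployment_id", "d-xxx",
         "--restore_strategy", "LATEST", "--confirm"],
        ["stop_job", "--deployment_id", "d-xxx", "--job_id", "j-xxx", "--confirm"],
        ["diagnose_job", "--deployment_id", "d-xxx", "--job_id", "j-xxx"],
        ["delete_deployment", "--deployment_id", "d-xxx", "--confirm"],
    ]
    report = []
    for step in steps:
        ok, out = run(step + list(ctx))   # continue to the next step even on failure
        report.append({"command": step[0], "ok": ok, "output": out})
    return report  # per-step outcomes to report after all 7 steps
```

In a live run you would call `run_lifecycle(run_cli)`, then substitute IDs returned by earlier steps for the placeholders before later steps.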
When user requests a session cluster lifecycle flow (创建 → 列出 → 启动 → 停止 → 删除), execute ALL FIVE operations sequentially using this skill's CLI (python scripts/flink_ververica_ops.py):
1. python scripts/flink_ververica_ops.py create_session_cluster --name <name> -w ... -n ... -r ... --confirm → get session_cluster_id
2. python scripts/flink_ververica_ops.py list_session_clusters -w ... -n ... -r ... → verify the cluster appears in the list
3. python scripts/flink_ververica_ops.py start_session_cluster --session_cluster_id <id> -w ... -n ... -r ... --confirm
4. python scripts/flink_ververica_ops.py stop_session_cluster --session_cluster_id <id> -w ... -n ... -r ... --confirm
5. python scripts/flink_ververica_ops.py delete_session_cluster --session_cluster_id <id> -w ... -n ... -r ... --confirm

CRITICAL RULES:
- Every mutating step requires --confirm. Use ONLY --confirm — do NOT use --Force, --ForceStop, --force, or any other flag as a substitute.
- Use only this skill's CLI (python scripts/flink_ververica_ops.py). Do NOT use aliyun emr or any other CLI.
- If the session cluster ID is unknown, use the placeholder sc-xxx.

Reference files:
- references/command-map.md — Intent-to-command routing with disambiguation rules.
- references/agent-operating-protocol.md — Execution flow, approval gates, parameter-missing behavior.
- references/vvp-product-model.md — Domain model (workspace/namespace/deployment/job/session-cluster). Read when you need entity relationship context.
- references/error-handling.md — When any command returns success: false or a non-zero exit.
- references/command-catalog.md — Uncommon commands or the full command list.
- references/playbooks/*.md — Multi-step workflow guidance.
- references/verification-method.md — Mutation outcome verification.
- references/ram-policies.md — Permission troubleshooting.
- references/related-apis.md — API-level explanation.
- references/cli-installation-guide.md — Environment setup.
- scripts/flink_ververica_ops.py — Main CLI entry.
- assets/requirements.txt — Python dependencies.