Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

DW-Copilot

v0.0.1

A data warehouse development Agent skill based on the SpecKit SDD (Spec-Driven Development) methodology. It takes natural-language requirements through multiple rounds of clarification and convergence, and produces standards-compliant Spec documents, execution plans, and directly deployable DDL/ETL/scheduling configuration code. Supports custom platform tech stacks and custom project conventions.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for honghaolee/datawarehouse-copilot.

Prompt preview (Install & Setup):
Install the skill "DW-Copilot" (honghaolee/datawarehouse-copilot) from ClawHub.
Skill page: https://clawhub.ai/honghaolee/datawarehouse-copilot
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install datawarehouse-copilot

ClawHub CLI


npx clawhub@latest install datawarehouse-copilot
Security Scan

VirusTotal: Suspicious (View report →)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name/description (Spec-driven DW development -> produce spec/plan/task and DDL/ETL/Azkaban configs) aligns with the included templates and workflow files. The resource files legitimately include metadata collection methods (manual/jdbc/openapi/web/hdfs), Azkaban/job examples, and platform/project conventions — all expected for a DW copilot. No unrelated capabilities (e.g., cloud provider admin APIs) are present.
Instruction Scope
The SKILL.md and resource templates instruct the Agent to collect metadata via JDBC/OpenAPI/web/HDFS which may require requesting or using sensitive credentials and file paths. The templates explicitly reference environment variable placeholders and system paths (e.g., ${DW_JDBC_USER}, ${DW_JDBC_PASS}, ${META_API_AK}, ${META_API_SK}, ${WEB_META_TOKEN}, keytab paths like /etc/security/keytabs/dw_user.keytab) even though the skill registry declares no required env vars. The skill also instructs generating runnable code referencing absolute production-like paths (/data/scripts/...), and mandates inlining conventions and implementation details into generated task code. While these are plausible for the stated purpose, they expand the agent's runtime scope to access credentials, network endpoints, and potentially local files — so review and restrict what the agent is allowed to request or receive.
Install Mechanism
Instruction-only skill with no install spec and no bundled code to execute. This is low-risk from installation perspective (nothing downloaded or written during install).
Credentials
Although requiring DB/API tokens is proportionate to metadata collection for a DW copilot, the skill registry lists no required environment variables while the resources reference many sensitive values. Examples found in files: DW_JDBC_USER, DW_JDBC_PASS, META_API_AK, META_API_SK, WEB_META_TOKEN, cookie strings, HDFS namenode and potential Kerberos keytab paths. The absence of declared required env vars is an incoherence (the skill can/should ask for credentials at runtime or declare them); users should be wary about providing secrets and prefer least-privilege, read-only credentials and out-of-band provisioning.
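The out-of-band provisioning recommended above can be sketched in shell. This is a minimal illustration only: the variable names are the ones found in the skill's resource files, and every value below is a placeholder standing in for a least-privilege, revocable test credential, never a production secret.

```shell
#!/bin/sh
# Provide credentials via the agent's environment, not via chat messages.
# All values here are placeholders for scoped, revocable test credentials.
export DW_JDBC_USER="dw_readonly"         # read-only DB account, not a prod user
export DW_JDBC_PASS="change-me"           # ideally injected by a secrets manager
export META_API_AK="scoped-test-ak"       # scoped, revocable access key
export META_API_SK="scoped-test-sk"
export WEB_META_TOKEN="short-lived-token" # short-lived, easily rotated

# Sanity check: fail fast if any variable the templates reference is missing.
for v in DW_JDBC_USER DW_JDBC_PASS META_API_AK META_API_SK WEB_META_TOKEN; do
  eval "val=\${$v:-}"
  [ -n "$val" ] || { echo "missing $v" >&2; exit 1; }
done
echo "all credential variables set"
```

Running such a script in the agent's environment (or letting a secrets manager export the variables) keeps the values out of chat logs entirely.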
Persistence & Privilege
The skill does not request always:true and has no install-time actions or system-wide config changes. It is user-invocable and may run autonomously (default), which is expected. The skill does not attempt to modify other skills or agent system config in the provided files.
What to consider before installing
This skill appears coherent for generating Spec/Plan/Task artifacts for data warehouse work, but pay attention to these points before installing or using it:

  • Origin: the source is unknown and no homepage is provided. Prefer skills from known maintainers, or inspect the files closely.
  • Credentials: the templates reference many sensitive values (JDBC username/password, platform AK/SK, web tokens/cookies, Kerberos keytabs). The registry metadata declares no required env vars, so expect the agent to ask for these at runtime. Do NOT paste production credentials directly into chat. Use least-privilege, read-only accounts or temporary/test credentials.
  • External connections: Phase 1 may instruct connecting to database hosts, metadata APIs, the HDFS namenode, or web portals. Confirm network endpoints and scope before allowing connections; validate that any AK/SK or tokens are scoped and revocable.
  • Generated code: task.md examples include absolute paths (/data/scripts/...) and production-like settings. Review generated DDL/SQL and Azkaban configs before deploying; run outputs in an isolated/test environment first.
  • Secrets handling: ensure any credentials the skill requests are provided via secure, out-of-band mechanisms (agent environment variables or a secrets manager) rather than pasted into chat logs. If your platform supports declaring required env vars for a skill, insist they be declared and audited.
  • Confirm behavior: the skill enforces user confirmation points (Phases 3 and 5), which reduces autonomous risky actions, but verify that the agent actually prompts and does not proceed without explicit approval.

If you decide to use it: test with dummy datasets/accounts, restrict credential privileges, and review all generated scripts/configs before applying them to production.

Like a lobster shell, security has layers — review code before you run it.

latest: vk974xpedh43w0dwcspj3rbwshs83z0ce
89 downloads
1 star
1 version
Updated 3w ago
v0.0.1
MIT-0

DataWarehouse Copilot

A spec-driven development skill tailored to the data warehouse domain, based on the SDD (Spec-Driven Development) methodology.

Contents


Seven-Phase Workflow

User requirement (natural language)
     ↓
[Phase 0] Requirement clarification ← confirm the user's role first; ask proactively to remove ambiguity
     ↓
[Phase 1] Metadata collection       ← five collection methods (see metadata-config.md)
     ↓
[Phase 2] Generate Spec             ← outputs spec.md (the "what")
     ↓
[Phase 3] ⏸ User confirms Spec      ← ⛔ MUST: wait for the user's explicit OK
     ↓
[Phase 4] Generate Plan             ← outputs plan.md (the "how")
     ↓
[Phase 5] ⏸ User confirms Plan      ← ⛔ MUST: wait for the user's explicit OK
     ↓
[Phase 6] Generate Task             ← outputs task.md (executable code)

Role identification (the first task of Phase 0):

  • 📋 Product/business colleagues — spec.md + plan.md
  • 🔧 Data warehouse developers — spec.md + plan.md + task.md

If the role is unclear, ask directly; do not assume a default.

Detailed entry/exit criteria and key actions for each phase are in resources/workflow-stages.md (required reading).

Three-layer document responsibilities: spec = what to do (business perspective), plan = how to do it (technical solution), task = executable implementation (complete code).


Artifact Specifications

  • spec.md — business requirement spec, directly readable and confirmable by business colleagues; template: resources/spec-template.md; roles: all
  • plan.md — technical solution layer, the direct input to task; template: resources/plan-template.md; roles: all
  • task.md — executable implementation with complete code, exception handling, and acceptance criteria (DoD); template: resources/task-template.md; roles: data warehouse developers only

Behavior Rules

MUST (non-negotiable)

  • Phase order cannot be skipped: always proceed in Spec → Plan → Task order, never out of sequence
  • Confirmation points cannot be bypassed: Phases 3 and 5 require an explicit user reply before continuing; never infer that the user has agreed. Spec changes return to Phase 2; Plan changes return to Phase 4
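The two hard gates above can be sketched as a driver script. This is an illustrative sketch only, not part of the skill: the phase echoes and the `confirmed` helper are hypothetical stand-ins for the agent's behavior, and a real run would block until the user actually replies.

```shell
#!/bin/sh
# Sketch of the Spec -> Plan -> Task gates; everything here is a stand-in.
set -e

confirmed() {           # stand-in for "the user replied with an explicit OK"
  [ "$1" = "OK" ]
}

echo "[Phase 2] generate spec.md"
confirmed "OK" || { echo "Spec not confirmed -> back to Phase 2"; exit 1; }
echo "[Phase 4] generate plan.md"
confirmed "OK" || { echo "Plan not confirmed -> back to Phase 4"; exit 1; }
echo "[Phase 6] generate task.md"
```

The point of the sketch is the control flow: any reply other than an explicit OK stops forward progress and sends the flow back to the phase that produced the artifact.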

General guidelines

  • Ask when things are ambiguous: when a requirement or a field's semantics are unclear, ask directly; do not guess or assume
  • Keep code traceable: every DDL/ETL statement must trace back to a corresponding item in the Spec
  • Ask and update when conventions are missing: capabilities or standards not recorded in the convention files must be confirmed with the user, never invented; once confirmed, add them to the appropriate convention file (platform capabilities → platform-conventions.md, team agreements → project-conventions.md)

Resource File Index

  • resources/workflow-stages.md — detailed entry/exit criteria and key actions for each phase (required reading)
  • resources/spec-template.md — spec.md template
  • resources/plan-template.md — plan.md template
  • resources/task-template.md — task.md template
  • resources/conventions/metadata-config.md — full configuration for the five metadata collection methods
  • resources/conventions/project-conventions.md — team conventions (must be consulted before code generation)
  • resources/conventions/platform-conventions.md — platform capabilities and configuration standards (must be consulted before code generation)
