OpenClaw Enterprise

Review

Audited by ClawScan on May 9, 2026.

Overview

This skill appears aligned with its enterprise workflow purpose, but it requires cloud AI API keys and may send business prompts to OpenAI/Anthropic, with some publisher and dependency details users should verify.

Before installing, verify the publisher and source repository, use dedicated LLM API keys with spending limits, and avoid sending confidential enterprise data unless your organization approves OpenAI/Anthropic processing. If you run the included scripts in production, pin dependencies and keep human approval on procurement, finance, customer-credit, and compliance decisions.

Findings (5)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: LLM provider API keys required

What this means

Your OpenAI or Anthropic account may be used to process requests and may incur usage charges.

Why it was flagged

The skill requires LLM provider credentials. This is expected for its stated agent-reasoning purpose, but those keys authorize use of the user's provider account and billing quota.

Skill content
primaryEnv: OPENAI_API_KEY ... auth: type: Bearer Token ... env_var: OPENAI_API_KEY ... ANTHROPIC_API_KEY ... optional: true
Recommendation

Use dedicated API keys with spending limits, monitor usage, and revoke keys when the skill is no longer needed.
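One way to enforce this in practice is to read the key from a dedicated environment variable and fail fast when it is absent. This is an illustrative sketch, not code from the skill itself; the helper name and error message are assumptions.

```python
import os

# Sketch: load a dedicated, limited-scope LLM key from the environment and
# fail fast if it is missing, rather than silently falling back to a
# personal/default account key. Names here are illustrative only.
def load_llm_key(env_var: str = "OPENAI_API_KEY") -> str:
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"{env_var} is not set; provision a dedicated key with a "
            "spending limit rather than reusing a personal account key"
        )
    return key
```

Keeping the key in a single, named environment variable also makes it easy to rotate or revoke when the skill is retired.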

Finding 2: Prompt content is sent to external providers

What this means

Business context you include in prompts may leave your local environment and be processed by OpenAI or Anthropic.

Why it was flagged

The skill discloses that user task content is sent to external LLM providers. This is purpose-aligned, but enterprise workflow prompts may contain sensitive business, customer, supplier, finance, or risk information.

Skill content
Data sent: task content is sent to the OpenAI/Anthropic APIs for processing
Recommendation

Avoid including secrets or regulated data unless your organization has approved provider terms, retention settings, and data-handling controls.
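A lightweight pre-send filter can reduce accidental leakage of obvious secrets. The sketch below is an assumption-laden example, not a feature of the skill, and the two regex patterns are illustrative rather than a complete data-loss-prevention solution.

```python
import re

# Illustrative pre-send filter: treat everything in a prompt as data that may
# reach the LLM provider, and strip obviously secret-shaped strings first.
# These patterns are examples only; real deployments need organization-
# specific rules and review.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # API-key-like tokens
    re.compile(r"\b\d{13,16}\b"),        # card-number-like digit runs
]

def redact(prompt: str, placeholder: str = "[REDACTED]") -> str:
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt
```

Such a filter complements, but does not replace, organizational approval of provider terms and retention settings.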

Finding 3: Documentation may understate data-sharing risk

What this means

Users might underestimate data-sharing risk if they put confidential enterprise details into prompts.

Why it was flagged

The wording could be read as saying enterprise internal data is never collected, while the same section says user-provided task content is sent to LLM APIs. The safer interpretation is that the skill does not separately harvest internal data, but any internal data placed in a prompt may be transmitted to the provider.

Skill content
Data sent: task content is sent to the OpenAI/Anthropic APIs for processing ... Data not collected: API keys, enterprise internal data, user behavior logs
Recommendation

Treat all prompt content and context as data that may be sent to the configured LLM provider, and update documentation to clarify this distinction.

Finding 4: Mixed author and publisher identities

What this means

It may be harder to confirm who maintains the skill and which website or repository is authoritative.

Why it was flagged

The metadata presents more than one author/publisher identity and website. This does not show malicious behavior, but it makes provenance verification more important before trusting the package with API keys or enterprise workflows.

Skill content
author: name: 秒技工作室 link: https://xiaping.coze.site ... publisher: name: "OpenClaw AI Team" website: https://openclaw.ai
Recommendation

Verify the publisher, homepage, and repository relationship before installing in an enterprise environment.

Finding 5: Unpinned dependency version ranges

What this means

Different dependency versions could be installed over time, affecting behavior or security posture.

Why it was flagged

If users manually install the Python dependencies, the version ranges are not fully pinned. There is no install spec showing automatic installation, so this is a supply-chain hygiene note rather than evidence of unsafe execution.

Skill content
"pip": ["httpx>=0.27.0,<1.0.0", "fastapi>=0.115.0,<1.0.0", "uvicorn>=0.30.0,<1.0.0", "langgraph>=0.2.0,<1.0.0"]
Recommendation

For production use, install from a reviewed lockfile or pin exact dependency versions in a controlled environment.
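One way to do this is to replace the ranged spec with exact pins in a reviewed requirements file. The versions below are simply the lower bounds of the skill's declared ranges, shown for illustration; check current patched releases and security advisories before deploying.

```
# requirements.txt — exact pins; versions are the lower bounds of the
# skill's declared ranges, shown for illustration only
httpx==0.27.0
fastapi==0.115.0
uvicorn==0.30.0
langgraph==0.2.0
```

Generating such a file with a lockfile tool (and installing with hash checking enabled) further reduces the chance that a later release silently changes behavior.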