Skill v1.0.0

ClawScan security

TeamWork · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Suspicious · Feb 12, 2026, 7:24 AM
Verdict
suspicious
Confidence
medium
Model
gpt-5-mini
Summary
The skill's code and instructions largely match its team-management purpose, but there are inconsistencies around credential handling and a few design choices that increase risk and merit caution.
Guidance
This skill implements team orchestration and will create and modify files under .trae/config and .trae/data (providers.json, team-roles.json, model_scores.json). It will ask you for API keys, or the names of environment variables holding them, for any providers you configure, and will persist that provider info in .trae/config/providers.json. Before installing or using it:

- Understand that it stores provider credentials/config locally, likely in plaintext. If you don't want keys written to disk, avoid entering them or use minimally privileged keys.
- Prefer supplying environment variable names rather than pasting secrets into interactive prompts, but note that the code does not appear to resolve process.env itself; verify how your host resolves those placeholders.
- Review and restrict file permissions on .trae/config (e.g., chmod 600) and keep backups out of shared repositories.
- If you need provider integrations (OpenAI, Anthropic, Google, etc.), confirm how provider calls will be performed (the skill's code contains no HTTP calls) and whether the host agent will use the stored keys.
- Validate the author and source before providing sensitive credentials, and consider testing in an isolated environment first.

To avoid plaintext storage altogether, consider an OS credential store or encrypted configuration rather than writing keys into providers.json.

Review Dimensions

Purpose & Capability
note · The skill's files (init, config-manager, team-coordinator, score-manager, herald, templates) implement team creation, role assignment, scoring, and config persistence, consistent with the stated purpose. However, the package contains no networking or provider-integration code (no HTTP/fetch/axios/etc.), so while it asks for provider API keys and base URLs, the code never actually calls external AI provider APIs. That could be a design choice (the host model performs the calls), but it is an inconsistency worth noting.
Instruction Scope
ok · SKILL.md explicitly instructs the agent to read and write configuration under .trae/config and .trae/data and to interactively collect provider and model information. Those actions are within the scope of a multi-agent orchestration skill. The instructions do not request unrelated system files or hidden data exfiltration; they are explicit about the files they will touch.
Install Mechanism
ok · There is no remote install/download step, and package.json has no external dependencies, so nothing will be pulled from arbitrary URLs. The skill is delivered as local code and templates; risk from the install mechanism is low.
Credentials
concern · Registry metadata declares no required environment variables, yet SKILL.md and the configuration templates ask the user to provide API keys (or environment variable names) for multiple AI providers and persist them in .trae/config/providers.json. The code reads provider.api_key fields but never resolves process.env values, so secrets may end up stored in plaintext in config files. Storing API keys on disk without explicit guidance or encryption is a security concern. Requesting multiple provider keys is functionally justified, but the absence of a clear, secure handling strategy makes the resulting risk disproportionate.
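A safer pattern for the concern above can be sketched. The `api_key_env` field and `resolveApiKey` helper are hypothetical (the skill's observed field is `api_key`); the idea is to persist only the name of an environment variable and resolve the secret at load time.

```typescript
// Hypothetical providers.json entry shape: store the NAME of an
// environment variable, never the secret itself.
interface ProviderConfig {
  name: string;
  api_key_env?: string; // e.g. "OPENAI_API_KEY" (name only, not the key)
}

// Resolve the real key from the process environment at load time,
// so the secret never has to be written under .trae/config.
export function resolveApiKey(provider: ProviderConfig): string | undefined {
  return provider.api_key_env ? process.env[provider.api_key_env] : undefined;
}
```

With this shape, providers.json remains safe to commit or back up, and rotating a key requires no config change; note the skill as shipped does not implement this resolution step, which is why the host's behavior should be verified.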
Persistence & Privilege
ok · The skill does persist configuration and score data to .trae/config and .trae/data, which is normal for this purpose. The manifest sets always:false, and no modification of other skills' configurations was observed. Autonomous invocation is allowed (the platform default); combined with stored credentials this increases the blast radius, but the skill does not request elevated or system-wide privileges.