Skill v0.1.0

ClawScan security

MCP Builder test · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Suspicious · Feb 12, 2026, 11:31 AM
Verdict
suspicious
Confidence
high
Model
gpt-5-mini
Summary
The package is broadly what it claims (MCP server guidance plus an evaluation harness), but several mismatches raise concerns: the included runnable code expects external LLM and MCP libraries, yet the skill declares no install steps or required credentials, and the evaluation harness sends tool inputs and outputs to an external LLM provider (Anthropic) without disclosing that in the manifest.
Guidance
This package contains helpful MCP server docs but also runnable evaluation scripts that will call an external LLM (Anthropic) and require Python MCP client libraries. Before installing or running:

1) Review scripts/evaluation.py and scripts/connections.py to understand what data will be sent to external services. The evaluation deliberately forwards tool inputs/outputs to Anthropic, so any data returned by your MCP server may be transmitted off-host.
2) Install dependencies in a controlled environment (the repo includes scripts/requirements.txt but no install spec).
3) Expect to provide an Anthropic API key (e.g., ANTHROPIC_API_KEY) and possibly other credentials when running evaluations; these are not declared in the skill metadata.
4) If you plan to evaluate sensitive systems, run the harness in an isolated environment or modify it to use a local LLM / disable external calls.
5) If you want to proceed, add explicit install and environment variable documentation (or patch the SKILL.md) and audit the code for any endpoints/telemetry you don't want to expose.
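The endpoint/telemetry audit in point 5 can be partially mechanized. A minimal sketch, assuming the repo's scripts live under scripts/; the indicator patterns and helper names below are ours, not part of the package:

```python
import re
from pathlib import Path

# Hypothetical indicators of outbound network activity; extend as needed.
SUSPECT = re.compile(
    r"https?://|anthropic|httpx|requests\.|urllib|socket\.", re.IGNORECASE
)

def audit(source: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that mention endpoints or HTTP clients."""
    return [
        (n, line.strip())
        for n, line in enumerate(source.splitlines(), start=1)
        if SUSPECT.search(line)
    ]

def audit_tree(root: str = "scripts") -> dict[str, list[tuple[int, str]]]:
    """Audit every .py file under the given directory (assumed repo layout)."""
    return {
        str(p): hits
        for p in Path(root).rglob("*.py")
        if (hits := audit(p.read_text(errors="replace")))
    }
```

This only flags call sites for human review; it cannot prove the absence of outbound traffic.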

Review Dimensions

Purpose & Capability
Concern: The name/description claim this is a guide for building MCP servers, and the included reference docs and code align with that. However, the shipped scripts implement an evaluation harness that calls an external LLM (Anthropic) and requires the 'mcp' client libraries. The skill metadata declares no required env vars, binaries, or install steps despite code that needs external Python packages and an LLM API key. Requiring an LLM client and MCP runtime libraries is plausible for an evaluation tool, but the manifest/README do not declare these needs (a mismatch between the claimed purpose and undeclared runtime requirements).
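One way to close the gap is to declare the runtime needs in the skill metadata. The actual SKILL.md schema is not shown in this report, so the field names below are purely illustrative:

```yaml
# Hypothetical metadata additions (field names are assumptions, not the real schema)
requires:
  python_packages: scripts/requirements.txt   # mcp, anthropic, httpx, ...
  env:
    - name: ANTHROPIC_API_KEY
      purpose: LLM calls made by scripts/evaluation.py
network:
  outbound:
    - api.anthropic.com   # evaluation harness forwards tool inputs/outputs here
```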
Instruction Scope
Concern: SKILL.md and the reference docs focus on building MCP servers (fine), but scripts/evaluation.py will forward tool usage, tool inputs, and tool outputs to the Anthropic API as part of the evaluation prompt (EVALUATION_PROMPT explicitly asks for tool inputs/outputs and summaries). That means potentially sensitive data returned by the MCP server (tool results) would be transmitted to an external LLM provider during evaluation, and SKILL.md does not explicitly warn that evaluation runs send this data externally. The instructions also direct use of WebFetch against remote docs and raw GitHub content, which is reasonable but implies outbound network access.
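If the external evaluation must be kept, tool results can be scrubbed before they are embedded in the prompt. A stdlib-only sketch; the patterns and the redact helper are hypothetical additions, not part of the package:

```python
import re

# Hypothetical patterns for secrets commonly found in tool output; tune per deployment.
PATTERNS = [
    (re.compile(r"sk-[A-Za-z0-9_-]{8,}"), "[REDACTED_API_KEY]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
    (re.compile(r"(?i)(password|token|secret)\s*[:=]\s*\S+"), r"\1: [REDACTED]"),
]

def redact(tool_output: str) -> str:
    """Scrub likely secrets from a tool result before it leaves the host."""
    for pattern, replacement in PATTERNS:
        tool_output = pattern.sub(replacement, tool_output)
    return tool_output
```

Pattern-based redaction is best-effort only; isolation or a local LLM remains the safer option for sensitive systems.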
Install Mechanism
Concern: The skill has no install spec, yet the repository contains Python scripts and a scripts/requirements.txt implying dependencies (MCP client libraries, anthropic, httpx, etc.). Without an install mechanism, an agent or user would have to install dependencies manually. This is an incoherence between the deliverables (runnable code) and the declared install footprint (none), and it increases the chance that the code will fail or that a user will install packages ad hoc from PyPI without guidance.
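Until an install spec exists, the safest path is a throwaway environment rather than the system Python. A sketch, assuming the scripts/requirements.txt location described above; review the pins before installing anything:

```shell
# Isolated-install sketch: keep the harness's PyPI pulls out of the system Python.
ENVDIR=/tmp/mcp-eval-env
python3 -m venv "$ENVDIR" 2>/dev/null || python3 -m venv --without-pip "$ENVDIR"
echo "created $ENVDIR"
# Then, after auditing the pinned versions in scripts/requirements.txt:
#   "$ENVDIR/bin/pip" install -r scripts/requirements.txt
#   ANTHROPIC_API_KEY=... "$ENVDIR/bin/python" scripts/evaluation.py
```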
Credentials
Concern: The code imports and instantiates an Anthropic client (Anthropic()), which typically requires an ANTHROPIC_API_KEY environment variable or similar credential, but the skill declares no required environment variables or primary credential. The connection helpers accept environment dicts, and the evaluation harness will contact external endpoints. Requiring an LLM API key (and possibly other service credentials for target MCP servers) is proportionate for an evaluation harness, but it is not declared in the metadata: a transparency gap and a risk of surprise credential usage.
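Because the credential is undeclared, a run will otherwise fail mid-flight when the client first calls out. A fail-fast startup check makes the requirement explicit; a minimal stdlib sketch (the helper name is ours, not the package's):

```python
import os

def require_env(name: str) -> str:
    """Return the named credential, or fail early with an actionable message."""
    value = os.environ.get(name, "").strip()
    if not value:
        raise RuntimeError(
            f"{name} is not set; the evaluation harness needs it to call the LLM API"
        )
    return value

# Usage, before constructing the Anthropic client:
#   api_key = require_env("ANTHROPIC_API_KEY")
```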
Persistence & Privilege
OK: always:false, and no persistent installation steps are declared. The skill does not request permanent inclusion or attempt to modify other skills or system-wide agent settings. However, because the evaluation harness can be invoked autonomously and calls external services, that autonomy combined with the concerns above increases the blast radius; worth mentioning, but not a configuration error by itself.