Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
MCP Builder test
v0.1.0

Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use when building MCP servers to integrate external APIs or services, whether in Python (FastMCP) or Node/TypeScript (MCP SDK).
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious (high confidence)

Purpose & Capability
The name/description claim this is a guide for building MCP servers — the included reference docs and code align with that. However, the shipped scripts implement an evaluation harness that calls an external LLM (Anthropic) and requires the 'mcp' client libraries. The skill metadata declares no required env vars, binaries, or install steps despite code that needs external Python packages and an LLM API key. Requiring an LLM client and MCP runtime libraries is plausible for an evaluation tool, but the manifest/README do not declare these needs (mismatch between claimed purpose and undeclared runtime requirements).
Instruction Scope
SKILL.md and the reference docs focus on building MCP servers (fine), but scripts/evaluation.py forwards tool usage, tool inputs, and tool outputs to the Anthropic API as part of the evaluation prompt (EVALUATION_PROMPT explicitly asks for tool inputs/outputs and summaries). Potentially sensitive data returned by the MCP server (tool results) would therefore be transmitted to an external LLM provider during evaluation, and SKILL.md does not warn that evaluation runs send this data off-host. The instructions also direct use of WebFetch against remote docs and raw GitHub content, which is reasonable but implies outbound network access.
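If you adapt the harness, one partial mitigation is to scrub likely secrets from tool outputs before they reach the external evaluator. A minimal sketch, assuming you control the point where results are serialized; the `redact` helper and its pattern list are hypothetical, not part of the shipped scripts:

```python
import re

# Illustrative patterns that often indicate secrets in tool output.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*\S+"),
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # common API-key prefix shape
]

def redact(text: str) -> str:
    """Replace likely secrets in tool output before it is sent off-host."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Pattern-based redaction is best-effort only; it reduces accidental leakage but is no substitute for running the harness against non-sensitive data.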
Install Mechanism
The skill has no install spec, yet the repository contains Python scripts and a scripts/requirements.txt implying dependencies (mcp client libraries, anthropic, httpx, etc.). Without an install mechanism, an agent or user must install dependencies manually: a mismatch between the deliverables (runnable code) and the declared install footprint (none). The lack of declared install steps increases the chance that the code will fail, or that a user will install packages ad hoc from PyPI without guidance.
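Given the missing install spec, dependencies are best installed into an isolated virtualenv rather than the system interpreter. A sketch that only builds the pip command for a venv rooted at `repo/.venv`; the `.venv` location is an assumption, and the requirements path follows the scripts/requirements.txt mentioned above:

```python
import sys
from pathlib import Path

def pip_install_cmd(repo: Path) -> list[str]:
    """Build the pip command that installs the skill's declared deps
    into a virtualenv at repo/.venv instead of the system Python."""
    bin_dir = "Scripts" if sys.platform == "win32" else "bin"
    pip = repo / ".venv" / bin_dir / "pip"
    return [str(pip), "install", "-r", str(repo / "scripts" / "requirements.txt")]
```

Create the venv first (e.g. with the stdlib `venv` module), then run the returned command with `subprocess.run(..., check=True)` so a failed install surfaces immediately.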
Credentials
The code imports and instantiates an Anthropic client (Anthropic()) which typically requires an ANTHROPIC_API_KEY environment variable or similar credential, but the skill declares no required environment variables or primary credential. The connection helpers accept environment dicts and the evaluation harness will contact external endpoints. Requiring an LLM API key (and possibly other service credentials for target MCP servers) is proportionate to running an evaluation harness, but it is not declared in the metadata — a transparency gap and a risk of surprise credential usage.
Persistence & Privilege
always:false is set and no persistent installation steps are declared. The skill does not request permanent inclusion or attempt to modify other skills or system-wide agent settings. However, because the evaluation harness can be invoked autonomously and will call external services, that autonomous capability combined with the concerns above increases the blast radius. This is worth noting, though it is not a configuration error by itself.
What to consider before installing
This package contains helpful MCP server docs but also runnable evaluation scripts that will call an external LLM (Anthropic) and require Python MCP client libraries. Before installing or running:

1. Review scripts/evaluation.py and scripts/connections.py to understand what data will be sent to external services. The evaluation deliberately forwards tool inputs/outputs to Anthropic, so any data returned by your MCP server may be transmitted off-host.
2. Install dependencies in a controlled environment (the repo includes scripts/requirements.txt but no install spec).
3. Expect to provide an Anthropic API key (e.g., ANTHROPIC_API_KEY) and possibly other credentials when running evaluations; these are not declared in the skill metadata.
4. If you plan to evaluate sensitive systems, run the harness in an isolated environment or modify it to use a local LLM / disable external calls.
5. If you want to proceed, add explicit install and environment variable documentation (or patch the SKILL.md) and audit the code for any endpoints/telemetry you don't want to expose.

Like a lobster shell, security has layers — review code before you run it.
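For a dry run without outbound traffic, one coarse option is to guard socket connections in-process so any unexpected off-host call raises instead of silently sending data. A sketch only, assuming the harness runs in the same interpreter; this is not a substitute for a real sandbox or network policy:

```python
import socket

def block_outbound(allowed_hosts: frozenset = frozenset()) -> None:
    """Monkeypatch socket.connect so connections to hosts outside the
    allowlist (plus loopback) raise PermissionError before any data leaves."""
    real_connect = socket.socket.connect

    def guarded(self, address):
        host = address[0] if isinstance(address, tuple) else address
        if host not in allowed_hosts and host not in ("127.0.0.1", "localhost"):
            raise PermissionError(f"outbound connection to {host!r} blocked")
        return real_connect(self, address)

    socket.socket.connect = guarded
```

An OS-level approach (container with no network, or firewall rules) is stronger; the in-process guard is merely a quick way to see which external endpoints the scripts attempt to reach.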
Latest version: vk9709nbnggckpxj7vabts2cg458110n0
