MCP Builder test

Pass. Audited by ClawScan on May 1, 2026.

Overview

This is a coherent MCP-building guide, with helper evaluation scripts that are purpose-aligned but should be run only against trusted or read-only MCP servers.

This skill appears safe to use as a guide. If you run the included evaluation scripts, use trusted MCP servers, sandboxed or read-only credentials, and non-sensitive test data. Be aware that tool outputs are sent to Claude and may appear in evaluation reports.

Findings (4)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Automatic execution of model-requested tool calls

What this means

If the evaluated MCP server exposes write or destructive tools, the evaluation model could call them during a test.

Why it was flagged

The evaluation harness passes all listed MCP tools to the model and automatically executes model-requested tool calls. This is expected for an MCP evaluation harness, but it relies on the user running read-only evaluations or using safe credentials.

Skill content
tools = await connection.list_tools()
...
while response.stop_reason == "tool_use":
    ...
    tool_result = await connection.call_tool(tool_name, tool_input)
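The behavior flagged here can be illustrated with a minimal, dependency-free sketch of the harness loop. The stub classes below stand in for the real Anthropic client and MCP connection (which are async and network-backed); only the control flow is shown, and all names in the stubs are illustrative.

```python
class StubConnection:
    """Stands in for an MCP connection; real code awaits list_tools/call_tool."""
    def list_tools(self):
        return [{"name": "read_file", "input_schema": {"type": "object"}}]

    def call_tool(self, name, args):
        # In the real harness this executes whatever tool the model asked for.
        return f"result of {name}({args})"

class StubModel:
    """Stands in for the Anthropic client: one tool call, then a final answer."""
    def __init__(self):
        self.turn = 0

    def create(self, messages, tools):
        self.turn += 1
        if self.turn == 1:
            return {"stop_reason": "tool_use",
                    "tool_name": "read_file", "tool_input": {"path": "a.txt"}}
        return {"stop_reason": "end_turn", "text": "done"}

def run_eval(model, connection, question):
    tools = connection.list_tools()              # every exposed tool is offered
    messages = [{"role": "user", "content": question}]
    response = model.create(messages, tools)
    while response["stop_reason"] == "tool_use":
        # The harness executes model-requested calls without user confirmation;
        # this is exactly the behavior the finding warns about.
        result = connection.call_tool(response["tool_name"],
                                      response["tool_input"])
        messages.append({"role": "user", "content": result})
        response = model.create(messages, tools)
    return response["text"]

print(run_eval(StubModel(), StubConnection(), "What is in a.txt?"))
```

Because the loop continues as long as the model keeps asking for tools, any write or destructive tool in the offered list is reachable during an evaluation run.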
Recommendation

Run evaluations against sandbox servers or read-only credentials, review the evaluation questions, and disable or filter write/destructive tools where possible.
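One way to act on the "disable or filter" part of this recommendation is to prune the tool list before it is handed to the model. The denylist prefixes and tool names below are assumptions for illustration, not part of the skill.

```python
# Hypothetical denylist of name prefixes that suggest write/destructive tools.
DESTRUCTIVE_PREFIXES = ("delete_", "write_", "update_", "drop_")

def filter_tools(tools):
    """Keep only tools whose names do not look write/destructive."""
    return [t for t in tools
            if not t["name"].startswith(DESTRUCTIVE_PREFIXES)]

tools = [{"name": "list_issues"}, {"name": "delete_repo"}, {"name": "read_file"}]
safe = filter_tools(tools)
print([t["name"] for t in safe])   # ['list_issues', 'read_file']
```

Name-based filtering is a heuristic; reviewing each server's actual tool descriptions is still necessary.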

Finding 2: Launching user-supplied local commands (stdio servers)

What this means

Running the helper with an untrusted command could execute unwanted local code.

Why it was flagged

For stdio MCP servers, the helper can launch a user-supplied local command. That is normal for local MCP evaluation, but it means the command should be trusted.

Skill content
return stdio_client(StdioServerParameters(command=self.command, args=self.args, env=self.env))
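A simple mitigation, not present in the skill itself, is to gate the launch behind an explicit allowlist. The allowlist contents below are an assumption; substitute the launcher commands you have actually vetted.

```python
# Illustrative guard applied before a command reaches the stdio launcher;
# the real helper passes self.command straight into StdioServerParameters.
TRUSTED_COMMANDS = {"python", "node", "uvx"}   # assumption: your vetted launchers

def check_stdio_command(command: str) -> str:
    """Refuse to launch anything not on the explicit allowlist."""
    if command not in TRUSTED_COMMANDS:
        raise ValueError(f"refusing to launch untrusted command: {command!r}")
    return command

print(check_stdio_command("python"))   # python
```

This does not validate the server's arguments or behavior; it only blocks obviously unvetted executables.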
Recommendation

Only pass commands for MCP servers you trust, and prefer a sandbox or test environment when evaluating new servers.

Finding 3: Credential pass-through via environment variables and headers

What this means

Over-scoped tokens or headers could give the evaluated MCP server more account access than intended.

Why it was flagged

The connection helpers can pass environment variables to stdio servers and HTTP headers to remote servers, which is a normal way to authenticate MCP integrations but can delegate account access.

Skill content
def __init__(self, command: str, args: list[str] = None, env: dict[str, str] = None)
...
def __init__(self, url: str, headers: dict[str, str] = None)
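One least-privilege pattern is to forward only the variables the server actually needs, rather than the whole parent environment. The variable names below are illustrative assumptions.

```python
import os

def minimal_env(allowed):
    """Forward only explicitly allowed variables to the child MCP server."""
    return {k: os.environ[k] for k in allowed if k in os.environ}

# e.g. a scoped sandbox token set for this demo; not a real credential
os.environ["CLAWSCAN_DEMO_TOKEN"] = "sandbox-123"
env = minimal_env(["CLAWSCAN_DEMO_TOKEN", "CLAWSCAN_PROD_SECRET"])
print(env)   # only the variable that exists is forwarded
```

The same principle applies to HTTP headers for remote servers: construct the header dict explicitly from a scoped token rather than reusing a broad production credential.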
Recommendation

Use least-privilege, temporary, or test credentials and avoid passing broad production tokens unless necessary.

Finding 4: Tool outputs sent to the model provider

What this means

Private data returned by MCP tools may be sent to Anthropic and summarized in the generated evaluation report.

Why it was flagged

Tool results from the evaluated MCP server are sent back into the Claude conversation for evaluation. This is central to the harness, but it can move service data across a model-provider boundary.

Skill content
messages.append({"role": "user", "content": [{"type": "tool_result", "tool_use_id": tool_use.id, "content": tool_response}]})
...
client.messages.create(... messages=messages, tools=tools)
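The data flow can be sketched in plain Python, with an illustrative redaction step inserted before the tool result enters the transcript. The redaction function and its regex are assumptions added for this sketch; the skill itself appends the raw tool response.

```python
import re

def redact(text: str) -> str:
    """Mask things that look like email addresses before they cross the
    provider boundary; a real deployment would redact more than this."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[email]", text)

def tool_result_message(tool_use_id: str, tool_response: str) -> dict:
    """Build the message the harness appends after executing a tool call."""
    return {"role": "user",
            "content": [{"type": "tool_result",
                         "tool_use_id": tool_use_id,
                         "content": redact(tool_response)}]}

msg = tool_result_message("toolu_01", "owner: alice@example.com")
print(msg["content"][0]["content"])   # owner: [email]
```

Redaction reduces but does not eliminate exposure; using non-sensitive test data, as the recommendation says, remains the primary control.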
Recommendation

Use non-sensitive test data where possible, confirm provider data-handling requirements, and review reports before sharing them.