Emily Web Fetch
Pass. Audited by ClawScan on May 1, 2026.
Overview
This appears to be a simple webpage-fetching skill, with the expected risk that fetched web content is arbitrary and must be treated as untrusted.
The skill is reasonable for fetching public static webpages. Before installing, be aware that pages fetched by the agent are untrusted, and do not use it on private or internal URLs unless you want that content brought into the conversation.
Findings (3)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
A malicious or misleading webpage could include instructions that the agent might mistake for task guidance.
The skill intentionally returns webpage content for the agent to analyze. Webpages are untrusted and may contain text that attempts to influence the agent.
The assistant calls this tool to fetch webpage content, then analyzes, summarizes, or extracts information from it.
Treat fetched page content as untrusted data and use it only for the user-requested summary, extraction, or analysis.
If asked to fetch private, internal, or sensitive URLs, the agent may retrieve information the user did not intend to bring into the conversation.
The tool requests the caller-supplied URL without a host allowlist. This is expected for a general web-fetch tool, but it means the tool may contact any HTTP endpoint reachable from its runtime.
fetch: async (url) => { ... const protocol = isHttps ? https : http; ... protocol.get(url, options, (res) => {
Only fetch URLs you intend the agent to access, and avoid private/internal endpoints unless that is explicitly desired.
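If a deployment wants to restrict where the tool can connect, a host allowlist can be checked before the request is made. The following is a minimal sketch, not part of the skill itself; `ALLOWED_HOSTS` and `assertAllowed` are hypothetical names introduced for illustration.

```javascript
// Hypothetical pre-fetch check: reject URLs whose protocol or host
// is not explicitly allowed. Not part of the audited skill.
const ALLOWED_HOSTS = new Set(['example.com', 'docs.example.org']);

function assertAllowed(rawUrl) {
  const { protocol, hostname } = new URL(rawUrl);
  if (protocol !== 'https:' && protocol !== 'http:') {
    throw new Error(`unsupported protocol: ${protocol}`);
  }
  if (!ALLOWED_HOSTS.has(hostname)) {
    throw new Error(`host not in allowlist: ${hostname}`);
  }
}
```

A wrapper like this would be called with the caller-supplied URL before handing it to `protocol.get`, turning the open fetch into an opt-in one.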
Fetching an unexpectedly large page could waste resources even though the returned output is truncated.
The 5000-character limit is applied after the full response body is accumulated in memory, so very large responses can still consume bandwidth or memory before being truncated.
let data = ''; res.on('data', chunk => data += chunk); res.on('end', () => { if (data.length > 5000) { data = data.substring(0, 5000)
Prefer known webpage URLs and avoid using the tool for large downloads or untrusted file-like endpoints.
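The accumulation could instead stop as soon as the limit is reached, so oversized responses are cut off mid-stream rather than buffered in full. A minimal sketch, assuming the same 5000-character limit; `MAX_CHARS` and `collect` are illustrative names, not the skill's API.

```javascript
// Hypothetical streaming truncation: stop reading once the limit is hit.
// `res` is any readable stream with a destroy() method (e.g. an IncomingMessage).
const MAX_CHARS = 5000;

function collect(res) {
  return new Promise((resolve) => {
    let data = '';
    res.on('data', (chunk) => {
      data += chunk;
      if (data.length >= MAX_CHARS) {
        data = data.substring(0, MAX_CHARS);
        res.destroy(); // abort the download instead of buffering the rest
        resolve(data);
      }
    });
    res.on('end', () => resolve(data));
  });
}
```

This keeps memory bounded by roughly one chunk beyond the limit, at the cost of an aborted connection for large pages.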
