URL Fetcher
Audited by ClawScan on May 10, 2026.
Overview
This skill mostly does what it says, but its safety checks are weaker than advertised and could let an agent fetch internal network pages or overwrite broad home-directory files.
Only use this skill with trusted public URLs, and prefer saving output inside a dedicated workspace path. Do not rely on its current internal-network blocking for SSRF protection, and avoid letting the agent choose arbitrary home-directory output files without review.
Findings (3)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
An agent or task using this tool could fetch private network or cloud metadata URLs despite the skill saying internal URLs are blocked.
The URL filter only blocks a few exact hostnames/IPs before calling urlopen. It performs no private-range, link-local, cloud-metadata, DNS-resolution, or redirect-target validation, so the advertised internal-network boundary is not fully enforced.
blocked_hosts = ['localhost', '127.0.0.1', '::1', '0.0.0.0', '0.0.0.0']
if parsed.hostname in blocked_hosts or parsed.hostname == '0.0.0.0':
return {"error": "Internal/localhost URLs blocked"}
...
response = urlopen(req, timeout=timeout)
Resolve and validate host IPs, reject loopback/private/link-local/reserved/metadata ranges, validate again after redirects, and consider requiring explicit user approval for non-public destinations.
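A minimal sketch of the recommended resolution check, using the standard-library ipaddress and socket modules. The helper name is_public_destination is hypothetical and is not part of the skill's source:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_public_destination(url: str) -> bool:
    """Resolve the URL's host and reject any non-public address.

    Hypothetical helper illustrating the recommended check; not part
    of the skill's current implementation.
    """
    hostname = urlparse(url).hostname
    if not hostname:
        return False
    try:
        # Resolve every address the name maps to, not just the first.
        infos = socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        # is_global is False for loopback, private, link-local
        # (including the 169.254.169.254 metadata address), and
        # reserved ranges, so one test covers all of them.
        if not ip.is_global:
            return False
    return True
```

Because urlopen follows redirects automatically, a check like this only holds if it is re-applied to each redirect target (for example via a custom HTTPRedirectHandler); validating the initial URL alone is not enough. For instance, `is_public_destination("http://169.254.169.254/")` returns False because the metadata address is link-local.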
If the agent chooses a bad output path, fetched web content could overwrite important home-directory, configuration, or OpenClaw-related files.
The output path policy allows writes anywhere under the user's home directory and /tmp, then writes content without an overwrite prompt. The blocked-path list only excludes a few substrings.
SAFE_PATHS = [
Path.home() / ".openclaw" / "workspace",
Path.home(),
Path("/tmp")
]
...
Path(output_file).write_text(content)
Default writes to a dedicated workspace, deny hidden/config directories, use robust path containment checks, and require confirmation before overwriting existing files or writing outside the workspace.
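A minimal sketch of the recommended containment policy, assuming the dedicated workspace is the ~/.openclaw/workspace path already listed in SAFE_PATHS. The helper safe_output_path is hypothetical and not part of the skill's source:

```python
from pathlib import Path

# Assumed workspace root, taken from the skill's SAFE_PATHS list.
WORKSPACE = Path.home() / ".openclaw" / "workspace"

def safe_output_path(output_file: str) -> Path:
    """Resolve the requested path, require it to stay inside the
    workspace, and refuse to clobber existing files.

    Hypothetical helper sketching the recommended policy.
    """
    target = Path(output_file).expanduser().resolve()
    # resolve() follows symlinks and collapses "..", so paths like
    # workspace/../.bashrc or a symlinked subdirectory cannot escape
    # the containment check below.
    if not target.is_relative_to(WORKSPACE.resolve()):
        raise ValueError(f"refusing to write outside workspace: {target}")
    if target.exists():
        raise FileExistsError(f"refusing to overwrite existing file: {target}")
    return target
```

Path.is_relative_to requires Python 3.9+; on older versions the same check can be written with target.relative_to() in a try/except. Resolving both sides before comparing is what makes the check robust, since substring tests on raw paths are defeated by "..", symlinks, and prefix collisions such as /tmp vs /tmpfoo.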
A user may rely on the stated protections and allow the skill to handle URLs or output paths that are riskier than the implementation actually contains.
The documentation makes strong safety claims, but the source only blocks a small exact-host list and allows broad home-directory writes, so users may overestimate the protection.
URL validation - Blocks localhost/internal networks ... Path validation - Safe file writes only (workspace, home, /tmp) ... Security: File writes use is_safe_path() to prevent malicious writes.
Either narrow the claims to match the implementation or strengthen the implementation so the documented protections are accurate.
