Fetch

Pass. Audited by ClawScan on May 10, 2026.

Overview

Fetch is a coherent public-URL retrieval skill that stores results locally, with minor cautions around URL scope, local persistence, and large downloads.

This skill appears safe for its stated purpose of fetching public web pages. Use it only with public URLs, avoid very large downloads, and remember that fetched content is saved locally under the OpenClaw workspace memory path.

Findings (4)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static-scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

You have less provenance context than with a skill linked to a known source repository or homepage.

Why it was flagged

The skill has limited external provenance metadata. This is mitigated by the provided full source and absence of external dependencies, but users cannot verify it against a public upstream project from these artifacts.

Skill content
Source: unknown
Homepage: none
Recommendation

Review the included scripts before use and prefer registry or publisher trust signals when available.

What this means

If given a localhost, private-network, or otherwise non-public HTTP(S) URL, the skill may fetch and store content that was outside the intended public-web scope.

Why it was flagged

The code restricts URLs to HTTP(S), but it does not verify that the target host is actually public. The documented public-only boundary therefore depends on the agent or user selecting appropriate URLs.

Skill content
if parsed.scheme not in ("http", "https"):
    print("Only http/https URLs are allowed.")
    sys.exit(1)
...
with urlopen(req, timeout=20) as resp:
Recommendation

Use only public URLs. A stronger implementation should block localhost, private/link-local IP ranges, and re-check redirect targets.
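One way to enforce that boundary in code, rather than by convention, is to resolve the hostname and reject any non-global address before fetching. The sketch below is a hypothetical hardening helper (the function name and structure are not part of the skill); it is a best-effort check under the assumption that DNS resolution is trustworthy, and a hardened fetcher would also re-run it on every redirect target.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_public_http_url(url: str) -> bool:
    """Best-effort check that a URL points at a public HTTP(S) host.

    Resolves the hostname and rejects loopback, private, link-local,
    and other non-global addresses (via ipaddress.is_global).
    """
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        # Unresolvable hosts are treated as out of scope.
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if not addr.is_global:
            return False
    return True
```

Note that this check is still subject to DNS-rebinding races; a stricter implementation would connect to the vetted IP directly rather than re-resolving the hostname at fetch time.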

Note (High Confidence)
ASI08: Cascading Failures
What this means

A very large response could consume memory or disk space in the local OpenClaw workspace.

Why it was flagged

The script reads the entire response and writes it locally without an explicit response-size limit.

Skill content
raw_bytes = resp.read()
...
with open(raw_path, "w", encoding="utf-8") as f:
    f.write(raw_html)
Recommendation

Fetch reasonably sized pages and consider adding maximum byte limits and content-type checks.

What this means

Untrusted web content may remain in local history after the task and could be surfaced in later work if the user or agent reopens it.

Why it was flagged

Fetched web content and job metadata are intentionally persisted in an OpenClaw memory path. This matches the stated purpose, but persisted web text can later be re-read or reused.

Skill content
Save both the raw response and structured extraction locally.
...
All data is stored under:
- `~/.openclaw/workspace/memory/fetch/jobs.json`
- `~/.openclaw/workspace/memory/fetch/pages/`
Recommendation

Treat fetched content as untrusted, review it before relying on it, and clear the local fetch history if persistence is not desired.