Install
openclaw skills install shikamaru-web-fetch

Fetch a known URL and return the page as markdown, plain text, raw HTML, or a downloaded image file using the local fetch.js helper. Use this whenever the user gives or implies a specific URL and wants you to retrieve, inspect, quote, summarize, convert, or save that page or asset. Prefer this skill over web-search when discovery is not needed, including for docs pages, blog posts, raw HTML inspection, and direct image downloads.

Use this skill when the target URL is already known and the job is retrieval, not search.
Common cases:
- Retrieving a docs page or blog post to quote or summarize
- Inspecting the raw HTML of a known page
- Downloading an image from a direct URL
If the user needs help finding the right page first, use web-search before this skill.
Run:
node ./fetch.js --url "https://example.com"
Optional flags:
--format markdown|text|html
--timeout <seconds>
--output <path> (for image responses)

Default to markdown unless the user clearly wants something else.
- markdown: best for readable docs, articles, and summarization
- text: best when the user wants the cleanest plain-text extraction
- html: best when inspecting source markup, metadata, links, embeds, or page structure
- --output <path>: use when the response is an image and you want a stable saved file path instead of a temp file

Examples:
node ./fetch.js --url "https://example.com/docs" --format markdown
node ./fetch.js --url "https://example.com/page" --format text --timeout 20
node ./fetch.js --url "https://example.com/page" --format html
node ./fetch.js --url "https://example.com/logo.png" --output /tmp/logo.png
The CLI prints a <web_fetch> block.
For text-like responses it includes:
- title
- url
- mime
- format
- content

For image responses it includes:
- title
- url
- mime
- image
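The exact layout of the <web_fetch> block is not specified here. Assuming the fields appear as simple <name>…</name> tags inside the block (an assumption, not a documented format), a minimal extraction helper might look like this sketch:

```javascript
// Hypothetical sketch: pulling one field out of a <web_fetch> block.
// Assumes fields are plain <name>...</name> tags; adjust if the real
// CLI output differs.
function extractField(block, field) {
  const match = block.match(new RegExp(`<${field}>([\\s\\S]*?)</${field}>`));
  return match ? match[1].trim() : null;
}

// Made-up sample output for illustration only.
const sample = [
  "<web_fetch>",
  "<title>Example Domain</title>",
  "<url>https://example.com</url>",
  "<mime>text/html</mime>",
  "<format>markdown</format>",
  "<content># Example Domain</content>",
  "</web_fetch>",
].join("\n");

console.log(extractField(sample, "title")); // "Example Domain"
```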
Keep these in mind while using the tool:
- http:// URLs are tried as https:// first, then retried as plain HTTP if needed.
- If the fetch fails, check that the URL starts with http:// or https://.
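The https-first retry behavior can be sketched as follows. This is an illustrative outline, not the actual fetch.js implementation, and `fetchWithHttpsFirst` is a hypothetical name:

```javascript
// Sketch of the documented behavior: upgrade http:// to https://,
// and only fall back to plain HTTP if the https request fails.
async function fetchWithHttpsFirst(url, fetchFn = fetch) {
  if (url.startsWith("http://")) {
    const upgraded = "https://" + url.slice("http://".length);
    try {
      return await fetchFn(upgraded);
    } catch {
      return await fetchFn(url); // fall back to plain HTTP
    }
  }
  return await fetchFn(url);
}
```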