WebClip: Save & Summarize Web Pages
v1.0.0 · Fetch web pages, strip them to clean readable text, and summarize them into agent-ready markdown. A research-assistant foundation. No browser required.
by Shadow Rose (@theshadowrose)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan (OpenClaw)
Verdict: Benign (high confidence)

Purpose & Capability
Overall, the code matches the described purpose (fetch, clean, convert, batch, save). One minor mismatch: SKILL.md and the README advertise a caching feature ("Caching — don't re-fetch pages you've already clipped"), but the implementation only creates a cache directory and a save() method without adding a cache lookup to fetch(), so caching is never actually performed before a download.
Instruction Scope
Runtime instructions do exactly what is expected: fetch arbitrary URLs, strip HTML, produce markdown, and optionally save files locally. The code explicitly blocks internal/metadata IP address ranges and limits response size and redirects. It writes files to a local cacheDir (default './web-cache'), which is expected behavior for an offline archive feature.
Install Mechanism
There is no install spec, and the code uses only Node built-ins (https/http/fs/path). No remote downloads or third-party packages are introduced, so installation risk is low.
Credentials
No environment variables, credentials, or external service tokens are requested. The skill's filesystem writes (cache/archive) are proportionate to its stated functionality.
Persistence & Privilege
always:false and the skill does not request persistent platform privileges or modify other skills. It can be invoked autonomously (default), which is normal — no additional privileged behavior observed.
Assessment
This skill appears coherent and does what it claims: fetch pages, remove junk, produce markdown, and save locally. Before installing or enabling it:
1. Review the code and, if needed, run it in a sandboxed environment, since it performs network fetches and writes files locally.
2. Note that the advertised caching behavior isn't implemented (fetch() always downloads); if you rely on caching, modify the code to check cacheDir first.
3. save(filename) accepts a caller-supplied filename; consider restricting or sanitizing filenames to avoid path traversal (the code sanitizes generated slugs but will join any provided filename to cacheDir).
4. The fetcher blocks many internal IP ranges, limits redirects, and caps response size, which reduces SSRF/internal-network risk, but you should still not expose this skill to untrusted agents or inputs.
If you need stronger guarantees, run it in an isolated container, set cacheDir to a safe path, and add explicit filename validation and a real cache lookup.
Like a lobster shell, security has layers: review code before you run it.
Tags: bookmarks, clipping, latest, research, saving, web, web-clip
