Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Web Summarizer
v1.0.0
Fetch and summarize web pages for AI agents. Extract key information from URLs and return structured markdown summaries. No API key required.
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious (medium confidence)
Purpose & Capability
The name and description match the delivered artifact: a small bash script that uses curl and python3 to fetch a page and produce an extractive markdown-style summary. However, the SKILL.md claims the skill "respects robots.txt" and mentions a `web_fetch` tool; the included script does not check robots.txt and only uses curl. This is a minor mismatch between the claimed features and the actual implementation.
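For context, an "extractive summary" of the kind described here can be as simple as scoring sentences by word frequency and keeping the top few. A rough sketch of the general technique (illustrative only, not the skill's actual code):

```python
import re
from collections import Counter

def extractive_summary(text: str, n_sentences: int = 3) -> str:
    """Score each sentence by the frequency of its words across the
    whole text and return the top-scoring sentences in original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)
    # Rank sentence indices by total word-frequency score, highest first.
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: -sum(freq[w] for w in re.findall(r"[a-z']+", sentences[i].lower())),
    )
    keep = sorted(ranked[:n_sentences])  # restore document order
    return " ".join(sentences[i] for i in keep)
```

Anything along these lines is read-only text processing; the security questions below come from the fetching side, not the summarizing side.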
Instruction Scope
The runtime instructions are to run scripts/summarize.sh on arbitrary URLs. The script fetches remote content and prints summaries only (no external exfiltration endpoints). However, it makes no attempt to enforce robots.txt, domain allowlists, or blocks on internal addresses (e.g., 169.254.169.254), so an agent invoking it on untrusted input could be abused to probe internal services (SSRF against cloud metadata endpoints or intranet hosts). URL validation is minimal: the script only checks for the substring "http".
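A stricter guard than a substring match would resolve the target host and refuse private, loopback, and link-local addresses before fetching. A minimal sketch in Python, standard library only (the function name and structure are illustrative, not part of the shipped script):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs whose scheme is not http(s) or whose host resolves
    to a private, loopback, link-local, or reserved address (basic SSRF guard)."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        # Resolve every address the hostname maps to, not just the first.
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
            return False
    return True
```

Note that a check like this is still subject to DNS rebinding if resolution happens again at fetch time; pinning the resolved address for the actual request closes that gap.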
Install Mechanism
No install spec; the skill is instruction-only plus a small script. Nothing is downloaded or written to disk at install time beyond the included script, which is low-risk.
Credentials
No environment variables, credentials, or config paths are requested. The script only needs curl and python3, which is proportionate to its purpose.
Persistence & Privilege
`always: false`, and nothing writes system-level settings. Autonomous invocation (allowed by default) combined with the ability to fetch arbitrary URLs increases the blast radius: an agent could fetch internal-only URLs if it runs this skill without additional controls.
What to consider before installing
This skill is mostly coherent with its description: it fetches pages with curl and does a simple extractive summary with Python. Still, there are two practical concerns to weigh before installing:
1) The SKILL.md claims the tool "respects robots.txt", but the shipped script does not check robots.txt. If respecting robots.txt matters to you, request an implementation change or patch the script yourself.
2) The script will fetch any URL it is given and performs minimal validation (it only checks for the substring "http"). If the agent may invoke the skill autonomously, a malicious prompt or compromised agent could cause it to fetch internal or sensitive endpoints (SSRF against cloud metadata endpoints or intranet services). Mitigations: require user confirmation before fetching arbitrary URLs; add a domain allowlist or explicit blocklist (e.g., 169.254.169.254 and private IP ranges); implement proper URL validation; and add a robots.txt check if you intend to honor it. Also consider running the skill in a network-restricted environment, or disabling autonomous invocation until safeguards are in place.
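For the robots.txt gap, Python's standard library already handles the parsing. A hedged sketch that takes the robots.txt body as input so it stays testable offline (the helper name is mine, not from the skill; the caller would fetch `<scheme>://<host>/robots.txt` first):

```python
from urllib.robotparser import RobotFileParser

def allowed_by_robots(robots_txt: str, url: str, user_agent: str = "WebSummarizer") -> bool:
    """Given the text of a site's robots.txt, decide whether this
    user agent may fetch the URL. An empty robots.txt allows everything."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, url)
```

A fetch of robots.txt that fails with a 4xx is conventionally treated as "allow all", while network errors or 5xx responses are safer treated as "deny", so the error-handling policy deserves an explicit decision rather than a default.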
