Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

web info skill

v1.0.0

Extract and display useful information from web pages including title, meta description, headers, and links.

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (high confidence)
Purpose & Capability
Name, description, and required binary (curl) align with the bundled bash script: a lightweight HTML extractor that pulls title, headers, links, images, and stats.
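As a rough illustration of that capability, title extraction from fetched HTML can be done with a short grep/sed pipeline like the one below. The function name and patterns are assumptions for illustration, not the actual contents of web-info.sh:

```shell
#!/usr/bin/env bash
# Illustrative sketch; web-info.sh's actual flags and patterns may differ.
extract_title() {
  # Pull the first <title> element's text from an HTML string.
  printf '%s\n' "$1" \
    | grep -o -i '<title>[^<]*</title>' \
    | head -n1 \
    | sed -E 's/<[^>]+>//g'
}
```

A caller would feed it the output of `curl -fsSL "$url"` and apply similar patterns for meta descriptions, headers, and links.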
Instruction Scope
SKILL.md claims 'Follows robots.txt directives' and 'Only fetches publicly accessible pages', but web-info.sh performs a straight curl on any http(s) URL provided and contains no robots.txt checks or host access restrictions. That mismatch could allow fetching internal or non-public endpoints (SSRF-like risks).
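One way to close the robots.txt gap, as a minimal sketch: parse the Disallow rules from a fetched robots.txt body before requesting a path. The function name and the simplified parsing (it ignores User-agent groups and wildcard syntax) are illustrative assumptions, not part of web-info.sh:

```shell
#!/usr/bin/env bash
# Illustrative sketch only; not part of web-info.sh.
# Succeeds (returns 0) if $2 (a URL path) matches any Disallow prefix
# found in $1 (the body of a robots.txt file).
# Simplified: ignores User-agent groups and wildcard syntax.
path_disallowed() {
  local robots="$1" path="$2" rule
  while read -r rule; do
    case "$path" in "$rule"*) return 0 ;; esac
  done < <(printf '%s\n' "$robots" \
             | awk 'tolower($1)=="disallow:" && $2 != "" {print $2}')
  return 1
}
```

A compliant fetcher would download http://host/robots.txt with curl first and skip the request whenever path_disallowed succeeds.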
Install Mechanism
Instruction-only with a small bash script; no install spec or remote downloads. The script runs in place and writes nothing to disk, so install risk is low.
Credentials
No environment variables, credentials, or config paths are requested. The requested surface (curl only) is proportionate to the stated function.
Persistence & Privilege
Skill is not always-on and is user-invocable; it does not request elevated privileges or modify other skills or system-wide configs.
What to consider before installing
The code appears to do what the README and description say, but the documentation overstates its safety guarantees. Before installing or enabling:

1) The script does not honor robots.txt or restrict hosts: it will curl any http(s) URL you pass, including internal addresses like 127.0.0.1 or intranet hosts, which can be abused for SSRF or to reach non-public resources.
2) Review and run the script in a sandboxed environment, or with network egress restrictions, if you want to limit exposure.
3) If you need robots.txt compliance or host allowlists, add explicit checks (fetch and parse robots.txt, validate hostnames and IP ranges) or reject non-public hosts.
4) Be aware that output may include sensitive content from fetched pages.
5) For stronger guarantees, ask the publisher to remove the misleading privacy/security claims or to implement robots.txt and host restrictions.
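For the host-restriction point, a minimal IPv4 guard might look like the following. The function name and hard-coded ranges are assumptions for illustration; a real guard should also cover IPv6, shared address space, and DNS-rebinding concerns:

```shell
#!/usr/bin/env bash
# Illustrative sketch only; not part of web-info.sh.
# Succeeds (returns 0) if the given IPv4 address falls in loopback,
# link-local, or RFC 1918 private space.
is_private_ip() {
  case "$1" in
    10.*|127.*|169.254.*|192.168.*) return 0 ;;
    172.1[6-9].*|172.2[0-9].*|172.3[0-1].*) return 0 ;;
    *) return 1 ;;
  esac
}
```

A wrapper around the fetch would resolve the URL's hostname first (e.g. with `getent ahostsv4`) and refuse to curl when is_private_ip matches any resolved address.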

Like a lobster shell, security has layers — review code before you run it.

latest: vk977486fck4d6b24pj3sc3ysmn8439t9


Runtime requirements

🌐 Clawdis
Bins: curl
