Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

web info skill

v1.0.0

Extract and display useful information from web pages including title, meta description, headers, and links.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for sangjie123/web-info-skill.

Prompt Preview: Install & Setup
Install the skill "web info skill" (sangjie123/web-info-skill) from ClawHub.
Skill page: https://clawhub.ai/sangjie123/web-info-skill
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: curl
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install web-info-skill

ClawHub CLI


npx clawhub@latest install web-info-skill
Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (high confidence)
Purpose & Capability
Name, description, and required binary (curl) align with the bundled bash script: a lightweight HTML extractor that pulls title, headers, links, images, and stats.
Instruction Scope
SKILL.md claims 'Follows robots.txt directives' and 'Only fetches publicly accessible pages', but web-info.sh performs a straight curl on any http(s) URL provided and contains no robots.txt checks or host access restrictions. That mismatch could allow fetching internal or non-public endpoints (SSRF-like risks).
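For context, a robots.txt check of the kind SKILL.md claims could look roughly like this. `robots_allows` is a hypothetical helper, not code from the skill; it honors only `User-agent: *` blocks and prefix `Disallow:` rules, far short of a spec-complete parser:

```shell
# Hypothetical minimal robots.txt check (User-agent: * blocks only).
# A real implementation should follow the full robots exclusion spec.
robots_allows() {  # usage: robots_allows <robots_txt_file> <url_path>
  awk -v path="$2" '
    tolower($1) == "user-agent:" { in_star = ($2 == "*") }
    in_star && tolower($1) == "disallow:" && $2 != "" {
      if (index(path, $2) == 1) denied = 1
    }
    END { if (denied) print "deny"; else print "allow" }
  ' "$1"
}
```

A compliant script would first fetch the site's /robots.txt with curl and refuse any path for which this prints `deny`.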
Install Mechanism
Instruction-only with a small bash script; no install spec or remote downloads. No files are written to disk beyond the script itself, so install risk is low.
Credentials
No environment variables, credentials, or config paths are requested. The requested surface (curl only) is proportionate to the stated function.
Persistence & Privilege
Skill is not always-on and is user-invocable; it does not request elevated privileges or modify other skills or system-wide configs.
What to consider before installing
The code appears to do what the README and description say, but the documentation overstates safety guarantees. Before installing or enabling:

1. Note that the script does not honor robots.txt or restrict hosts: it will curl any http(s) URL you pass, including internal addresses like 127.0.0.1 or intranet hosts, which can be abused for SSRF or to access non-public resources.
2. Review and run the script in a sandboxed environment, or with network egress restrictions, if you want to limit exposure.
3. If you need robots.txt compliance or host allowlists, add explicit checks (fetch and parse robots.txt; validate hostnames and IP ranges) or reject non-public hosts.
4. Be aware that output may include sensitive content from fetched pages.
5. If you want stronger guarantees, ask the publisher to remove the misleading privacy/security claims or to implement robots.txt and host restrictions.
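As an illustration of the host-validation advice above, a guard in the same bash style as the skill might look like this. `is_private_ip` and `safe_fetch` are hypothetical names; note that a real guard must also resolve DNS and check every returned address, or it can be bypassed by a hostname that resolves to a private IP:

```shell
# Hypothetical guard: refuse literal loopback/private addresses before curl.
# DNS-based bypasses (a hostname resolving to 10.x.x.x) are NOT covered here.
is_private_ip() {
  case "$1" in
    127.*|10.*|192.168.*|169.254.*|0.*)    return 0 ;;
    172.1[6-9].*|172.2[0-9].*|172.3[01].*) return 0 ;;
    *)                                     return 1 ;;
  esac
}

safe_fetch() {
  host=$(printf '%s' "$1" | sed -E 's#^https?://([^/:]+).*#\1#')
  if is_private_ip "$host"; then
    echo "refusing to fetch private address: $host" >&2
    return 1
  fi
  curl -fsSL --max-time 15 "$1"
}
```

With this in place, `safe_fetch http://127.0.0.1:8080/admin` fails before any network request is made.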

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Runtime: 🌐 Clawdis
Bins: curl
Latest: vk977486fck4d6b24pj3sc3ysmn8439t9
72 downloads · 0 stars · 1 version
Updated 3w ago · v1.0.0 · MIT-0

Web Info Extractor

A lightweight web scraping skill that extracts structured information from any webpage.

Features

  • Extract page title and meta description
  • List all headers (H1-H6)
  • Extract all links with their anchor text
  • Display images and their alt text
  • Show page word count
  • JSON output support for easy parsing
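For a sense of how such extraction can work, here is a minimal sketch using standard text tools. The function names are illustrative, not the skill's actual code, and regex-based HTML parsing is fragile; it only suits simple, well-formed pages:

```shell
# Illustrative sketch, not the skill's actual implementation.
extract_title() {   # usage: extract_title <html_file>
  tr -d '\n' < "$1" | sed -nE 's#.*<title[^>]*>([^<]*)</title>.*#\1#p'
}

extract_headers() { # usage: extract_headers <html_file> -> "H1: text" lines
  grep -oiE '<h[1-6][^>]*>[^<]*</h[1-6]>' "$1" |
    sed -E 's#<[Hh]([1-6])[^>]*>([^<]*)</[Hh][1-6]>#H\1: \2#'
}
```

Real-world HTML (nested tags, attributes containing `>`, entities) breaks patterns like these quickly, which is one reason to review the bundled script before trusting its output.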

Usage

# Basic usage
web-info https://example.com

# Get JSON output
web-info --json https://example.com

# Extract only links
web-info --links-only https://example.com

# Extract only headers
web-info --headers-only https://example.com
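A `--json` mode presumably serializes the same extracted fields. A minimal sketch of such serialization is below; the hand-rolled escaping is illustrative only (it covers backslashes and quotes, nothing else) and `jq` would be more robust:

```shell
# Naive JSON emitter for a --json mode (illustrative; escapes only \ and ").
json_escape() { printf '%s' "$1" | sed -e 's/\\/\\\\/g' -e 's/"/\\"/g'; }

emit_json() {   # usage: emit_json <title> <url> <word_count>
  printf '{"title":"%s","url":"%s","word_count":%s}\n' \
    "$(json_escape "$1")" "$(json_escape "$2")" "$3"
}
```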

Examples

Extract page info

web-info https://news.ycombinator.com

Get structured JSON data

web-info --json https://github.com > github-info.json

Find all links on a page

web-info --links-only https://example.com

Output Format

The skill provides clean, formatted output:

Title: Example Domain
Description: Example meta description
URL: https://example.com

Headers:
  H1: Example Domain
  H2: More information

Links:
  - Example Link (https://example.org)
  - Another Link (https://example.net)

Images:
  - logo.png (alt: "Company Logo")

Statistics:
  - Word count: 150
  - Links: 5
  - Images: 2
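The statistics above (word, link, and image counts) can be approximated with standard tools. This is an illustrative sketch, not the skill's code, and the counts are rough for real-world HTML:

```shell
# Rough page statistics (illustrative; counts are approximate).
page_stats() {  # usage: page_stats <html_file>
  words=$(sed -E 's#<[^>]*>##g' "$1" | wc -w | tr -d ' ')
  links=$(grep -oiE '<a[ >]' "$1" | wc -l | tr -d ' ')
  images=$(grep -oiE '<img[ >]' "$1" | wc -l | tr -d ' ')
  printf 'Word count: %s\nLinks: %s\nImages: %s\n' "$words" "$links" "$images"
}
```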

Requirements

  • curl (for fetching web pages)

Privacy & Security

  • Does not store any data
  • Only fetches publicly accessible pages
  • Follows robots.txt directives
  • No cookies or authentication stored

License

MIT-0 - Free to use, modify, and distribute
