Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Clean Content Fetch

v1.0.5

Fetches clean, readable main-body content from web pages; suited to scraping modern web pages, blogs, news, announcements, and WeChat Official Account articles. Supports main-text extraction, content cleaning, de-noising, and Markdown output, for cases where a plain fetch performs poorly, pages are noisy, or dynamic rendering interferes.

2 stars · 694 downloads · 6 current · 7 all-time
by 晨冬 @jllyzzd · duplicate of @jllyzzd2023/clean-web-fetch

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for jllyzzd/clean-content-fetch.

Prompt Preview: Install & Setup
Install the skill "Clean Content Fetch" (jllyzzd/clean-content-fetch) from ClawHub.
Skill page: https://clawhub.ai/jllyzzd/clean-content-fetch
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install jllyzzd/clean-content-fetch

ClawHub CLI


npx clawhub@latest install clean-content-fetch
Security Scan

VirusTotal
Benign

OpenClaw
Suspicious (high confidence)
Purpose & Capability
The name/description claim a content-extraction tool that runs a Python pipeline (scrapling + html2text + optional Playwright). That purpose would legitimately need the referenced scripts and possibly those dependencies. However, the package contains only reference docs and no scripts (e.g., scripts/scrapling_fetch.py is referenced in SKILL.md but not present). This mismatch means the skill as delivered cannot perform its stated function without fetching or relying on external code.
Instruction Scope
SKILL.md gives concrete runtime instructions (run python3 scripts/scrapling_fetch.py <url> <max_chars>, install packages, optionally use playwright) which are narrowly scoped to fetching and cleaning public webpages. Those instructions do not ask for unrelated system files or credentials. The problem is they direct execution of a script that is not included; if an agent attempted to follow them it would need to obtain or install code from elsewhere, which is not documented here and increases risk.
Install Mechanism
There is no install spec and no binaries packaged. That keeps the skill low-risk from an automatic-install perspective. The SKILL.md recommends pip installs and playwright browser installation — standard for this functionality — but these are manual recommendations, not an automated install step included in the package.
Credentials
The skill requests no environment variables, no credentials, and no config-path access. The declared dependencies (scrapling, html2text, curl_cffi, playwright, browserforge) align with web fetching and rendering. Nothing in the description asks for unrelated secrets or system access.
Persistence & Privilege
The skill is user-invocable, not always-on, and does not request to modify other skills or persist configuration. Autonomous invocation is allowed by default but is not combined with any other high-risk factor here.
What to consider before installing
This skill's README-like instructions expect a scripts/ directory and a script named scripts/scrapling_fetch.py, but those scripts are not included in the package. Before installing or running anything:

  1. Ask the publisher for the missing script files or an authoritative source (a git repo or release) and verify their contents.
  2. Never pip-install packages system-wide for unknown code; use an isolated virtual environment or container.
  3. Inspect any fetched scripts for network calls, hidden endpoints, or code that exfiltrates data before running them. Pay special attention to code that uses browser automation (Playwright), because it will load remote pages and may execute page JS.
  4. If you must run this, do so in a sandbox (container/VM) and avoid supplying credentials.
  5. If the runtime environment already provides the referenced scripts, review them the same way; the absence of included code is the primary incoherence and increases the risk of pulling code from unverified locations.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97334sfn5rv0xn1yx10y40y7s82kt92
694 downloads
2 stars
6 versions
Updated 6h ago
v1.0.5
MIT-0

Scrapling Web Fetch

Prefer this skill when the user wants to fetch webpage content, extract the main article text, convert a web page to markdown/text, or scrape an article body.

Default Flow

  1. Run python3 scripts/scrapling_fetch.py <url> <max_chars>
  2. Default main-content selector priority:
    • article
    • main
    • .post-content
    • [class*="body"]
  3. On a selector hit, convert to Markdown with html2text
  4. If no selector matches, fall back to body
  5. Finally, truncate the output to max_chars
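The selector-priority fallback above can be sketched with only the standard library. This is an illustrative assumption of the logic, not the actual script: the real skill relies on scrapling and html2text, and the class names and function names here are invented for the sketch.

```python
from html.parser import HTMLParser

PRIORITY = ["article", "main", ".post-content", '[class*="body"]', "body"]

class CandidateCollector(HTMLParser):
    """Collect the text of the first match for each selector in one pass.

    A sketch of the priority logic only; assumes well-formed HTML
    without unclosed void tags.
    """

    def __init__(self):
        super().__init__()
        self.open = {}                        # selector -> current nesting depth
        self.text = {key: [] for key in PRIORITY}

    def _keys_for(self, tag, attrs):
        classes = dict(attrs).get("class") or ""
        keys = [tag] if tag in ("article", "main", "body") else []
        if "post-content" in classes.split():
            keys.append(".post-content")
        if "body" in classes:                 # mimics [class*="body"]
            keys.append('[class*="body"]')
        return keys

    def handle_starttag(self, tag, attrs):
        for key in self.open:                 # one level deeper inside open matches
            self.open[key] += 1
        for key in self._keys_for(tag, attrs):
            if key not in self.open and not self.text[key]:
                self.open[key] = 1            # first match for this selector

    def handle_endtag(self, tag):
        for key in [k for k, d in self.open.items() if d == 1]:
            del self.open[key]                # the matching element just closed
        for key in self.open:
            self.open[key] -= 1

    def handle_data(self, data):
        for key in self.open:
            self.text[key].append(data)

def extract_main_text(html, max_chars=30000):
    """Return the text of the highest-priority matching selector, truncated."""
    parser = CandidateCollector()
    parser.feed(html)
    for key in PRIORITY:
        joined = "".join(parser.text[key]).strip()
        if joined:
            return joined[:max_chars]
    return ""
```

The key property is that lower-priority candidates (such as body) are still collected, but only consulted when every higher-priority selector came up empty.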

Usage

python3 scripts/scrapling_fetch.py <url> 30000

Dependencies

Common dependencies include:

  • scrapling
  • html2text
  • curl_cffi
  • playwright
  • browserforge

Install dependencies in an isolated environment before running the script. If the host environment restricts system-level pip installs, use a project-level virtual environment.

Example:

python3 -m venv .venv
. .venv/bin/activate
pip install scrapling html2text curl_cffi playwright browserforge
python -m playwright install chromium
python scripts/scrapling_fetch.py <url> 30000

Output Conventions

The script outputs the Markdown body content by default. For structured output, append --json. To debug which selector was hit, check the stderr output.

Additional Resources

  • Usage reference: references/usage.md
  • Selector strategy: references/selectors.md
  • Unified entry point: scripts/fetch-web-content

When to Use This Skill

  • Fetching an article's main body text
  • Scraping blog/news/announcement body content
  • Converting a web page to Markdown for later summarization
  • When a plain fetch performs poorly and you want more reliable scraping of modern web pages
  • Scraping Xiaohongshu share short links or note landing-page content

Xiaohongshu Scraping

For xhslink.com short links or Xiaohongshu note pages, run directly:

python3 scripts/scrapling_fetch.py 'http://xhslink.com/o/9745hugimlD' 30000

Notes:

  • The script first resolves the short link, then scrapes the landing page's main content
  • Suited to extracting Xiaohongshu note copy, titles, and body content
  • If the page requires more complex interaction, switch to browser automation
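The short-link resolution step can be sketched with urllib from the standard library; the real script uses scrapling, so the function below is an assumption, and the opener parameter exists only so the logic can be exercised without network access.

```python
from urllib.request import Request, urlopen

def resolve_short_link(url, opener=urlopen, timeout=10):
    """Follow HTTP redirects on a share short link (e.g. xhslink.com)
    and return the final landing-page URL.

    Sketch only: opener defaults to urllib's urlopen but can be swapped
    for a stub in tests."""
    # A browser-like User-Agent avoids trivial bot blocks on share links.
    req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with opener(req, timeout=timeout) as resp:
        # geturl() reflects any redirects urllib followed.
        return resp.geturl()
```

Once the landing-page URL is known, the normal selector-fallback extraction can run against it.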

Security Boundaries

  • Only for scraping the main content and readable text of public web pages
  • Not for pages behind a login, private data, restricted resources, or bypassing access controls
  • If the target page requires account login, click-through authorization, scroll interaction, or complex session state, switch to browser automation and run it only with explicit authorization

When Not to Use

  • When full browser interaction, clicking, login, or pagination is needed: use browser automation instead
  • When you just need JSON from an API: requesting the API directly is more appropriate
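For the plain-API case, a direct request is a few lines of standard library and skips the extraction pipeline entirely (a sketch; the function name is an assumption):

```python
import json
from urllib.request import urlopen

def fetch_json(url, timeout=10):
    """Request a JSON API directly; no content-extraction pipeline needed."""
    with urlopen(url, timeout=timeout) as resp:
        return json.load(resp)
```

If the endpoint already returns structured JSON, running it through an article extractor would only add noise and cost.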
