Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results below before using it.

article-collect

v1.0.0

A simple skill for recording articles: it collects user-provided URLs as article entries and gives the user query, delete, and other capabilities.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for bondli/article-collect.

Prompt preview (Install & Setup):
Install the skill "article-collect" (bondli/article-collect) from ClawHub.
Skill page: https://clawhub.ai/bondli/article-collect
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install article-collect

ClawHub CLI


npx clawhub@latest install article-collect
Security Scan

VirusTotal
Benign
View report →

OpenClaw
Suspicious (medium confidence)
Purpose & Capability
Name and description match the code: the skill saves URL + summary pairs, and lists and deletes entries stored in a JSON file. The use of puppeteer to scrape titles is reasonable for the stated purpose. However, the dependency on @bondli-skills/shared (getBrowser) is notable: a shared module that controls the browser connection increases the trust surface, and it isn't explained in SKILL.md or the README.
Instruction Scope
SKILL.md instructs the agent to invoke node dist/index.js for add/list/delete actions and to only call add_article for mp.weixin.qq.com domains (otherwise use a built-in browser). The runtime code implements add/list/delete and scrapes via getBrowser+puppeteer. There's a mild mismatch: SKILL.md suggests 'built-in browser' for non-weixin URLs, but the skill's scraping relies on getBrowser from the shared package when add_article is invoked. The instructions do not request unrelated files or credentials.
Install Mechanism
No install spec (instruction-only), which is the lowest installation risk in itself. However, package.json declares heavy dependencies (puppeteer, puppeteer-core) that will download Chromium when installed, and the project depends on @bondli-skills/shared. Although no install script is declared here, installing the package into an environment would still pull external code and binaries; that step is not automated by the skill manifest, but it remains a risk if performed.
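Before letting a package manager pull those dependencies, it can help to enumerate what the manifest actually declares. A minimal sketch, with the manifest inlined for illustration (the version ranges shown are made up, not taken from the real package; in practice read package.json from the extracted package, for example after `npm pack`):

```javascript
// Sketch: list a package's declared dependencies before installing them.
// The manifest content below is an illustrative stand-in, not the real
// article-collect package.json.
const exampleManifest = JSON.stringify({
  name: 'article-collect',
  dependencies: {
    puppeteer: '^21.0.0',
    'puppeteer-core': '^21.0.0',
    '@bondli-skills/shared': '^1.0.0',
  },
});

function listDependencies(manifestJson) {
  const pkg = JSON.parse(manifestJson);
  return Object.keys(pkg.dependencies || {});
}

console.log(listDependencies(exampleManifest));
```

Any dependency you cannot account for (here, @bondli-skills/shared) is worth inspecting at the source level before installing.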
Credentials
The skill declares no required env vars or credentials and only reads process.env.HOME to locate its JSON file. But the external @bondli-skills/shared module (getBrowser) could require or use environment variables or remote connection details not declared here — that increases ambiguity about what secrets or endpoints might be involved.
Persistence & Privilege
The skill writes its own JSON database to HOME/openclaw-skill-data/article-knowledge.json and does not request always:true or system-wide changes. File writes are limited to its own data path; this is expected for a local article-collecting skill.
What to consider before installing
This skill appears to do what it says (save/list/delete article URLs), but there are two items you should resolve before installing or running it in a production environment:

1. Inspect @bondli-skills/shared (getBrowser) before use. The shared browser module controls how the browser is launched or connected. It could launch a local Chromium, connect to a remote browser, or embed credentials or telemetry. Ask the author for the source, or examine the package source yourself, to confirm it doesn't send visited page content to an external service.
2. Be aware that installing the dependencies (puppeteer / puppeteer-core) will download Chromium binaries. If you plan to run the skill, do so in an isolated environment (container/VM) and review its network access.

The skill writes data to ~/openclaw-skill-data/article-knowledge.json; if that location is acceptable, the file writes are proportionate. If you cannot inspect the shared module's source, treat this skill as untrusted: run it only in a sandbox, or request a version that removes the opaque dependency by providing a clear, local getBrowser implementation.
Patterns worth reviewing

These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.

dist/index.js:5
Environment variable access combined with network send.

dist/index.js:19
File read combined with network send (possible exfiltration).

Like a lobster shell, security has layers — review code before you run it.

latest: vk979najrm325z439k0t2hcxbhx83hafa
110 downloads
0 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

Article Collect Skill

This is a simple skill that turns user-provided URLs into article records. It collects URLs as articles and also gives the user query, delete, and other capabilities.

Usage

Run:

node dist/index.js <action> <content>

Action types

add_article: save an article (URL + summary)

Example: node dist/index.js add_article "https://example.com"


list_article: view the article list

Example: node dist/index.js list_article


delete_article: delete an article record

Example: node dist/index.js delete_article 3


Agent invocation rules

If the user sends a URL

Steps:

1. Check whether the URL's domain is mp.weixin.qq.com.
2. If it is, call: node dist/index.js add_article "URL"
3. If it is not, visit the URL with the built-in browser.
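The routing rule above can be sketched as a hostname check. This is an assumption about how the compiled dist/index.js behaves (it was not inspected); the WHATWG URL parsing approach and the return strings are illustrative:

```javascript
// Sketch of the agent routing rule: only mp.weixin.qq.com URLs go to
// add_article; everything else goes to the built-in browser.
function routeUrl(rawUrl) {
  const { hostname } = new URL(rawUrl);
  return hostname === 'mp.weixin.qq.com'
    ? `node dist/index.js add_article "${rawUrl}"`
    : 'open with built-in browser';
}
```

A strict hostname comparison avoids substring tricks such as mp.weixin.qq.com.evil.example matching the allowed domain.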


If the user says:

  • query the URLs/articles I recorded before
  • article records
  • view my article records
  • view my articles
  • view my saved URLs

Steps:

1. Call node dist/index.js list_article to get the data.
2. If there is data, format the output so that each record is clickable and jumps to its URL.
3. If there is no data, reply: "No knowledge records yet."
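The formatting step can be sketched as producing one clickable markdown link per record. The `url` and `summary` field names are assumptions, since the stored schema is not documented:

```javascript
// Sketch: render stored records as a numbered markdown list so each
// entry is a clickable link. Field names are assumed, not verified.
function formatArticles(entries) {
  if (entries.length === 0) return 'No knowledge records yet.';
  return entries
    .map((e, i) => `${i + 1}. [${e.summary || e.url}](${e.url})`)
    .join('\n');
}
```

Falling back to the URL as the link text keeps every record clickable even when a summary is missing.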


If the user says:

delete the third URL/article

Call:

node dist/index.js delete_article 3
