Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

RSS AI Reader

v1.0.0

📰 RSS AI Reader — automatically fetches subscribed feeds, generates LLM summaries, and pushes them to multiple channels. Supports Claude/OpenAI for generating Chinese-language summaries, delivered to Feishu/Telegram/Email. Trigger conditions: the user asks to subscribe to an RSS feed, monitor a blog, fetch news, generate summaries, or schedule periodic fetching; phrases like "subscribe to this for me", "monitor this site", "push me news every day"; anything related to RSS/Atom feeds.

14 · 6.1k · 32 current · 34 all-time
by BENZEMA (@benzema216)
Security Scan
VirusTotal: Suspicious
OpenClaw: Suspicious (high confidence)
Purpose & Capability
The described capability (fetch RSS feeds, summarize with Claude/OpenAI, push to Feishu/Telegram/Email) matches the instructions and config examples. However, the skill metadata declares no required environment variables or credentials, while the SKILL.md and config guide explicitly expect LLM API keys and push-channel secrets (ANTHROPIC_API_KEY / OPENAI_API_KEY, FEISHU_WEBHOOK, TELEGRAM_BOT_TOKEN, EMAIL_PASSWORD). That discrepancy (the metadata declares no required secrets while the instructions require several) reduces trust.
Instruction Scope
The SKILL.md directs the agent to run shell commands that clone a third-party GitHub repo and execute python main.py after installing requirements. These instructions cause the agent (or the user following them) to download and execute arbitrary code, install Python packages, and supply API keys — actions that go beyond simple, self-contained instruction text and enable arbitrary network access and data handling by external code.
Install Mechanism
There is no formal install spec inside the skill, but the runtime instructions direct the agent to clone https://github.com/BENZEMA216/rss-reader.git and run pip install -r requirements.txt. That means code and dependencies are pulled from an external repo and from PyPI at runtime — a higher-risk install flow, because the skill bundle itself does not include or vet that code.
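As a rough first pass before running anything, the cloned tree can be scanned for network and process-spawning primitives. A minimal sketch (the `audit_repo` helper and its pattern list are illustrative assumptions, not a substitute for reading main.py yourself):

```shell
# audit_repo: print lines in a cloned skill repo that touch the network
# or spawn processes, plus its pinned dependencies, for manual review.
audit_repo() {
  repo_dir="$1"
  # Common Python network / exec primitives worth a human look.
  grep -rnE --include='*.py' \
    'requests\.|urllib|http\.client|socket|subprocess|os\.system|eval\(' \
    "$repo_dir" || echo "no obvious hits (still read the code)"
  if [ -f "$repo_dir/requirements.txt" ]; then
    echo "--- pinned dependencies ---"
    cat "$repo_dir/requirements.txt"
  fi
}
```

For example, `git clone --depth 1 https://github.com/BENZEMA216/rss-reader.git && audit_repo rss-reader` surfaces every flagged line with file and line number before any dependency is installed.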
Credentials
The SKILL.md and config guide expect several sensitive environment variables and secrets (Anthropic/OpenAI API keys, Feishu webhook, Telegram bot token and chat id, email SMTP credentials), yet the skill metadata declares none. Requiring multiple credentials is proportionate to the claimed push/LLM functionality, but the undeclared requirements make the metadata inconsistent and hide the fact that secrets must be provided. Supplying these secrets to code downloaded at runtime increases the risk of accidental exfiltration.
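Since the metadata does not declare these variables, it is worth checking them up front and failing fast, rather than letting the downloaded script read them implicitly. A minimal sketch (the variable names come from the config guide; the `require_env` helper itself is hypothetical):

```shell
# require_env: return non-zero and name every missing variable, so
# secrets are verified up front instead of deep inside third-party code.
require_env() {
  missing=""
  for name in "$@"; do
    eval "val=\${$name:-}"   # portable indirect expansion
    [ -n "$val" ] || missing="$missing $name"
  done
  if [ -n "$missing" ]; then
    echo "missing required env vars:$missing" >&2
    return 1
  fi
}
```

For example, `require_env ANTHROPIC_API_KEY FEISHU_WEBHOOK || exit 1` before launching `python main.py` makes the secret requirements explicit even though the metadata does not.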
Persistence & Privilege
The skill is not marked always:true and does not request system-wide config paths in metadata. Autonomous invocation (disable-model-invocation: false) is the platform default and not a standalone concern. However, because the instructions cause external code to be executed, that code could create persistent state — the metadata does not document any such persistence.
What to consider before installing
This skill's functionality is plausible, but it asks you (via instructions) to clone and run a third-party GitHub project and to provide multiple sensitive keys/webhooks, while the skill metadata claims no required env vars. Before installing or using it:
1) Inspect the referenced GitHub repo and its main.py/requirements to confirm behavior and check for network/exfiltration code.
2) Prefer a skill bundle that includes audited code or a trusted release URL rather than an arbitrary clone.
3) Avoid reusing high-privilege API keys; use keys with minimal scopes or dedicated service accounts.
4) Run the code in a sandbox/container, and review dependency versions in requirements.txt.
5) If you cannot audit the repo, do not provide production secrets.
If the publisher supplies the included code directly in the skill (or documents a vetted release and explicitly lists required env vars in metadata), my concern level would drop.

Like a lobster shell, security has layers — review code before you run it.

latest: vk9725knn1bt19mbct92dmpvphh80cztf

License

MIT-0
Free to use, modify, and redistribute. No attribution required.
