Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
RSS-Brew v0.1.0
Run and operate the RSS-Brew digest pipeline, including app CLI usage, dry-runs, latest-run inspection, delivery status updates, and retry/finalize-aware ope...
by Yuhao Zhou (@sunsetchow)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious (high confidence)

Purpose & Capability
The skill name and description match the code: it is an RSS digest pipeline with fetch/score/analyze/render/deliver phases. However, the registry metadata declares no required environment variables or primary credential, while the code and README expect DEEPSEEK_API_KEY (required for LLM scoring) and optionally TAVILY_API_KEY. The requested credentials are related to the stated purpose, but this mismatch between declared requirements and actual code is an incoherence: the skill asks for credentials it never declares.
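One way to verify this mismatch yourself is to scan the bundle for environment-variable reads and compare the result against the manifest. A minimal sketch, assuming the scripts read variables through the standard os.environ/os.getenv idioms; the helper name and regex are illustrative, not part of the skill:

```python
import re
from pathlib import Path

# Matches os.environ["X"], os.environ.get("X"), and os.getenv("X").
ENV_READ = re.compile(
    r"""os\.(?:environ(?:\.get)?[\[\(]|getenv\()\s*['"]([A-Z_][A-Z0-9_]*)['"]"""
)

def env_vars_read(bundle_root: str) -> set[str]:
    """Collect every environment-variable name read by Python files in the bundle."""
    names: set[str] = set()
    for path in Path(bundle_root).rglob("*.py"):
        names.update(ENV_READ.findall(path.read_text(errors="ignore")))
    return names
```

Any name this turns up that is absent from the skill manifest is exactly the kind of undeclared requirement flagged above.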
Instruction Scope
SKILL.md directs the agent to run the app CLI from the skill workspace and points to a data-root under /root/workplace/2 Areas/rss-brew-data. The CLI delegates to legacy scripts which perform network fetches (RSS feeds) and call external LLM/context enrichment APIs. The instructions also encourage using a skill-local venv. The runtime actions (network calls, writing run-records/digests to the data root) are consistent with the stated purpose, but the SKILL.md does not call out the need for API keys or describe external endpoints explicitly.
Install Mechanism
There is no install spec (instruction-only), which reduces installer risk. The bundle includes full source, README, requirements.txt and a pyproject declaring openai as a dependency — but no automated install step. This is coherent but means the operator must install dependencies manually; nothing is downloaded at install time by the skill itself.
Credentials
The code requires sensitive environment variables (DEEPSEEK_API_KEY is enforced by phase_a_score; TAVILY_API_KEY is listed in README and referenced elsewhere) but the skill metadata lists none. The package uses an OpenAI-compatible client (openai.OpenAI) and allows overriding DEEPSEEK_BASE_URL, so API keys and network access are necessary. Requesting/using API keys is proportionate to the functionality, but failing to declare them in the skill manifest is a transparency issue and could lead to accidental credential exposure if a user supplies keys without realizing their use.
Persistence & Privilege
The 'always' flag is false, and there are no indications the skill force-enables itself or modifies other skills. The CLI writes to the provided data-root (run records, digests), which is expected for this application. No skill-wide privilege escalation was detected.
What to consider before installing
This package appears to implement the RSS-Brew pipeline described, but there is an important mismatch you should address before installing: the code and README require LLM/context API keys (DEEPSEEK_API_KEY and optionally TAVILY_API_KEY), yet the skill metadata declares no required environment variables. Practical steps and cautions:
- Do not supply API keys unless you trust the code and the external services (DeepSeek/Tavily). Review the phase_a_score and Tavily client files to confirm where keys are used and what endpoints are contacted.
- The CLI will run Python scripts that fetch arbitrary RSS URLs and call external LLM/context APIs and will write run artifacts to the data-root (default: /root/workplace/2 Areas/rss-brew-data). Point data-root to an isolated directory if the default might contain sensitive data.
- The skill bundle includes a pyproject/requirements but no automated install; create and use the recommended venv in the skill directory and install dependencies before running. The CLI prefers /root/.openclaw/.../venv/bin/python; if that venv is missing it will fall back to system Python which may lack dependencies and could alter behavior.
- If you only want to inspect behavior, use dry-run and the '--mock' flags where available to avoid outbound LLM calls and to exercise the pipeline without sending data to third-party APIs.
- If you need more assurance, perform a code review of the included scripts (phase_a_score, phase_b_analyze, tavily_client, fetch_rss) and run in a network-restricted environment or sandbox to observe outgoing connections.
Given the undisclosed requirement for API keys in the manifest, treat this skill as suspicious until you confirm and control the external credentials and endpoints.
latest: vk97fk87cpxdwnkqxqb2a0kqqw983krpn
