Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

RSS-Brew

v0.1.0

Run and operate the RSS-Brew digest pipeline, including app CLI usage, dry-runs, latest-run inspection, delivery status updates, and retry/finalize-aware ope...

by Yuhao Zhou (@sunsetchow)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for sunsetchow/rss-brew.

Prompt preview: Install & Setup
Install the skill "RSS-Brew" (sunsetchow/rss-brew) from ClawHub.
Skill page: https://clawhub.ai/sunsetchow/rss-brew
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install rss-brew

ClawHub CLI


npx clawhub@latest install rss-brew
Security Scan
VirusTotal
Benign
OpenClaw

Suspicious (high confidence)
Purpose & Capability
The skill name and description match the code: it is an RSS digest pipeline with fetch/score/analyze/render/deliver phases. However, the registry metadata declares no required environment variables or primary credential, while the code and README expect DEEPSEEK_API_KEY (required for LLM scoring) and optionally TAVILY_API_KEY. That mismatch between declared requirements and actual code is an incoherence: the requested credentials relate to the stated purpose but were never declared.
Instruction Scope
SKILL.md directs the agent to run the app CLI from the skill workspace and points to a data-root under /root/workplace/2 Areas/rss-brew-data. The CLI delegates to legacy scripts which perform network fetches (RSS feeds) and call external LLM/context enrichment APIs. The instructions also encourage using a skill-local venv. The runtime actions (network calls, writing run-records/digests to the data root) are consistent with the stated purpose, but the SKILL.md does not call out the need for API keys or describe external endpoints explicitly.
Install Mechanism
There is no install spec (instruction-only), which reduces installer risk. The bundle includes the full source, a README, requirements.txt, and a pyproject declaring openai as a dependency, but no automated install step. This is coherent, but it means the operator must install dependencies manually; nothing is downloaded at install time by the skill itself.
Credentials
The code requires sensitive environment variables (DEEPSEEK_API_KEY is enforced by phase_a_score; TAVILY_API_KEY is listed in README and referenced elsewhere) but the skill metadata lists none. The package uses an OpenAI-compatible client (openai.OpenAI) and allows overriding DEEPSEEK_BASE_URL, so API keys and network access are necessary. Requesting/using API keys is proportionate to the functionality, but failing to declare them in the skill manifest is a transparency issue and could lead to accidental credential exposure if a user supplies keys without realizing their use.
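The credential pattern described above (a mandatory key, an optional base-URL override, and an optional secondary key) can be sketched as follows. This is an illustration of the pattern the scan describes, not the skill's actual code: the function names are hypothetical, and the default base URL is a placeholder, since the real DEEPSEEK_BASE_URL default is not shown here.

```python
import os

def require_env(name: str) -> str:
    """Return a required environment variable's value, or fail loudly."""
    value = os.environ.get(name, "").strip()
    if not value:
        raise RuntimeError(f"{name} is required but not set")
    return value

def scoring_client_config() -> dict:
    """Collect credentials as the scan describes: DEEPSEEK_API_KEY is
    mandatory, DEEPSEEK_BASE_URL is an optional override, and
    TAVILY_API_KEY is optional. The base URL below is a placeholder."""
    return {
        "api_key": require_env("DEEPSEEK_API_KEY"),
        "base_url": os.environ.get("DEEPSEEK_BASE_URL", "https://example.invalid/v1"),
        "tavily_key": os.environ.get("TAVILY_API_KEY"),  # optional enrichment
    }
```

Failing fast on a missing key, as `require_env` does, is what makes the undeclared requirement visible at runtime rather than mid-pipeline.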
Persistence & Privilege
The manifest's always flag is false, and there are no indications the skill force-enables itself or modifies other skills. The CLI writes to the provided data-root (run records, digests), which is expected for this application. No skill-wide privilege escalation was detected.
What to consider before installing
This package appears to implement the RSS-Brew pipeline described, but there is an important mismatch to address before installing: the code and README require LLM/context API keys (DEEPSEEK_API_KEY and optionally TAVILY_API_KEY), yet the skill metadata declares no required environment variables. Practical steps and cautions:

  • Do not supply API keys unless you trust the code and the external services (DeepSeek/Tavily). Review the phase_a_score and Tavily client files to confirm where keys are used and what endpoints are contacted.
  • The CLI runs Python scripts that fetch arbitrary RSS URLs, call external LLM/context APIs, and write run artifacts to the data-root (default: /root/workplace/2 Areas/rss-brew-data). Point the data-root at an isolated directory if the default might contain sensitive data.
  • The skill bundle includes a pyproject/requirements but no automated install; create the recommended venv in the skill directory and install dependencies before running. The CLI prefers /root/.openclaw/.../venv/bin/python; if that venv is missing, it falls back to system Python, which may lack dependencies and could alter behavior.
  • If you only want to inspect behavior, use dry-run and the '--mock' flags where available to avoid outbound LLM calls and exercise the pipeline without sending data to third-party APIs.
  • For more assurance, review the included scripts (phase_a_score, phase_b_analyze, tavily_client, fetch_rss) and run in a network-restricted environment or sandbox to observe outgoing connections.

Given the undisclosed API-key requirement in the manifest, treat this skill as suspicious until you confirm and control the external credentials and endpoints.
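An isolated inspection run along the lines suggested above might look like this. The data-root and the dry-run/--mock flags come from this page; the exact CLI invocation is shown commented out since it requires the skill to be installed:

```shell
# Create a throwaway data root so the pipeline cannot touch real data.
DATA_ROOT="$(mktemp -d)/rss-brew-data"
mkdir -p "$DATA_ROOT"
echo "using isolated data root: $DATA_ROOT"

# Exercise the pipeline without outbound LLM calls (requires the skill
# to be installed and its venv set up; uncomment to run):
# python3 -m rss_brew.cli dry-run --data-root "$DATA_ROOT" --mock --debug
```

Using `mktemp -d` guarantees a fresh directory each time, so run artifacts never mix with real data.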

Like a lobster shell, security has layers — review code before you run it.

latest: vk97fk87cpxdwnkqxqb2a0kqqw983krpn
99 downloads
0 stars
1 version
Updated 1mo ago
v0.1.0
MIT-0

RSS-Brew Skill

Use this skill when you need to:

  • run the RSS-Brew pipeline
  • inspect the latest run
  • do a dry-run
  • update delivery status
  • understand the current migration/app-wrapper state
  • troubleshoot retry/finalize behavior

Current recommended entrypoint

Use the app CLI from the skill root:

cd /root/.openclaw/workspace/skills/rss-brew
export PYTHONPATH=app/src
python3 -m rss_brew.cli --help

Common commands:

python3 -m rss_brew.cli inspect latest --data-root '/root/workplace/2 Areas/rss-brew-data'
python3 -m rss_brew.cli dry-run --data-root '/root/workplace/2 Areas/rss-brew-data' --debug
python3 -m rss_brew.cli run --data-root '/root/workplace/2 Areas/rss-brew-data' --debug
python3 -m rss_brew.cli delivery update --data-root '/root/workplace/2 Areas/rss-brew-data' --status sent
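For repeated operations (e.g. a scheduled run), the commands above can be wrapped in a small helper. This is a hypothetical convenience sketch, not part of the skill; it assumes the working directory is the skill root shown above:

```python
import os
import subprocess

# Default data root from this skill's documentation.
DATA_ROOT = "/root/workplace/2 Areas/rss-brew-data"

def cli_cmd(*args: str, data_root: str = DATA_ROOT) -> list[str]:
    """Build an argv list for the app CLI. Passing a list (not a shell
    string) means the space in '2 Areas' needs no manual quoting."""
    return ["python3", "-m", "rss_brew.cli", *args, "--data-root", data_root]

def run_cli(*args: str) -> int:
    """Invoke the CLI with PYTHONPATH pointing at the app sources."""
    env = {**os.environ, "PYTHONPATH": "app/src"}
    return subprocess.run(cli_cmd(*args), env=env).returncode
```

For example, `run_cli("dry-run", "--debug")` mirrors the second command above.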

Current architecture reality

RSS-Brew is in an in-place app-ification state:

  • app/ is the new app wrapper/container
  • app/src/rss_brew/cli.py is the new entrypoint
  • scripts/ still contain the runtime source-of-truth semantics
  • the app CLI intentionally delegates to legacy scripts in the current phase
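The delegation described above (a new CLI entrypoint forwarding to legacy scripts) follows a common migration pattern; a minimal sketch, with hypothetical script names since the actual cli.py mapping is not shown here:

```python
import subprocess
import sys

# Hypothetical mapping from app-CLI subcommands to legacy scripts;
# the real cli.py delegates similarly, but these names are illustrative.
LEGACY_SCRIPTS = {
    "run": "scripts/run_pipeline.py",
    "dry-run": "scripts/run_pipeline.py",  # plus a dry-run flag, say
    "inspect": "scripts/inspect_run.py",
}

def delegate(subcommand: str, extra_args: list[str]) -> int:
    """Forward a subcommand to its legacy script, preserving arguments."""
    script = LEGACY_SCRIPTS.get(subcommand)
    if script is None:
        print(f"unknown subcommand: {subcommand}", file=sys.stderr)
        return 2
    return subprocess.run([sys.executable, script, *extra_args]).returncode
```

Keeping the legacy scripts as the source of truth means the wrapper can be replaced incrementally, one subcommand at a time, without changing runtime behavior.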

Read these references as needed

  • Usage / commands: references/usage.md
  • Operational checks / failure modes: references/ops.md
  • Pipeline behavior / outputs: references/pipeline-spec.md
  • Retry / finalize architecture: references/retry-architecture.md
  • Migration / implementation status: docs/rss-brew-implementation-plan.md

Current default data root

/root/workplace/2 Areas/rss-brew-data

Important note

This skill is no longer just a loose script bundle. It is now a working app-wrapper around the legacy RSS-Brew runtime. Prefer the app CLI for new operations while keeping legacy scripts intact during migration.
