RSS News Aggregator

v1.0.0

Aggregate multiple RSS feeds into one stream: fetch, summarize, deduplicate, and filter news articles by keywords and sources.

by Lv Lancer (@kaiyuelv)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw to install kaiyuelv/rss-news-aggregator.

Prompt preview: Install & Setup
Install the skill "RSS News Aggregator" (kaiyuelv/rss-news-aggregator) from ClawHub.
Skill page: https://clawhub.ai/kaiyuelv/rss-news-aggregator
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install rss-news-aggregator

ClawHub CLI

Package manager switcher

npx clawhub@latest install rss-news-aggregator
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
Name/description (RSS aggregation, filtering, summarization) matches the package contents: Python code implements feed fetching (feedparser/requests), summary extraction (html2text), filtering, deduplication and report generation. Declared deps align with implementation and no unrelated credentials or binaries are requested.
Instruction Scope
SKILL.md and README show only feed management, fetching, filtering and report generation. Runtime instructions do not ask the agent to read arbitrary local files, environment secrets, or post data to unexpected endpoints. The engine performs network fetches of RSS URLs (user-provided and built-in), which is expected behavior.
Install Mechanism
There is no automated install spec (instruction-only). A requirements.txt exists and README suggests pip install -r requirements.txt; dependencies are standard public Python packages. No downloads from untrusted URLs or archive extraction are present.
Credentials
The skill does not declare or read any environment variables or credentials. The code does not access OS config paths or secret-named variables. No disproportionate access to external services is requested.
Persistence & Privilege
Skill is not forced-always (always: false). It is user-invocable and allows autonomous invocation by default (platform normal). The skill does not attempt to modify other skills or system-wide agent settings.
Assessment
This package appears to do exactly what it claims: fetch RSS/Atom feeds, extract summaries, filter, deduplicate, and produce reports. Things to consider before installing or running it:
1. It will fetch arbitrary URLs you add (and built-in feeds); avoid adding internal-only or private endpoints to prevent accidental SSRF or internal scanning from a privileged runtime.
2. Feed content can contain links to malicious sites (the tool converts HTML to text and does not execute JS, but links and images may be preserved in output), so treat fetched content as untrusted input.
3. Install dependencies in a virtualenv or sandbox (pip install -r requirements.txt) to limit scope.
4. No credentials are required, but if you plan to extend it to authenticated feeds, only provide per-feed auth you trust.
Overall the package is coherent and low risk when used as intended.

Like a lobster shell, security has layers — review code before you run it.

42 downloads · 0 stars · 1 version · Updated 1d ago
v1.0.0
MIT-0

rss-news-aggregator

Skill Overview

An RSS subscription aggregation and news-fetching tool. It supports multi-source RSS fetching, article summary extraction, keyword filtering, and deduplication with sorting, automatically merging news sources from multiple platforms into a unified reading stream.

When to Use

  • You want to automatically fetch the latest articles from multiple sites/blogs
  • You want to monitor specific keywords across industry news
  • You want automatic article summarization and categorization
  • You want to merge multiple information sources into one unified output
  • You want to pull news updates on a schedule and run simple analysis
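
To make the keyword-monitoring use case concrete, here is a minimal pure-Python sketch of matching articles against a watch list. The `match_keywords` helper is hypothetical (not part of the skill); the article dict shape mirrors the `fetch_all()` output documented in the usage example.

```python
# Hypothetical helper, for illustration only: case-insensitive keyword
# matching over an article's title and summary.

def match_keywords(articles, keywords):
    """Return articles whose title or summary mentions any keyword."""
    wanted = [k.lower() for k in keywords]
    hits = []
    for art in articles:
        text = (art.get("title", "") + " " + art.get("summary", "")).lower()
        if any(k in text for k in wanted):
            hits.append(art)
    return hits

articles = [
    {"title": "Python 3.13 released", "summary": "Faster interpreter"},
    {"title": "Gardening tips", "summary": "Spring planting guide"},
]
print(match_keywords(articles, ["AI", "Python"]))
```

The skill's own `filter_by_keyword` presumably does something similar, possibly with extra options; check `scripts/rss_engine.py` for the real behavior.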

Usage

Basic Usage

from scripts.rss_engine import RSSAggregator

agg = RSSAggregator()

# Add feeds
agg.add_feed("https://news.ycombinator.com/rss", name="Hacker News")
agg.add_feed("https://feeds.arstechnica.com/arstechnica/index", name="Ars Technica")

# Fetch all articles
articles = agg.fetch_all(limit=20)
# -> [{"title": "...", "link": "...", "summary": "...", "source": "Hacker News", "published": "..."}]

# Filter by keywords
filtered = agg.filter_by_keyword(articles, ["AI", "Python", "cloud"])

# Generate a summary report
report = agg.generate_summary(filtered)
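
The engine also deduplicates articles across sources. A minimal sketch of the usual technique (normalize the link, fall back to a lowercased title, keep the first occurrence); this is an illustration of the approach, not the skill's actual code:

```python
# Illustrative deduplication: keep the first article per normalized key.
# Not the skill's actual implementation.

def dedupe(articles):
    seen = set()
    unique = []
    for art in articles:
        # Prefer the link (stripped of trailing slashes); fall back to title.
        key = art.get("link", "").rstrip("/") or art.get("title", "").strip().lower()
        if key and key not in seen:
            seen.add(key)
            unique.append(art)
    return unique

articles = [
    {"title": "Big launch", "link": "https://example.com/post/1"},
    {"title": "Big launch", "link": "https://example.com/post/1/"},  # same story
    {"title": "Other news", "link": "https://example.com/post/2"},
]
print(len(dedupe(articles)))  # 2
```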

File Structure

rss-news-aggregator/
├── SKILL.md
├── README.md
├── requirements.txt
├── scripts/
│   └── rss_engine.py          # core engine
├── examples/
│   └── basic_usage.py         # usage example
└── tests/
    └── test_rss.py            # unit tests

Dependencies

  • feedparser — RSS/Atom parsing
  • requests — HTTP requests
  • html2text — HTML-to-plain-text summaries
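
To show what these dependencies do together, here is a rough stdlib-only approximation of the pipeline (`xml.etree` standing in for feedparser, a trivial tag-stripper standing in for html2text). The real libraries handle far more edge cases (Atom feeds, encodings, malformed markup), so this is a sketch, not a substitute:

```python
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Crude html2text stand-in: collect text nodes, drop tags."""
    def __init__(self):
        super().__init__()
        self.parts = []
    def handle_data(self, data):
        self.parts.append(data)

def html_to_text(html):
    p = _TextExtractor()
    p.feed(html)
    return "".join(p.parts).strip()

def parse_rss(xml_text):
    """Crude feedparser stand-in: pull title/link/description from <item>s."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        items.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
            "summary": html_to_text(item.findtext("description", default="")),
        })
    return items

FEED = """<rss version="2.0"><channel><title>Demo</title>
<item><title>Hello</title><link>https://example.com/a</link>
<description>&lt;p&gt;First &lt;b&gt;post&lt;/b&gt;&lt;/p&gt;</description></item>
</channel></rss>"""

print(parse_rss(FEED))
```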

Tags

rss, news, aggregation, feed, monitoring, content
