每日新闻总结 (Daily News Summary)

v1.0.0

Fetches the latest news daily from multiple authoritative sources (BBC RSS, South China Morning Post, 36Kr, TechCrunch, The Verge, official Microsoft/NVIDIA blogs, and more) and automatically generates a four-section daily news digest: major international news, major China news, major AI news, and AI tech giant updates. Outputs a Markdown file with links to the original articles. Use when the user wants a daily news summary, tech news...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for mzcnyhhd/daily-news-summary.

Prompt Preview: Install & Setup
Install the skill "每日新闻总结" (mzcnyhhd/daily-news-summary) from ClawHub.
Skill page: https://clawhub.ai/mzcnyhhd/daily-news-summary
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install daily-news-summary

ClawHub CLI


npx clawhub@latest install daily-news-summary
Security Scan
Capability signals
Requires OAuth token · Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Benign
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
The name/description promise (daily news digest from RSS and news sites) matches the runtime instructions: the SKILL.md lists public RSS and webpage URLs to fetch, categorization rules, and a Markdown output format. There are no unrelated environment variables, binaries, or install steps requested.
Instruction Scope
Instructions only ask the agent to use a web_fetch tool to retrieve listed RSS and webpages, extract headlines/content, categorize into four sections, include source links, and save a Markdown file at workspace root. They also describe creating a recurring automation via WorkBuddy's automation_update. The doc does not instruct reading other local files or credentials. Minor operational gaps: no mention of respecting robots.txt, rate limiting, or scraping constraints, and the workflow assumes web_fetch has permission to access those sites.
Install Mechanism
Instruction-only skill with no install spec and no code files — lowest risk from installation. Nothing is downloaded or written by an installer.
Credentials
The skill declares no required environment variables or credentials. It mentions optional use of Twitter/X API if the user supplies a Bearer Token, but explicitly states that the skill does not require it. One minor note: scheduling via WorkBuddy likely uses that tool's credentials or permissions (not declared here) — the skill itself doesn't request unrelated secrets.
Persistence & Privilege
always:false (normal). However, the skill explicitly instructs using WorkBuddy's automation_update to create recurring daily tasks; that creates persistent scheduled runs outside a one-off invocation. This is expected for an automation-oriented digest skill, but users should be aware creating recurring automations grants ongoing activity ability to the agent/tooling and may require separate approval in your environment.
Assessment
This skill appears to do what it says (scrape public RSS/pages and produce a Markdown digest). Before installing:

  1) Confirm what 'web_fetch' and 'WorkBuddy' tools are in your environment and what network/permission access they require — scheduling an automation will use WorkBuddy permissions.
  2) Ensure you are comfortable with the agent writing files to the workspace root and with recurring scheduled tasks; prefer an isolated folder if concerned.
  3) Verify scraping policy: the skill doesn't mention robots.txt, rate limits, or legal/copyright constraints — consider adding politeness (rate limiting) and user approval for any heavy scraping.
  4) If you enable optional Twitter/X integration, only provide a token you trust and understand what it can access.
  5) Monitor the first few runs to confirm sources and formatting, and revoke/disable the automation if outputs are unexpected.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97e7a6hz8e2qh4dp281pz88xh85gjjf
56 downloads
0 stars
1 version
Updated 2d ago
v1.0.0
MIT-0

Daily News Digest

Daily news digest generator. Fetches news from 17 authoritative sources in parallel and organizes it into a structured four-section daily report.

Workflow

Step 1: Fetch Sources

Fetch the following sources in parallel with the web_fetch tool. Prefer RSS feeds; individual failures do not abort the overall run.

International news:

General China news:

China in-depth reporting:

AI + tech industry (RSS preferred):

Official tech-giant updates:

Supplementary:
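The fetch step above can be sketched as a parallel download that tolerates per-source failures. This is a minimal sketch, not the skill's actual implementation: the source list and User-Agent below are placeholders (the real 17-source list lives in references/news_sources.md), and the skill itself delegates fetching to the agent's web_fetch tool.

```python
# Sketch of Step 1: fetch all sources in parallel; a failed source is
# recorded and skipped rather than aborting the whole run.
from concurrent.futures import ThreadPoolExecutor, as_completed
import urllib.request

# Placeholder source map; the skill's real list is in references/news_sources.md.
SOURCES = {
    "BBC World (RSS)": "https://feeds.bbci.co.uk/news/world/rss.xml",
}

def fetch(name, url, timeout=10):
    """Download one feed/page and return its body as text."""
    req = urllib.request.Request(url, headers={"User-Agent": "daily-news-digest/1.0"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read().decode("utf-8", errors="replace")

def fetch_all(sources):
    """Fetch every source concurrently; return (successes, failed source names)."""
    results, failures = {}, []
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = {pool.submit(fetch, n, u): n for n, u in sources.items()}
        for fut in as_completed(futures):
            name = futures[fut]
            try:
                results[name] = fut.result()
            except Exception:
                failures.append(name)  # partial failure does not stop the run
    return results, failures
```

The failure list can be surfaced in the digest's "source notes" table so the user knows which feeds were unavailable that day.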

Step 2: Categorize Content

Extract items from the fetched results and sort them into four sections:

Section 1: Major International News

  • Middle East developments, geopolitical conflicts, great-power relations
  • Global economy, financial markets, energy prices
  • Major international events
  • Prefer BBC RSS + South China Morning Post RSS

Section 2: Major China News

  • Macro policy, diplomatic developments
  • Financial data, major announcements from listed companies
  • Space/tech launches, trending social topics
  • The SCMP RSS carries extensive in-depth Greater China coverage

Section 3: Major AI News

  • Large-model releases and iterations
  • AI products and real-world deployments
  • Industry trends and in-depth analysis
  • Prefer 36Kr RSS, TechCrunch RSS, The Verge RSS, QbitAI (量子位), Synced (机器之心)

Section 4: AI Tech Giant Updates

  • List the latest updates grouped by company (OpenAI, Anthropic, Google, Microsoft, Meta, NVIDIA, DeepSeek, Huawei, ByteDance, Alibaba, etc.)
  • Add a brief interpretation to each item
  • Prefer Microsoft Source RSS, NVIDIA Blog RSS, and official blogs
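The four-way split above can be sketched as simple keyword routing. The keywords and section names below are illustrative assumptions — the SKILL.md leaves the actual categorization judgment to the agent rather than fixing a rule set:

```python
# Minimal sketch of Step 2: route each headline into one of the four
# sections by keyword match; first matching section wins.
SECTIONS = {
    "International News": ["middle east", "geopolit", "energy price", "global economy"],
    "China News": ["china", "beijing", "macro policy"],
    "AI News": ["llm", "model release", "ai product"],
    "AI Tech Giants": ["openai", "anthropic", "nvidia", "microsoft", "deepseek"],
}

def categorize(headline):
    text = headline.lower()
    for section, keywords in SECTIONS.items():
        if any(k in text for k in keywords):
            return section
    return None  # uncategorized items are left out or reviewed manually
```

In practice an agent would categorize on full article context rather than headline keywords, but the four-bucket routing shape is the same.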

Step 3: Generate Markdown Report

File format requirements:

  • Title: 每日新闻日报 | YYYY年MM月DD日
  • Note the generation time and the information sources
  • Every item must end with its source name plus a link to the original article (format: ([source name](URL)))
  • Include a "Top 5 stories of the day" (今日最值得关注的Top 5事件) section
  • End with a "source notes" (信息来源说明) table
  • Language: Chinese

Step 4: Save File

  • Save location: workspace root
  • Filename format: 新闻日报_YYYY-MM-DD.md
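Steps 3 and 4 together amount to assembling one Markdown string and writing it to the workspace root. A minimal sketch, with illustrative function and field names (the title and filename literals follow the formats specified above):

```python
# Sketch of Steps 3-4: build the Chinese-language Markdown digest and
# save it under the required filename in the workspace root.
from datetime import date
from pathlib import Path

def build_report(items_by_section, today=None):
    """items_by_section maps a section heading to (title, source, url) tuples."""
    today = today or date.today()
    lines = [f"# 每日新闻日报 | {today:%Y}年{today:%m}月{today:%d}日", ""]
    for section, items in items_by_section.items():
        lines.append(f"## {section}")
        for title, source, url in items:
            # every item ends with its source name and a link to the original
            lines.append(f"- {title} ([{source}]({url}))")
        lines.append("")
    return "\n".join(lines)

def save_report(markdown, workspace=".", today=None):
    today = today or date.today()
    path = Path(workspace) / f"新闻日报_{today:%Y-%m-%d}.md"
    path.write_text(markdown, encoding="utf-8")
    return path
```

The Top 5 section and the source-notes table required above would be extra blocks prepended and appended to the same string.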

Optional: Automation

The user can request a recurring automation (e.g. "generate it automatically at 8 a.m. every day"). In that case, use WorkBuddy's automation_update tool to create a recurring task, with the Workflow above as the automation prompt and the schedule set to FREQ=DAILY;BYHOUR=8;BYMINUTE=0.
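The schedule string is an iCalendar-style recurrence rule (RFC 5545 RRULE syntax). A stdlib-only sketch of how such a daily rule maps to a concrete next run time — WorkBuddy's actual parser is assumed, not shown here:

```python
# Sketch: compute the next run time for a FREQ=DAILY RRULE string.
from datetime import datetime, timedelta

def next_daily_run(rule, now):
    parts = dict(p.split("=") for p in rule.split(";"))
    assert parts.get("FREQ") == "DAILY", "only daily rules handled in this sketch"
    run = now.replace(hour=int(parts.get("BYHOUR", 0)),
                      minute=int(parts.get("BYMINUTE", 0)),
                      second=0, microsecond=0)
    if run <= now:            # today's slot already passed: schedule tomorrow
        run += timedelta(days=1)
    return run
```

So with FREQ=DAILY;BYHOUR=8;BYMINUTE=0, a request made at 09:00 schedules the first run for 08:00 the next day.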

News Sources Reference

See references/news_sources.md for the detailed categories, URLs, types, and reliability ratings of all 17 sources.

Notes

  • If a site fails to fetch, continue generating the digest from the sources that succeeded
  • BBC RSS, 36Kr RSS, TechCrunch RSS, The Verge RSS, SCMP RSS, NVIDIA Blog RSS, and Microsoft Source RSS are currently the most stable and reliable sources
  • The Reuters website and RSS fail frequently; if they do, fall back to BBC + SCMP + Sina for international news
  • The Lianhe Zaobao RSS has been discontinued, and the official Google/Meta blog RSS feeds are currently unavailable
  • Always keep the link to each original article so the user can click through to the source
  • Do not fabricate news; compile only from content actually fetched
  • The Twitter/X API can serve as an optional extension (the user supplies their own Bearer Token); this skill does not include it
