Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

ai-agent-news-aggregator

v1.0.0

Collects the latest news in the AI Agent space and pushes it to a Feishu group chat. Uses DuckDuckGo search plus RSS feed monitoring, with automatic filtering, deduplication, and summarization to generate daily/weekly briefings. Suited to tracking AI Agent industry trends, monitoring technical progress, and gathering competitor intelligence. Supports scheduled (cron) automatic pushes.

1 star · 182 downloads · 0 current · 0 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for jiashuoji838-afk/ai-agent-news-aggregator.

Prompt Preview: Install & Setup
Install the skill "ai-agent-news-aggregator" (jiashuoji838-afk/ai-agent-news-aggregator) from ClawHub.
Skill page: https://clawhub.ai/jiashuoji838-afk/ai-agent-news-aggregator
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install ai-agent-news-aggregator

ClawHub CLI


npx clawhub@latest install ai-agent-news-aggregator
Security Scan
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The files and SKILL.md align with an AI-agent news aggregator that searches DDG, monitors RSS and pushes to Feishu. The code implements deduplication, categorization, summarization, and formatting for Feishu. However, the implementation expects platform tools (web_fetch, OpenClaw LLM/session send) to complete key steps; standalone scripts are placeholders in places (e.g., search_news returns a search_url rather than scraping results). Overall purpose is plausible but some implementation gaps reduce coherence.
Instruction Scope
SKILL.md and run_pipeline.py instruct running a full pipeline end-to-end, but search_news.py returns only a DDG search URL (it does not fetch or parse search results) and run_pipeline.py does not call any web-fetch or RSS-parsing tool between search_news and deduplicate. That means, as-distributed, the pipeline will produce empty 'items' unless the OpenClaw agent or external tools perform the web_fetch/RSS parsing step. Also the package ships with a populated sources.json containing a Feishu channel_id; run_pipeline/push scripts will use that value if not changed, potentially sending aggregated content to that external recipient. These two issues are scope/instruction mismatches that could lead to unexpected behavior or data leakage.
Install Mechanism
No install spec is provided and requirements.txt states only standard library usage. No downloads/install commands that would write arbitrary external code were found. This is low-risk from an installation point of view.
Credentials
The skill requests no environment variables or credentials, which is reasonable because it relies on OpenClaw-provided tools. However, the included scripts/sources.json contains a hard-coded Feishu channel_id (oc_6936052e1c870df24dc1fd757dec77fd). If a user runs the pipeline without editing sources.json, data (search results, summaries) would be formatted and directed to that external channel. The absence of credentials reduces some risk, but the preconfigured recipient is unexpected and could cause unintended data exposure.
Persistence & Privilege
always is false and the skill does not request persistent system-level privileges or attempt to modify other skills or system config. It relies on being invoked by the agent or run manually. Autonomous invocation is allowed by default (disable-model-invocation: false) but that is normal for skills; no additional persistent privileges are requested.
What to consider before installing
This skill mostly does what it says (dedupe, categorize, summarize, format for Feishu), but watch two things before installing or running it:

1. The included search script only returns a DuckDuckGo search URL instead of fetching and parsing results, so the pipeline as distributed will not produce real 'items' unless your OpenClaw environment or an extra web_fetch/RSS-parsing step fills that gap.
2. sources.json ships with a pre-filled Feishu channel_id. If you run the pipeline without editing that file, messages (possibly including collected or internal content) could be sent to that external channel.

Recommendations: (a) inspect and edit scripts/sources.json to set your own channel_id or remove it; (b) test in dry-run mode (run_pipeline --dry-run) and use test_push.py to verify send behavior; (c) confirm how OpenClaw's sessions_send/web_fetch/LLM calls are wired in your environment and who ultimately has access to send messages; (d) avoid scheduling autonomous runs until you have confirmed that the fetch/parsing step and the push target are configured safely. If needed, share details about your OpenClaw runtime (how web_fetch and sessions_send are wired) for a re-evaluation with higher confidence.


latest: vk970nnrrv7cxej0359rwng9ka5833r2w
182 downloads · 1 star · 1 version
Updated 8h ago · v1.0.0 · MIT-0

AI Agent News Aggregation Skill

Overview

This skill automatically collects the latest AI Agent news, organizes it, and pushes it to a Feishu group chat.

Core Capabilities

  • 🔍 Multi-source search - DuckDuckGo search + RSS feed monitoring
  • 🧹 Smart deduplication - merges duplicate news by title/URL similarity
  • 📝 Automatic summaries - generates a one-line summary for each item
  • 📊 Categorization - groups items into technical progress / company news / industry applications
  • 🚀 Feishu push - sends directly to group or private chats

Data Sources

Search keywords (via ddg-search)

  • "AI Agent framework"
  • "LangChain new release"
  • "AutoGen update"
  • "Multi-agent system research"
  • "Agentic AI"
  • "CrewAI"
  • "LlamaIndex"

RSS feeds (via blogwatcher)

  • Hacker News AI/ML tag
  • r/LocalLLaMA
  • Anthropic Blog
  • OpenAI Blog
  • Hugging Face Blog
  • LangChain Blog

See scripts/sources.json for the full configuration.
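In the skill itself, feed polling is delegated to blogwatcher. As a standalone reference, an RSS 2.0 feed can be reduced to the pipeline's item format with the standard library alone; parse_rss_items is an illustrative helper, not part of the shipped scripts:

```python
import xml.etree.ElementTree as ET

def parse_rss_items(rss_xml: str, source: str) -> list:
    """Reduce an RSS 2.0 feed string to the pipeline's item format."""
    root = ET.fromstring(rss_xml)
    items = []
    for item in root.iter("item"):
        title = (item.findtext("title") or "").strip()
        link = (item.findtext("link") or "").strip()
        snippet = (item.findtext("description") or "").strip()
        if title and link:  # skip malformed entries
            items.append({"title": title, "url": link,
                          "snippet": snippet, "source": source})
    return items
```

Real feeds (Atom, namespaced RSS) need more care than this sketch handles; blogwatcher is the supported path.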


Usage

One-off collection

{
  "action": "collect",
  "time_range": "24h",
  "channel_id": "oc_xxxxxx"
}
Parameter     Description               Default
time_range    collection time window    24h
channel_id    Feishu conversation ID    current session

Scheduled push (with cron)

{
  "action": "schedule",
  "cron": "0 9 * * 1-5",
  "channel_id": "oc_xxxxxx",
  "time_range": "24h"
}

Example: push the previous day's news at 9:00 on weekdays.


Output Format (Feishu message)

🤖 AI Agent Daily Briefing - 2026-03-16

🔥 Headlines
• LangChain releases a new Agent framework - supports XX [link]

🛠️ Framework Updates
• AutoGen v0.4.0 - adds multi-agent collaboration [link]
• CrewAI supports XX [link]

📚 Research Papers
• [Paper title] - arXiv [link]

🏢 Company News
• Anthropic releases XX [link]

💼 Industry Applications
• Company XX uses Agents to achieve XX [link]

---
12 items total | Sources: DDG + 6 RSS feeds
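A sketch of how this briefing text could be assembled from categorized items. The category keys, section order, and item fields here are assumptions based on the script contracts, not the shipped formatter:

```python
# Section headers in display order; the keys are assumed internal
# category names, not confirmed against the shipped categorize.py.
SECTIONS = [
    ("headline", "🔥 Headlines"),
    ("framework", "🛠️ Framework Updates"),
    ("paper", "📚 Research Papers"),
    ("company", "🏢 Company News"),
    ("industry", "💼 Industry Applications"),
]

def format_briefing(items_by_category: dict, date: str, source_note: str) -> str:
    """Assemble the plain-text daily briefing shown above."""
    lines = ["🤖 AI Agent Daily Briefing - " + date, ""]
    total = 0
    for key, header in SECTIONS:
        items = items_by_category.get(key, [])
        if not items:
            continue  # omit empty sections entirely
        lines.append(header)
        for it in items:
            lines.append("• {0} - {1} [{2}]".format(
                it["title"], it["summary"], it["url"]))
        lines.append("")
        total += len(items)
    lines += ["---", "{0} items total | Sources: {1}".format(total, source_note)]
    return "\n".join(lines)
```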

Scripts

scripts/search_news.py

Invokes ddg-search across multiple keywords and returns the raw result list.

Input:

{
  "keywords": ["AI Agent", "LangChain"],
  "time_range": "24h"
}

Output:

{
  "items": [
    {"title": "...", "url": "...", "snippet": "...", "source": "ddg"},
    ...
  ]
}
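Per the security scan above, the shipped search_news.py returns a search URL rather than parsed results; fetching and parsing are expected to be done by web_fetch or the OpenClaw agent. A sketch of building that URL follows. The df date-filter mapping is my assumption about DuckDuckGo's HTML endpoint, not something the skill documents:

```python
from urllib.parse import urlencode

def ddg_search_url(keyword: str, time_range: str = "24h") -> str:
    """Build a DuckDuckGo HTML search URL for one keyword; fetching and
    parsing the results page is left to web_fetch / the agent."""
    # df is DuckDuckGo's date filter: d=day, w=week, m=month (assumed mapping)
    df = {"24h": "d", "7d": "w", "30d": "m"}.get(time_range, "d")
    return "https://html.duckduckgo.com/html/?" + urlencode({"q": keyword, "df": df})
```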

scripts/deduplicate.py

Deduplicates based on title and URL similarity.

Input:

{
  "items": [...],
  "threshold": 0.85
}

Output:

{
  "items": [...],
  "removed_count": 5
}
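The exact algorithm in deduplicate.py is not shown here; one plausible implementation of the documented contract, using only the standard library's difflib for title similarity plus URL normalization:

```python
from difflib import SequenceMatcher
from urllib.parse import urlsplit

def deduplicate(items: list, threshold: float = 0.85) -> dict:
    """Drop items whose normalized URL was already seen, or whose title is
    near-identical (ratio >= threshold) to an already-kept item."""
    kept, seen_urls, removed = [], set(), 0
    for item in items:
        parts = urlsplit(item["url"])
        norm_url = parts.netloc + parts.path.rstrip("/")  # ignore scheme/query
        similar = any(
            SequenceMatcher(None, item["title"].lower(),
                            k["title"].lower()).ratio() >= threshold
            for k in kept
        )
        if norm_url in seen_urls or similar:
            removed += 1
            continue
        seen_urls.add(norm_url)
        kept.append(item)
    return {"items": kept, "removed_count": removed}
```

SequenceMatcher is O(n²) across kept items; fine for a daily briefing's volume, not for large corpora.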

scripts/summarize.py

Calls an LLM to generate a one-line summary for each item.

Input:

{
  "items": [...],
  "max_length": 50
}

Output:

{
  "items": [
    {"title": "...", "url": "...", "summary": "..."},
    ...
  ]
}
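Because the shipped script delegates summarization to the OpenClaw-provided LLM, it cannot run standalone. A deterministic fallback, useful for dry runs, might look like this (fallback_summary is a hypothetical helper, not part of the skill):

```python
def fallback_summary(item: dict, max_length: int = 50) -> str:
    """Deterministic stand-in for the LLM call: first sentence of the
    snippet (or the title if no snippet), truncated to max_length chars."""
    text = (item.get("snippet") or item["title"]).strip()
    sentence = text.split(". ")[0]
    if len(sentence) <= max_length:
        return sentence
    return sentence[:max_length - 1] + "…"
```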

scripts/push_to_feishu.py

Formats the message and pushes it to Feishu.

Input:

{
  "items": [...],
  "channel_id": "oc_xxxxxx",
  "date": "2026-03-16"
}

Output:

{
  "success": true,
  "message_id": "msg_xxxxxx"
}
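The actual send goes through OpenClaw's session tooling, so the script handles no credentials itself. For orientation, the endpoint and body shape below reflect Feishu's open API for plain-text messages as I understand it; verify against Feishu's documentation before relying on it:

```python
import json

# Feishu's message-send endpoint (receive_id_type=chat_id for group chats).
# Sending also requires a tenant access token, which this skill leaves to
# OpenClaw rather than storing credentials.
FEISHU_SEND_URL = "https://open.feishu.cn/open-apis/im/v1/messages?receive_id_type=chat_id"

def build_feishu_payload(channel_id: str, text: str) -> dict:
    """Build the JSON body for a plain-text Feishu message."""
    return {
        "receive_id": channel_id,
        "msg_type": "text",
        # Feishu expects 'content' to be a JSON *string*, not a nested object
        "content": json.dumps({"text": text}, ensure_ascii=False),
    }
```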

scripts/sources.json

Configures data sources and the push target.

{
  "keywords": [
    "AI Agent framework",
    "LangChain",
    "AutoGen",
    "CrewAI",
    "Multi-agent system"
  ],
  "rss_sources": [
    "https://news.ycombinator.com/newest",
    "https://www.anthropic.com/news/rss.xml",
    "https://openai.com/blog/rss/"
  ],
  "feishu": {
    "channel_id": "oc_xxxxxx"
  },
  "filters": {
    "min_relevance": 0.7,
    "max_items_per_category": 5
  }
}
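Given the security scan's finding that the distributed sources.json carries a pre-filled channel_id, a defensive loader that refuses to run until the placeholder is replaced may be worth adding; load_sources is a hypothetical guard, not a shipped script:

```python
import json
from pathlib import Path

def load_sources(path: str = "scripts/sources.json") -> dict:
    """Load the config, refusing to run while channel_id is missing or still
    a placeholder - a guard against pushing to a channel you don't control."""
    config = json.loads(Path(path).read_text(encoding="utf-8"))
    channel = config.get("feishu", {}).get("channel_id", "")
    if not channel or channel == "oc_xxxxxx":
        raise ValueError("Edit feishu.channel_id in sources.json before running")
    return config
```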

Full Workflow

1. search_news.py    → fetch raw content from DDG + RSS
        ↓
2. deduplicate.py    → deduplicate (similarity-based)
        ↓
3. categorize.py     → categorize (headlines/frameworks/papers/companies/applications)
        ↓
4. summarize.py      → generate summaries
        ↓
5. push_to_feishu.py → format and push
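categorize.py appears in the workflow but has no dedicated section above. A minimal keyword-based classifier illustrating one way the documented categories could be assigned; the keyword lists and the "headline" fallback are my guesses, not the shipped logic:

```python
# Assumed keyword lists - the shipped categorize.py may use different logic.
CATEGORY_KEYWORDS = [
    ("framework", ("langchain", "autogen", "crewai", "llamaindex", "release")),
    ("paper", ("arxiv", "paper", "research", "benchmark")),
    ("company", ("anthropic", "openai", "announce", "launch")),
    ("industry", ("deploy", "production", "case study", "enterprise")),
]

def categorize(item: dict) -> str:
    """Assign an item to the first matching category; fall back to headline."""
    text = (item["title"] + " " + item.get("snippet", "")).lower()
    for category, words in CATEGORY_KEYWORDS:
        if any(w in text for w in words):
            return category
    return "headline"
```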

Required Tools

  • ddg-search - DuckDuckGo web search
  • blogwatcher - RSS feed monitoring (optional)
  • web_fetch - fetch page details (optional)

Setup

  1. Edit scripts/sources.json

    • Set your own Feishu channel_id
    • Customize the search keywords
    • Add or remove RSS feeds
  2. Test run

    python scripts/search_news.py

  3. Set up a scheduled task (optional): use the cron skill for daily or weekly automatic pushes


Notes

  • Install the Python dependencies before the first run (see scripts/requirements.txt)
  • The Feishu channel_id can be read from the group chat URL or message metadata
  • To push to a private chat, use your personal session_id
