Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using it.

Brand Monitor - Brand Sentiment Monitoring

v1.2.0

New energy vehicle brand sentiment monitoring - automatically searches and analyzes brand mentions across Chinese platforms

by pig@wenxiaoyu
MIT-0
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
SKILL.md and the README make conflicting claims: the English/Chinese description says the skill "does not depend on a third-party search API" and uses a local crawler (crawler/search_crawler.py), but the shipped code is search_crawler_serpapi.py, which requires SerpAPI (requests to serpapi.com) and an API key. The registry metadata lists no required environment variables or primary credential, yet the code and docs require SERPAPI_KEY and the configuration requires feishu_webhook. Because of these mismatches, the declared purpose and capabilities do not align with what the skill actually needs.
Instruction Scope
Prompts and SKILL.md instruct the agent to run the bundled Python crawler, call web_fetch to retrieve full page contents for data enrichment, push structured reports to an external Feishu webhook, and store results in memory. The instructions also reference reading config.json and the system log path (~/.openclaw/logs/gateway.log) for troubleshooting. At runtime the skill therefore performs network I/O to SerpAPI and to arbitrary target URLs (via web_fetch), and transmits data to a third-party webhook. All of this is expected for monitoring, but it contradicts SKILL.md's earlier "no third-party API" claim.
Install Mechanism
There is no platform install spec in the registry (instruction-only), but the repository includes an install.sh that installs Python packages from crawler/requirements.txt (requests, beautifulsoup4, lxml). This is a common, low‑risk install pattern (no obscure external installers or downloads). The release/prepare script references wget of a GitHub release URL in generated release notes, which is normal for packaging and not executed on install.
Credentials
Registry metadata lists no required environment variables or primary credential, but code and multiple docs require SERPAPI_KEY and the config.json requires feishu_webhook. The skill therefore expects access to externally sensitive items (a search API key and a webhook URL) that are not declared in the registry metadata — this omission can lead users to unknowingly provide secrets. Other optional envs (SERPAPI_ENGINE, HTTP(S)_PROXY) are mentioned in docs but not declared either.
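Because the registry does not declare these variables, a user can make the requirements explicit with a pre-flight check before supplying any secrets. The variable names below come from the skill's docs; the helper function itself is hypothetical, not part of the skill:

```python
import os

# Required by the crawler according to the docs (but undeclared in registry metadata)
REQUIRED_VARS = ["SERPAPI_KEY"]
# Mentioned in the docs, likewise undeclared
OPTIONAL_VARS = ["SERPAPI_ENGINE", "HTTP_PROXY", "HTTPS_PROXY"]

def missing_required(env=None):
    """Return the required variables that are absent or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

missing = missing_required()
if missing:
    print("Missing:", ", ".join(missing), "- consider running in --mock mode instead")
```

Running such a check before install makes the undeclared credential surface visible, which is exactly what the registry metadata should have done.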
Persistence & Privilege
The skill does not request always:true and does not modify other skills or system-wide agent settings. It saves monitoring reports to the agent's memory (expected for trend analysis) and writes a local config.json (created by install.sh). No privileged system changes are requested.
Scan Findings in Context
[network-call-serpapi] expected: The crawler issues HTTPS requests to https://serpapi.com/search using requests.get. This is expected if the skill uses SerpAPI, but it contradicts SKILL.md's 'no third‑party search API' claim.
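For reference, the flagged outbound call has roughly the following shape. The parameter names (q, engine, api_key) follow SerpAPI's public API; the exact query the crawler builds is an assumption, and nothing is sent here:

```python
SERPAPI_URL = "https://serpapi.com/search"

def build_search_params(query, api_key, engine="baidu"):
    """Assemble the query parameters for a SerpAPI search (request is not sent)."""
    return {"q": query, "engine": engine, "api_key": api_key}

# The crawler's flagged call is equivalent to (requests comes from crawler/requirements.txt):
#   requests.get(SERPAPI_URL, params=build_search_params("理想汽车", key), timeout=30)
```

Note that the API key travels as a query parameter, so the search terms and the key both leave the local machine, contradicting the "all data processing happens locally" claim.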
[web_fetch-usage] expected: Prompts instruct use of the web_fetch tool to fetch full page content for enrichment. This results in additional outbound network fetches to target platform pages; it's expected for data completeness but increases external network surface.
What to consider before installing
This skill appears to implement multi-platform brand monitoring, but there are notable inconsistencies to address before installing it or providing secrets:

  1. Clarify SerpAPI usage: the README, prompt files, and crawler code require SERPAPI_KEY and make requests to serpapi.com, yet SKILL.md claims "no third-party search API". If you want to avoid third-party services, do not provide SERPAPI_KEY and run only in --mock mode. If you will provide a key, understand SerpAPI's cost and that query data flows to SerpAPI.
  2. Secrets and webhooks: the skill expects a Feishu webhook URL in config.json. Treat that webhook as a secret (it accepts posted data). Only provide a webhook scoped to a channel you control that does not grant wider org access.
  3. Registry metadata mismatch: the registry lists no required environment variables, but the code needs SERPAPI_KEY and the config needs feishu_webhook. Ask the maintainer to declare the required environment variables and credentials in the skill metadata so automated gating and permission prompts are accurate.
  4. Network exposure: the skill uses web_fetch to retrieve full pages (additional outbound requests) and posts findings to external endpoints (SerpAPI and Feishu). In a sensitive environment, run it inside an isolated VM or container with restricted network access, and review or sanitize the outputs sent to Feishu.
  5. Test in mock mode and review outputs: use the --mock option to exercise the skill without external network calls, inspect the JSON outputs, and review the code paths that extract and transform data (crawler/search_crawler_serpapi.py). Confirm retention policies for saved "memory" entries if persistence matters to you.
  6. Code review recommended: the repository is small, so if you intend to run with real keys, review the crawler script and prompts to ensure they do not leak sensitive information (for example, install.sh prints truncated API keys; be cautious with logs). Ask the maintainer to resolve the conflicting documentation (local crawler vs. SerpAPI) and to update the required environment variables in the registry metadata.

If you prefer not to provide external API keys or webhook URLs, do not install the skill with real credentials; run it in mock mode, or ask the author for a version that truly runs entirely locally.


Tags: latest · automotive · monitoring · chinese-platforms · new-energy-vehicle

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

New Energy Vehicle Brand Sentiment Monitoring Skill

A zero-code sentiment monitoring solution built for new energy vehicle (NEV) brands. Automatically monitors brand mentions on major Chinese platforms: Xiaohongshu, Weibo, Autohome, Dongchedi, Yiche, Zhihu, Baidu Tieba, and Douyin/Kuaishou.

When to Use This Skill

Activate this skill when the user mentions:

  • "Run brand monitoring"
  • "Check brand mentions"
  • "Analyze brand trends"
  • "Check brand alerts"
  • "Sentiment monitoring"
  • "Brand reputation"
  • A specific brand name plus "monitor", "analyze", or "trend"

Core Features

1. Daily Monitoring (monitor)

Searches Chinese platforms for brand mentions, analyzes sentiment and reach, and generates a structured report.

Monitored platforms:

  • 📕 Xiaohongshu (xiaohongshu.com) - authentic user experiences
  • 🔴 Weibo (weibo.com) - real-time trending topics
  • 🚗 Autohome (autohome.com.cn) - professional reviews
  • 🎬 Dongchedi (dongchedi.com) - video reviews
  • 🚙 Yiche (yiche.com) - new-car news
  • 🤔 Zhihu (zhihu.com) - in-depth discussion
  • 💬 Baidu Tieba (tieba.baidu.com) - owner communities
  • 🎵 Douyin/Kuaishou - short video

2. Real-Time Alerts (alert)

Checks hourly for mentions that need attention:

  • 🚨 Negative mentions (sentiment < -0.5, reach > 100)
  • ⚠️ Crisis signals (safety, recalls, battery fires, consumer-rights disputes, etc.)
  • 🔥 Viral spread (engagement > 5000)
  • 👥 Mass complaints
  • 📰 Automotive media coverage
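The alert thresholds above can be sketched as a simple classifier. The field names (sentiment, reach, engagement, text) are assumptions, since the mention schema is not shown in this listing:

```python
# Crisis keywords from the docs: safety, recall, battery fire, consumer-rights dispute
CRISIS_KEYWORDS = ("安全", "召回", "自燃", "维权")

def classify_mention(m):
    """Return the alert category for a mention dict, or None if no alert applies."""
    if m.get("sentiment", 0.0) < -0.5 and m.get("reach", 0) > 100:
        return "negative"
    if m.get("engagement", 0) > 5000:
        return "viral"
    if any(kw in m.get("text", "") for kw in CRISIS_KEYWORDS):
        return "crisis"
    return None
```

Note that "negative" requires both conditions (strong negative sentiment and non-trivial reach), so a low-reach complaint does not trigger an hourly alert on its own.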

3. Trend Analysis (analyze-trend)

Analyzes historical data and generates a trend report:

  • 📈 Mention volume trend
  • 😊 Sentiment trend
  • 📱 Platform distribution changes
  • 💪 Reach trend
  • 🔥 Evolution of hot topics

Configuration

Before use, configure the following parameters (via config.json or in conversation):

  • brand_name (required): brand name to monitor
  • brand_aliases (optional): list of brand aliases, such as model names
  • platforms (optional): list of platforms to monitor; defaults to all Chinese platforms
  • monitor_hours (optional): monitoring window in hours; defaults to 24
  • feishu_webhook (required): Feishu bot webhook URL

NEV industry-specific configuration:

  • industry_specific.focus_keywords: keywords to track (range, charging, assisted driving, etc.)
  • industry_specific.kol_min_followers: minimum follower threshold for KOLs
  • industry_specific.media_accounts: list of key automotive media accounts
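Putting these parameters together, a config.json might look like the following. The exact schema is not shown in this listing, so treat this as an illustrative sketch; the webhook URL and brand values are placeholders:

```json
{
  "brand_name": "理想汽车",
  "brand_aliases": ["Li Auto", "L9"],
  "platforms": ["xiaohongshu", "weibo", "autohome", "zhihu"],
  "monitor_hours": 24,
  "feishu_webhook": "https://open.feishu.cn/open-apis/bot/v2/hook/xxxx",
  "industry_specific": {
    "focus_keywords": ["续航", "充电", "智驾"],
    "kol_min_followers": 100000,
    "media_accounts": ["汽车之家", "懂车帝"]
  }
}
```

Remember that feishu_webhook is effectively a secret (see the security notes above) and should not be committed to version control.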

Prompts

This skill includes three main prompt files:

  1. prompts/monitor.md - daily monitoring workflow
  2. prompts/alert.md - real-time alert workflow
  3. prompts/analyze-trend.md - trend analysis workflow

See each prompt file for detailed execution instructions.

Usage Examples

Run daily monitoring

Run brand monitoring

Search all mentions of "Li Auto" in the past 24 hours and analyze them

Check real-time alerts

Check brand alerts

Check immediately for negative mentions that need attention

Analyze trends

Analyze brand trends over the past 7 days

Generate this week's brand monitoring report

Report Format

Reports are pushed via Feishu and include:

  • 📊 Overview statistics (totals, sentiment distribution)
  • 🔥 Top 5 mentions (automotive media influencers flagged)
  • 📱 Platform distribution
  • 🔥 Hot topics (range, charging, assisted driving, etc.)
  • 💡 Key insights
  • 🎯 Recommended actions
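Delivering such a report to a Feishu custom bot is a single POST of a JSON payload. The "text" msg_type below is the simplest documented Feishu bot message format; the summary fields are assumptions, and nothing is sent here:

```python
import json

def build_feishu_payload(total, negative, top_topic):
    """Build the simplest Feishu bot message (msg_type 'text'); request is not sent."""
    text = (f"📊 Mentions: {total}\n"
            f"🚨 Negative: {negative}\n"
            f"🔥 Top topic: {top_topic}")
    return {"msg_type": "text", "content": {"text": text}}

# Sending would be roughly: requests.post(feishu_webhook, json=build_feishu_payload(...))
payload = build_feishu_payload(42, 3, "充电")
print(json.dumps(payload, ensure_ascii=False))
```

Anyone holding the webhook URL can post to the channel, which is why the security review above says to treat it as a secret.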

Automotive Media Recognition

The system automatically identifies and labels automotive media:

  • ⭐ Official media (Autohome, Dongchedi, Yiche, etc.)
  • 🎖️ Verified editors (media-verified accounts)
  • 👑 Industry KOLs (automotive bloggers with over 100k followers)

NEV-Specific Issue Detection

The system pays special attention to common NEV industry issues:

  • Overstated range, range anxiety
  • Charging failures, slow charging
  • Battery degradation, battery safety
  • Battery fires, broken axles, rattles
  • Assisted-driving failures, OTA issues
  • After-sales service, consumer-rights disputes

Technical Implementation

This skill implements search with a custom Python crawler:

  • ✅ No dependency on third-party search APIs (no Brave/Perplexity API key needed)
  • ✅ Fetches data directly from each platform
  • ✅ Proxy support for users inside and outside China
  • ✅ Extensible; easy to add new platforms
  • ✅ All data processing happens locally

Security

This skill:

  • ✅ Uses a local Python crawler (crawler/search_crawler.py)
  • ✅ Only executes the controlled crawler script
  • ✅ Sends no data to third-party servers (other than the configured Feishu webhook)
  • ✅ Respects platform terms of use and anti-crawling policies

Dependencies

  • OpenClaw v2026.2.0+
  • A configured LLM (Claude/GPT/Gemini)
  • A Feishu account (to receive reports)

Documentation

Troubleshooting

If you run into problems:

  1. Check that OpenClaw is running: openclaw doctor
  2. Verify the skill is loaded: openclaw skills list | grep brand-monitor
  3. Check the config file: cat config.json
  4. Check the logs: tail -f ~/.openclaw/logs/gateway.log

Contributing

Contributions welcome! See README.md for contribution guidelines.

License

MIT License


Made with ❤️ for New Energy Vehicle Brands

Files

17 total
