Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

News Aggregator Skill

v1.0.0

Comprehensive news aggregator that fetches, filters, and deeply analyzes real-time content from 28 sources including Hacker News, GitHub, Hugging Face Papers...

Security Scan
Capability signals
Crypto
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name/description match what the code does: many fetchers, Playwright-based deep fetch, and report generation for 28 sources. The README claims 'zero-config' (no API keys) which aligns with the code (no required env vars), but the project does require system-level dependencies (Playwright + Chromium) not declared in registry metadata — mismatch between claimed 'instruction-only/zero-config' and real installation needs.
Instruction Scope
SKILL.md instructs the agent to fetch sites, enrich content, translate to Simplified Chinese, save reports to disk, and run an interactive menu triggered by a magic phrase. It also contains rules that permit 'Smart Fill' supplementation when results are scarce (potentially fabricating items even though other rules say 'Only use data from JSON'), and it mandates always saving reports to reports/YYYY-MM-DD/. MISTAKES.md documents past behavior where the maintainer read arbitrary files (root artifacts) — indicating the runtime workflow has previously included searching the filesystem for data. These broaden the skill's scope beyond pure fetching/formatting and increase risk of unwanted reads/writes or hallucinated output.
Install Mechanism
Registry lists no install spec, but the bundle contains Python scripts that require dependencies and Playwright (README and SKILL.md instruct pip install -r requirements.txt and 'playwright install chromium'). The lack of a formal install entry in registry metadata is inconsistent: users/agents may run code without ensuring dependencies are installed. implementation_plan.md proposes adding a crontab and new scripts (daily_scan.sh, generate_basic_report.py) that are not present in the manifest — this discrepancy is notable and raises risk because scheduling/persistence is being proposed but not implemented in the published package.
Credentials
No environment variables, API keys, or config paths are requested in the registry metadata. The code uses public HTTP endpoints and scraping; it does not request secrets. This is proportionate to a news aggregator's purpose.
Persistence & Privilege
The skill as published does not set always:true and does not require model-disable. However implementation_plan.md explicitly asks the user to approve installing a cron job for daily automated scans (this would be a persistence/privilege escalation if installed). That cron proposal appears in a plan file rather than in active code, so it's not yet enforced — still, it's a clear request that would require elevated permission and explicit user approval before being enabled.
Scan Findings in Context
[unicode-control-chars] unexpected: The SKILL.md contains detected Unicode control characters. These are not needed for a news-aggregator and can be used to hide or manipulate text/prompts (prompt-injection risk). Review SKILL.md for invisible characters and any hidden instructions before trusting the skill.
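One quick way to audit SKILL.md for such characters is a short script like the following (a generic check written for this review, not something shipped with the skill):

```python
import unicodedata

def find_invisible_chars(text: str) -> list[tuple[int, int, str, str]]:
    """Locate control (Cc) and format (Cf) characters, which include the
    zero-width and bidi characters often used to hide instructions.
    Tabs are allowed; newlines never reach the check due to splitlines()."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for col, ch in enumerate(line, 1):
            if unicodedata.category(ch) in ("Cc", "Cf") and ch != "\t":
                hits.append((lineno, col, f"U+{ord(ch):04X}",
                             unicodedata.name(ch, "UNNAMED")))
    return hits

# Example: a zero-width space hidden in an innocuous-looking line.
sample = "Fetch the news\u200b and nothing else."
print(find_invisible_chars(sample))  # [(1, 15, 'U+200B', 'ZERO WIDTH SPACE')]
```

Running this over the bundle's SKILL.md (e.g. `Path("SKILL.md").read_text()`) would surface the flagged characters with their exact positions.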
What to consider before installing
This skill appears to implement what it claims (web scraping + Playwright-based deep fetch + markdown reports) but has several worrying inconsistencies and operational risks. Before installing or enabling it:

- Do NOT grant system-level scheduling (crontab) or run suggested cron commands without manual review; implementation_plan.md proposes this but it is not present as an installed artifact. Scheduling would allow persistent, autonomous network access.
- Treat the SKILL.md as potentially adversarial: remove or inspect any invisible Unicode control characters and any 'magic phrase' triggers (the skill listens for the phrase "如意如意", roughly "as you wish, as you wish").
- Install and run in a sandboxed environment first (container or VM). Verify dependencies (pip packages and Playwright + Chromium) are installed explicitly; the registry lacks a formal install spec even though the scripts need these runtimes.
- Audit scripts that launch Playwright and write files (reports/YYYY-MM-DD/): check file paths and ensure the skill cannot read unrelated user files. MISTAKES.md shows the author previously read files outside the expected path, so confirm the runtime does not search or read arbitrary filesystem locations.
- Be aware of the 'Smart Fill' behavior: the skill can supplement missing items (marked with ⚠️), which could lead to fabricated entries. If you need strictly factual output, disable the supplementing behavior or require manual review.

If you want to proceed: run the code locally in a restricted environment, manually install Playwright and Chromium, inspect all scripts (especially any that would modify crontab or other system state), and only opt into automation after understanding and approving the exact crontab command and scripts involved.

Like a lobster shell, security has layers — review code before you run it.

latest: vk9742ascf4c1g67150b8fj88e984fgbk
54 downloads
0 stars
1 version
Updated 1w ago
v1.0.0
MIT-0

News Aggregator Skill

Fetches real-time hot news from 28 sources and generates deep analysis reports in Chinese.


🔄 Universal Workflow (3 Steps)

Every news request follows the same workflow, regardless of source or combination:

Step 1: Fetch Data

# Single source
python3 scripts/fetch_news.py --source <source_key> --no-save

# Multiple sources (comma-separated)
python3 scripts/fetch_news.py --source hackernews,github,wallstreetcn --no-save

# All sources (broad scan)
python3 scripts/fetch_news.py --source all --limit 15 --deep --no-save

# With keyword filter (auto-expand: "AI" → "AI,LLM,GPT,Claude,Agent,RAG")
python3 scripts/fetch_news.py --source hackernews --keyword "AI,LLM,GPT" --deep --no-save

Step 2: Generate Report

Read the output JSON and format every item using the Unified Report Template below. Translate all content to Simplified Chinese.

Step 3: Save & Present

Save the report to reports/YYYY-MM-DD/<source>_report.md, then display the full content to the user.
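Steps 2 and 3 can be sketched in Python. The item field names (title, url, source, time, heat, summary) are assumptions inferred from the report template below, not a documented schema of fetch_news.py's JSON output:

```python
from datetime import date
from pathlib import Path

# Assumed item fields, inferred from the Unified Report Template;
# fetch_news.py's real JSON schema may differ.
TEMPLATE = (
    "#### {n}. [{title}]({url})\n"
    "- **Source**: {source} | **Time**: {time} | **Heat**: 🔥 {heat}\n"
    "- **Summary**: {summary}\n"
)

def render_report(items: list[dict]) -> str:
    # Step 2: format every item with the unified template.
    return "\n".join(
        TEMPLATE.format(n=i, **item) for i, item in enumerate(items, 1)
    )

def save_report(markdown: str, source_key: str, root: str = "reports") -> Path:
    # Step 3: save to reports/YYYY-MM-DD/<source>_report.md before presenting.
    outdir = Path(root) / date.today().isoformat()
    outdir.mkdir(parents=True, exist_ok=True)
    path = outdir / f"{source_key}_report.md"
    path.write_text(markdown, encoding="utf-8")
    return path
```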


📰 Unified Report Template

All sources use this single template. Show/hide optional fields based on data availability.

#### N. [标题 (中文翻译)](https://original-url.com)
- **Source**: 源名 | **Time**: 时间 | **Heat**: 🔥 热度值
- **Links**: [Discussion](hn_url) | [GitHub](gh_url)     ← show only when the data exists
- **Summary**: 一句话中文摘要。
- **Deep Dive**: 💡 **Insight**: 深度分析(背景、影响、技术价值)。

Source-Specific Adaptations

Only the differences from the universal template:

| Source | Adaptation |
|---|---|
| Hacker News | MUST include [Discussion](hn_url) link |
| GitHub | Use 🌟 Stars for Heat, add Lang field, add #Tags in Deep Dive |
| Hugging Face | Use 🔥 +N upvotes for Heat, include [GitHub](url) if present, write 深度解读 (an in-depth interpretation), not just a translated abstract |
| Weibo | Preserve exact heat text (e.g. "108万", i.e. 1.08 million) |

🛠️ Tools

fetch_news.py

| Arg | Description | Default |
|---|---|---|
| --source | Source key(s), comma-separated. See table below. | all |
| --limit | Max items per source | 15 |
| --keyword | Comma-separated keyword filter | None |
| --deep | Download article text for richer analysis | Off |
| --save | Force save to reports dir | Auto for single source |
| --outdir | Custom output directory | reports/YYYY-MM-DD/ |

Available Sources (28)

| Category | Key | Name |
|---|---|---|
| Global News | hackernews | Hacker News |
| | 36kr | 36氪 (36Kr) |
| | wallstreetcn | 华尔街见闻 (WallStreetCN) |
| | tencent | 腾讯新闻 (Tencent News) |
| | weibo | 微博热搜 (Weibo Hot Search) |
| | v2ex | V2EX |
| | producthunt | Product Hunt |
| | github | GitHub Trending |
| AI/Tech | huggingface | HF Daily Papers |
| | ai_newsletters | All AI Newsletters (aggregate) |
| | bensbites | Ben's Bites |
| | interconnects | Interconnects (Nathan Lambert) |
| | oneusefulthing | One Useful Thing (Ethan Mollick) |
| | chinai | ChinAI (Jeffrey Ding) |
| | memia | Memia |
| | aitoroi | AI to ROI |
| | kdnuggets | KDnuggets |
| Podcasts | podcasts | All Podcasts (aggregate) |
| | lexfridman | Lex Fridman |
| | 80000hours | 80,000 Hours |
| | latentspace | Latent Space |
| Essays | essays | All Essays (aggregate) |
| | paulgraham | Paul Graham |
| | waitbutwhy | Wait But Why |
| | jamesclear | James Clear |
| | farnamstreet | Farnam Street |
| | scottyoung | Scott Young |
| | dankoe | Dan Koe |

daily_briefing.py (Morning Routines)

Pre-configured multi-source profiles:

python3 scripts/daily_briefing.py --profile <profile>

| Profile | Sources | Instruction File |
|---|---|---|
| general | HN, 36Kr, GitHub, Weibo, PH, WallStreetCN | instructions/briefing_general.md |
| finance | WallStreetCN, 36Kr, Tencent | instructions/briefing_finance.md |
| tech | GitHub, HN, Product Hunt | instructions/briefing_tech.md |
| social | Weibo, V2EX, Tencent | instructions/briefing_social.md |
| ai_daily | HF Papers, AI Newsletters | instructions/briefing_ai_daily.md |
| reading_list | Essays, Podcasts | (Use universal template) |

Workflow: Execute script → Read corresponding instruction file → Generate report following both the instruction file AND the universal template.
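This workflow can be sketched as follows; invoking the script via subprocess and the profile-to-file mapping below (copied from the table above) are assumptions about how an agent would drive it:

```python
import subprocess
from pathlib import Path

# Profile → instruction file, as listed in the profiles table.
PROFILES = {
    "general": "instructions/briefing_general.md",
    "finance": "instructions/briefing_finance.md",
    "tech": "instructions/briefing_tech.md",
    "social": "instructions/briefing_social.md",
    "ai_daily": "instructions/briefing_ai_daily.md",
    "reading_list": None,  # no profile file: use the universal template
}

def run_briefing(profile: str) -> str:
    """Execute the briefing script and return any extra instructions."""
    if profile not in PROFILES:
        raise ValueError(f"unknown profile: {profile}")
    # 1. Execute the script.
    subprocess.run(
        ["python3", "scripts/daily_briefing.py", "--profile", profile],
        check=True,
    )
    # 2. Read the corresponding instruction file, if the profile has one.
    instruction_file = PROFILES[profile]
    extra = Path(instruction_file).read_text() if instruction_file else ""
    # 3. The agent then generates the report following both these
    #    instructions and the universal template (outside this sketch).
    return extra
```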


⚠️ Rules (Strict)

  1. Language: ALL output in Simplified Chinese (简体中文). Keep well-known English proper nouns (ChatGPT, Python, etc.).
  2. Time: MANDATORY field. Never skip. If missing in JSON, mark as "Unknown Time". Preserve "Real-time" / "Today" / "Hot" as-is.
  3. Anti-Hallucination: Only use data from the JSON. Never invent news items. Use simple SVO sentences. Do not fabricate causal relationships.
  4. Smart Keyword Expansion: When user says "AI" → auto-expand to "AI,LLM,GPT,Claude,Agent,RAG,DeepSeek". Similar expansions for other domains.
  5. Smart Fill: If results < 5 items in a time window, supplement with high-value items from wider range. Mark supplementary items with ⚠️.
  6. Save: Always save report to reports/YYYY-MM-DD/ before displaying.
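Rule 4's expansion can be sketched as a simple lookup table. Only the "AI" expansion is spelled out above, so any other entries would be placeholders:

```python
# Only the "AI" expansion is given in the rules; other domains would
# need their own (hypothetical) entries.
KEYWORD_EXPANSIONS = {
    "AI": ["AI", "LLM", "GPT", "Claude", "Agent", "RAG", "DeepSeek"],
}

def expand_keywords(user_keywords: str) -> list[str]:
    """Expand each comma-separated keyword; unknown keywords pass through."""
    expanded = []
    for kw in (k.strip() for k in user_keywords.split(",")):
        expanded.extend(KEYWORD_EXPANSIONS.get(kw, [kw]))
    # De-duplicate while preserving order.
    return list(dict.fromkeys(expanded))

print(",".join(expand_keywords("AI")))  # AI,LLM,GPT,Claude,Agent,RAG,DeepSeek
```

The joined result is what would be passed to fetch_news.py's --keyword flag.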

📋 Interactive Menu

When the user says "如意如意" (the skill's magic trigger phrase, roughly "as you wish, as you wish") or asks for "menu/help":

  1. Read templates.md
  2. Display the menu
  3. Execute the user's selection using the Universal Workflow above

Requirements

  • Python 3.8+, pip install -r requirements.txt
  • Playwright (for HF Papers & Ben's Bites): playwright install chromium
