Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

AI Daily News

Automated AI daily news collection and reporting system. Collects AI papers from arXiv, Hugging Face, AI products from Product Hunt, YouTube videos from AI c...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 25 · 0 current installs · 0 all-time installs
by JosephHou_BY (@josephleohou-ui)
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name/description (collect AI news and push to Feishu) matches the code: collectors for arXiv, Hugging Face, Product Hunt, YouTube and RSS. However, the code references an 'agent-browser' CLI as an alternate scraping mechanism (browser_fallback.run_agent_browser_command) which is not declared as a required binary in metadata and is not described in the Quick Start; that is an undocumented external dependency and reduces coherence.
Instruction Scope
SKILL.md instructs the user to install dependencies and Playwright and to run the collection/push scripts. The code, however, executes shell commands (subprocess.run with shell=True) and will attempt to pip-install yt-dlp at runtime if missing. The browser fallback supports both Playwright and an 'agent-browser' CLI and will evaluate JS snippets via agent-browser eval — this introduces runtime execution of arbitrary code via shell and a pathway for command execution that is not fully documented in SKILL.md. The skill also writes config.json, logs, and output data/daily_news.json to disk (expected), and posts collected data to Feishu webhook URLs provided in the config (expected).
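The runtime pip-install behavior described above typically follows a pattern like the sketch below. This is illustrative only: the function name and structure are assumptions, not the skill's actual code.

```python
import importlib
import subprocess
import sys

def ensure_package(pkg: str):
    """Return the imported module, pip-installing it at runtime if missing.

    Illustrative sketch of the runtime yt-dlp install the scan describes,
    not the skill's actual code.
    """
    try:
        return importlib.import_module(pkg)
    except ImportError:
        # This is the behavior to watch for: a shell-out to pip at runtime.
        subprocess.check_call([sys.executable, "-m", "pip", "install", pkg])
        return importlib.import_module(pkg)
```

Reviewers generally prefer pre-installed, pinned dependencies over this pattern, since a runtime install pulls whatever version is current on PyPI at execution time.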
Install Mechanism
There is no formal install spec (instruction-only), which is low risk, but SKILL.md expects users to run 'pip install -r references/requirements.txt' and 'playwright install chromium'. The code may auto-run 'pip install yt-dlp' at runtime. No archives or remote code downloads are embedded in the skill package itself. The undisclosed dependency on an 'agent-browser' executable (invoked via the shell) is the main install/operational mismatch.
Credentials
The skill does not request environment variables in metadata. Credentials are provided via config.json (Feishu webhook_url, optional app_id/app_secret) which is aligned with the purpose of pushing reports. There are no requests for unrelated secrets or cloud credentials in the code. The presence of extra fields in config (app_id/app_secret) that are not used by push logic is benign but slightly sloppy.
Persistence & Privilege
The skill does not request permanent 'always' inclusion and does not autonomously alter other skills or global agent settings. It provides a scheduler script that, if run by the user, will run daily collection/push jobs — expected for this purpose.
What to consider before installing
This skill appears to do what it claims (collect news, generate a report, post to Feishu), but check a few things before installing or running it:

  1. Inspect config.json (created by setup_config.py): it stores your Feishu webhook and any app credentials in plaintext. Only supply webhook/app secrets you trust sending to this skill, and consider using a least-privileged webhook.
  2. The code may call an undocumented 'agent-browser' CLI for fallback scraping. If you don't have or don't trust such a binary, disable or fix browser_fallback.py, or ensure only the Playwright fallback is used.
  3. The code executes shell commands (subprocess.run with shell=True) and may eval JS via agent-browser. Run the skill in an isolated environment (container or VM) rather than on a sensitive host.
  4. The YouTube collector will attempt to pip-install yt-dlp at runtime if it is missing. Prefer pre-installing dependencies yourself (pip install -r references/requirements.txt and playwright install chromium) and review references/requirements.txt for the package list.
  5. For tighter assurance, review browser_fallback.py and any place that builds shell command strings (agent-browser eval, subprocess.check_call) to confirm no untrusted input reaches the shell.
  6. The skill writes files (logs, data/daily_news.json) and makes network calls to public feeds and to your configured Feishu webhook, so limit its network and filesystem access to a dedicated directory.

Given the undocumented agent-browser invocation and the shell execution paths, proceed with caution; if you cannot review the code yourself, run it only in an isolated/test environment.
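Reviewing for shell=True call sites can be partially automated. The audit script below is a sketch (it is not part of the skill) that flags the risky subprocess pattern in a directory of Python files before you run anything:

```python
import re
from pathlib import Path

# Flags subprocess calls that pass shell=True, the risky pattern noted above.
RISKY = re.compile(
    r"subprocess\.(run|call|check_call|check_output|Popen)\([^)]*shell\s*=\s*True"
)

def find_shell_calls(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, source line) for each shell=True call site."""
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(
            path.read_text(errors="ignore").splitlines(), start=1
        ):
            if RISKY.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

A simple regex scan like this misses multi-line calls, so treat it as a starting point for manual review, not a substitute.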

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
latest: vk975gav3qbgbzq71syg3d78f4d831p5v


SKILL.md

AI Daily News Skill

Automatically collect and report AI news from multiple sources with fallback browser scraping.

Quick Start

# Install dependencies
pip install -r references/requirements.txt
playwright install chromium

# Configure
python scripts/setup_config.py

# Run collection
python scripts/collect_ai_news.py

# Generate and push report
python scripts/push_to_feishu.py

Supported Data Sources

| Source | Primary Method | Fallback Method |
|--------|----------------|-----------------|
| arXiv Papers | RSS API | Playwright browser |
| Hugging Face Papers | RSS Feed | Playwright browser |
| Product Hunt | RSS Feed | Playwright browser |
| YouTube AI Creators | yt-dlp | Playwright browser |
| PaperWeekly | RSS | requests |
| Custom RSS | feedparser | requests |

Configuration

Edit references/config.example.json or run setup_config.py:

{
  "feishu": {
    "webhook_url": "https://open.feishu.cn/open-apis/bot/v2/hook/xxx",
    "chat_id": "oc_xxx"
  },
  "sources": {
    "arxiv": {"enabled": true, "categories": ["cs.CL", "cs.LG", "cs.AI"]},
    "youtube": {
      "enabled": true,
      "creators": ["andrew_ng", "matt_wolfe", "ai_explained", "greg_isenberg"]
    },
    "paperweekly": {"enabled": true, "rss_url": ""}
  }
}
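A script consuming this config would load and sanity-check it roughly like the sketch below. The field names follow the example above; the webhook-host check is a suggested precaution, not logic from the skill's setup_config.py.

```python
import json
from pathlib import Path

def load_config(path: str = "config.json") -> dict:
    """Load the skill's config and sanity-check the Feishu webhook host.

    Field names follow the config example; the host check is a suggested
    safeguard, not the skill's actual validation.
    """
    cfg = json.loads(Path(path).read_text(encoding="utf-8"))
    webhook = cfg.get("feishu", {}).get("webhook_url", "")
    if not webhook.startswith("https://open.feishu.cn/"):
        raise ValueError("unexpected webhook host; refusing to push")
    return cfg
```

Rejecting unexpected webhook hosts keeps a mis-edited config from posting your collected data somewhere unintended.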

YouTube Creators

Available creator keys:

  • andrew_ng - Andrew Ng (DeepLearning.AI)
  • matt_wolfe - Matt Wolfe
  • ai_explained - AI Explained
  • ai_with_oliver - AI with Oliver
  • greg_isenberg - Greg Isenberg

Scripts Overview

| Script | Purpose |
|--------|---------|
| collect_ai_news.py | Main collector with fallback logic |
| youtube_collector.py | YouTube video collection |
| rss_collector.py | RSS feed collection |
| browser_fallback.py | Browser-based fallback scraping |
| push_to_feishu.py | Report generation and Feishu push |
| daily_scheduler.py | Scheduled task runner |
| setup_config.py | Interactive configuration setup |
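daily_scheduler.py is not shown here; a minimal stdlib-only sketch of a daily scheduling loop, assuming a fixed run hour (the actual script may use a scheduling library instead), might look like:

```python
import time
from datetime import datetime, timedelta

def seconds_until(hour: int, now: datetime) -> float:
    """Seconds from `now` until the next occurrence of `hour`:00."""
    target = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)
    return (target - now).total_seconds()

def run_daily(job, hour: int = 8) -> None:
    """Sleep until `hour`:00 each day, then run `job` (e.g. collect + push)."""
    while True:
        time.sleep(seconds_until(hour, datetime.now()))
        job()
```

Separating the time arithmetic (`seconds_until`) from the loop makes the schedule logic testable without waiting a day.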

Fallback Mechanism

When primary methods (RSS/API/yt-dlp) fail:

  1. Automatically retries with browser-based scraping
  2. Uses Playwright for JavaScript-rendered pages
  3. Seamless integration - same output format
  4. Logs fallback usage for monitoring
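The four steps above can be sketched as a wrapper that tries the primary collector, logs the failure, and switches to the browser-based fallback. This is a sketch of the described flow, not the skill's actual collect_ai_news.py:

```python
import logging

log = logging.getLogger("collector")

def collect_with_fallback(primary, fallback, source: str) -> list:
    """Run the primary collector (RSS/API/yt-dlp); on any failure, log it
    and run the browser-based fallback, which returns the same item format.
    """
    try:
        return primary()
    except Exception as exc:
        # Step 4: record fallback usage for monitoring.
        log.warning("%s: primary failed (%s); using browser fallback", source, exc)
        return fallback()
```

Because both callables return the same output format (step 3), downstream report generation never needs to know which path produced the items.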

Report Format

Generated reports include:

  • 📚 arXiv papers with abstracts
  • 🚀 Product Hunt AI products
  • 🤗 Hugging Face papers
  • 📺 YouTube video summaries
  • 📰 PaperWeekly interpretations
  • 📊 Source statistics

Troubleshooting

  • arXiv returns 0 papers: check the days_back parameter or your network connection
  • YouTube fails: ensure yt-dlp is installed; the Playwright fallback is available
  • RSS timeouts: the browser fallback will attempt direct requests
  • Feishu push fails: verify webhook_url and chat_id in config

Advanced: Adding Custom Sources

  1. Add RSS feed to rss section in config
  2. Or implement new collector in scripts/
  3. Register in collect_ai_news.py
  4. Add fallback method in browser_fallback.py
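Steps 2-3 could be wired up with a simple registry pattern like the sketch below. The names COLLECTORS, register, and my_blog are hypothetical, not identifiers from the skill's code:

```python
# Hypothetical registry sketch; not identifiers from the skill's code.
COLLECTORS = {}

def register(name: str):
    """Decorator that registers a collector under a source name (step 3)."""
    def deco(fn):
        COLLECTORS[name] = fn
        return fn
    return deco

@register("my_blog")
def collect_my_blog():
    # A real collector would call feedparser.parse(...) here (steps 1-2).
    return [{"source": "my_blog", "title": "Example post"}]

def collect_all():
    """Run every registered collector and merge the items."""
    items = []
    for fn in COLLECTORS.values():
        items.extend(fn())
    return items
```

A registry keeps collect_ai_news.py free of per-source conditionals: adding a source is one decorated function rather than an edit to the main loop.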

See references/DEVELOPMENT.md for detailed extension guide.

Files

13 total
