ℹ
Purpose & Capability
The skill's stated goal is to "track trending topics across the web and generate/optimize viral articles," and SKILL.md describes a complete workflow of web search (Weibo, Zhihu, Douyin, Baidu, etc.) plus cover-image generation, which aligns with the name and description. However, the repository contains no web-collection or platform-API implementation and no dependency declarations; the skill is instruction-only (the agent is expected to carry out collection using its own browsing/network capabilities). That is workable in itself, but it should be stated explicitly: how the platforms are accessed, and whether API keys or authentication are required. The current lack of documentation may lead to unexpected behavior if the agent attempts scraping or needs credentials.
!
Instruction Scope
SKILL.md tells the agent to perform broad, parallel web collection across many platforms and to run automated quality checks (it references scripts/check_prohibited_words.py). The included Python script is a local prohibited-words checker that (a) contains a hard-coded file path (/workspace/projects/hotspot-article-generator/assets/article2_chip_breakthrough.txt) which does not appear in the manifest, so the script falls back to an embedded example, and (b) exits non-zero when violations are found. The script does not contact remote endpoints, but SKILL.md permits and encourages wide web scraping and searching, giving the agent broad discretion to fetch external content. The combination of vague "use network search tools" instructions with no explicit allowed/blocked endpoints is a scope-creep and operational-ambiguity risk.
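For reference, the pattern described above looks roughly like the following minimal sketch. Only the hard-coded path, the embedded-sample fallback, and the non-zero exit are taken from the findings; the word list, the sample text, and all names are hypothetical stand-ins, not the script's actual contents.

```python
import sys
from pathlib import Path

# Hard-coded path from the actual script; it is absent from the package
# manifest, so on a fresh install this lookup fails.
ARTICLE_PATH = Path(
    "/workspace/projects/hotspot-article-generator/assets/"
    "article2_chip_breakthrough.txt"
)

# Hypothetical placeholder: the real script embeds a sample article to
# fall back on when the hard-coded file is missing.
EMBEDDED_EXAMPLE = "Built-in sample article text used as the fallback."

# Hypothetical word list for illustration; the real list lives in the script.
PROHIBITED_WORDS = ["第一", "最佳"]

def main() -> None:
    if ARTICLE_PATH.exists():
        text = ARTICLE_PATH.read_text(encoding="utf-8")
    else:
        # Fallback: in practice the checker ends up analyzing its own
        # embedded sample rather than the user's article.
        text = EMBEDDED_EXAMPLE

    violations = [word for word in PROHIBITED_WORDS if word in text]
    for word in violations:
        print(f"prohibited word found: {word}")

    # Non-zero exit on violations: any automated pipeline that treats a
    # non-zero exit code as fatal will abort here.
    sys.exit(1 if violations else 0)

if __name__ == "__main__":
    main()
```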
✓
Install Mechanism
No install spec; skill is instruction-only with one small helper script. This minimizes installation risk (nothing downloaded or auto-executed on install).
✓
Credentials
The skill requests no environment variables, no credentials, and no config paths — appropriate for a content generation/quality-check skill. There are no declared secrets or unrelated credential requests.
✓
Persistence & Privilege
Flags: always=false (default), autonomous invocation allowed (platform default). The skill does not request persistent presence or modify other skills. No additional privilege escalation indicators present.
What to consider before installing
Before installing or enabling this skill, check the following:
- Clarify how '热点采集' (hotspot collection) runs: SKILL.md expects the agent to pull data from many platforms, but the package contains no scrapers or API integrations. Ask the author (or inspect how you'll invoke the skill) whether the agent will use its own web-browsing tool, third‑party APIs, or custom scrapers, and which endpoints will be contacted.
- Review the prohibited-words script's behavior: scripts/check_prohibited_words.py uses a hard-coded workspace path (which doesn't exist in the manifest) and falls back to an embedded example article, so in practice it may only ever analyze its built-in sample unless you modify it. It also calls sys.exit(1) when violations exist, which could cause automated flows to abort; confirm this is intended (a quick way to verify both points is sketched after this list).
- Confirm network/privacy expectations: because the instructions tell the agent to collect content from many public platforms, confirm whether any scraped content or generated drafts will be transmitted to external services you don't control (e.g., image-generation APIs or analytics endpoints). If the agent has browsing/network capability, review its outbound requests or run in a sandbox first.
- Test with non-sensitive data and in a sandbox: run the skill locally or in an isolated environment using your own sample articles to see what files/paths are accessed and whether the agent attempts unexpected network calls.
- If you plan to use platform APIs (Weibo, Douyin, Zhihu, etc.), provide credentials only when you understand why they're required and prefer creating scoped, limited credentials. The skill does not currently request any keys — if the implementation later asks for broad tokens, treat that as a red flag.
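A minimal harness like the one below shows which input the checker actually analyzes and what exit code it produces, before you wire it into any pipeline. It assumes the script is runnable as a plain Python file from the package root; my_sample.txt is a placeholder for your own article, and the timeout is an arbitrary safety margin.

```python
import subprocess
import sys

# Run the bundled checker in isolation, passing a sample article of your
# own. Per the review above, the script uses a hard-coded path rather than
# its arguments, so the argument may simply be ignored -- which is exactly
# what this test would reveal.
result = subprocess.run(
    [sys.executable, "scripts/check_prohibited_words.py", "my_sample.txt"],
    capture_output=True,
    text=True,
    timeout=30,  # a local word check needs no network or long runtimes
)

print("exit code:", result.returncode)  # non-zero means "violations found"
print(result.stdout)
# If the report references the embedded sample instead of my_sample.txt,
# the hard-coded-path fallback is in effect.
```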
If you need higher assurance, ask the publisher for: (1) a description of how hotspot collection is implemented (endpoints, frequency, rate limits), (2) a version of the prohibited-words checker that accepts input paths or stdin instead of a hard-coded file, and (3) a clear data flow diagram showing where generated content and scraped sources are stored or transmitted.
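For request (2), one possible shape, offered as a sketch rather than a drop-in replacement, is a checker that accepts file paths or stdin and makes the exit behavior configurable. The check logic and word list here are illustrative placeholders; the real ones would be carried over from the original script.

```python
import argparse
import sys
from pathlib import Path

# Illustrative placeholder; the real list would be carried over from the
# original script or loaded from a config file.
PROHIBITED_WORDS = ["第一", "最佳"]

def check(text: str) -> list[str]:
    """Return the prohibited words that appear in the text."""
    return [word for word in PROHIBITED_WORDS if word in text]

def main() -> int:
    parser = argparse.ArgumentParser(description="Prohibited-words checker")
    parser.add_argument("paths", nargs="*",
                        help="article files to check; reads stdin when omitted")
    parser.add_argument("--warn-only", action="store_true",
                        help="report violations but exit 0 so automated "
                             "flows are not aborted")
    args = parser.parse_args()

    # Accept explicit paths, or fall back to stdin instead of a
    # hard-coded workspace file.
    if args.paths:
        sources = [(p, Path(p).read_text(encoding="utf-8")) for p in args.paths]
    else:
        sources = [("<stdin>", sys.stdin.read())]

    found = False
    for name, text in sources:
        for word in check(text):
            print(f"{name}: prohibited word found: {word}")
            found = True

    return 1 if (found and not args.warn_only) else 0

if __name__ == "__main__":
    sys.exit(main())
```

Invoked as `python check_prohibited_words.py draft.txt` or `cat draft.txt | python check_prohibited_words.py --warn-only`, this removes both the hard-coded path and the hard abort.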