市场洞察师 (Market Insight Analyst)

v1.0.0

Global web-fiction market insight agent. Trigger this skill when you need to analyze web-fiction market trends, research platform policies, monitor competitor data, or back topic selection with data. Used for: (1) analyzing web-fiction market opportunities on platforms such as Amazon KDP, Qidian (起点), Fanqie (番茄), and Jinjiang (晋江) (2) tracking policy changes and algorithm preferences across platforms (3) monitoring competitor novel metrics (reads / completion rate / revenue estimates) (4) generating market analysis reports to support topic pla...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description, platform guide, and data_collector.py align: all focus on scraping public platform rankings and producing reports. No unrelated credentials, binaries, or installers are requested.
Instruction Scope
SKILL.md instructs only to collect public data, obey robots.txt, and avoid PII; this stays within scope. It references storing data under data/raw and data/reports, plus a config file, config/platforms.yaml, for platform accounts/API configuration. The script files currently contain TODOs and do not implement hidden exfiltration. Note: SKILL.md and the script expect platform account/API config (and possibly cookies or proxies) even though the skill declares no required credentials; that is an operational gap rather than explicitly malicious behavior.
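The referenced config/platforms.yaml is not included in the skill, so its schema is unknown. A hypothetical sketch of what the script might expect, with every field name a guess based only on the accounts/API/proxy mentions above, could look like:

```yaml
# Hypothetical layout — field names are illustrative, not taken from the skill.
qidian:
  base_url: https://www.qidian.com
  # credential the operator may need to supply; keep out of version control
  api_key: ${QIDIAN_API_KEY}
proxies:
  http: http://127.0.0.1:8080
```

Whatever the real schema turns out to be, treating this file as a secrets file (environment-variable indirection, restrictive permissions) is the safer default.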
Install Mechanism
No install spec; the skill is instruction-only plus a small local Python script. Nothing is downloaded or executed from remote URLs. Lowest-risk install posture.
Credentials
The skill declares no required environment variables, which is consistent with the included files. However, SKILL.md references config/platforms.yaml for platform accounts/API configuration, and the platform guide mentions author dashboards and APIs that may require cookies, API keys, or proxies. Expect the operator to supply credentials or proxies outside the declared requirements; store them securely and review config files before enabling.
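One concrete way to carry out that review is a quick scan of the config file for credential-like keys before enabling the skill. The helper and key-name pattern below are assumptions for illustration, not part of the skill:

```python
import re

# Key names that commonly carry secrets; the pattern is an assumption,
# extend it for whatever the real config file turns out to contain.
SECRET_KEY_RE = re.compile(r"(api[_-]?key|token|cookie|password|secret)", re.I)

def find_secret_keys(config_text: str) -> list[str]:
    """Return config lines whose key name looks credential-bearing."""
    hits = []
    for line in config_text.splitlines():
        key = line.split(":", 1)[0].strip()
        if SECRET_KEY_RE.search(key):
            hits.append(line.strip())
    return hits

sample = """\
qidian:
  api_key: REDACTED
  base_url: https://example.invalid
"""
print(find_secret_keys(sample))  # → ['api_key: REDACTED']
```

Any hit is a value you are storing on your own system, so it deserves the same handling as any other secret.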
Persistence & Privilege
always:false and no system-wide changes are requested. The skill writes data to local data/ directories (normal for a collector). It does not request to modify other skills or force inclusion.
Assessment
This skill is coherent with its stated purpose and contains only local, reviewable code, but it is incomplete: the crawlers are TODO stubs. Practical operation will likely require you to provide platform credentials, cookies, or proxies (and possibly API keys) via the referenced config/platforms.yaml or other means; those secrets would be stored on your system, so review how and where you store them. Before enabling: (1) inspect config/platforms.yaml and any place you would put credentials; (2) run the script in a sandboxed environment with restricted network access while you test; (3) confirm crawling respects robots.txt and platform ToS; (4) consider limiting autonomous invocation, or grant the skill only when needed. If you need higher assurance, ask the author to declare the required env vars and provide a fully implemented, auditable crawler.
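The robots.txt check in step (3) can be done with Python's standard library before any real crawling. The robots.txt body and URLs below are made up for illustration; against a live platform you would use rp.set_url(...) and rp.read() instead of parsing a literal:

```python
from urllib.robotparser import RobotFileParser

# Parse an illustrative robots.txt body (no network access needed).
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# A rankings page outside the disallowed prefix is fetchable; the
# disallowed path is not.
print(rp.can_fetch("*", "https://example.com/rankings"))   # True
print(rp.can_fetch("*", "https://example.com/private/x"))  # False
```

Running this kind of check inside the sandboxed test run from step (2) gives evidence of compliance before the crawler ever touches a platform.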

Like a lobster shell, security has layers — review code before you run it.

latest: vk972n0eqb6t55j40hh04a8gb1x843vv0

