Social Media Research Assistant Skill
v1.0.2 · Collects Bilibili/Douyin/YouTube/Zhihu content through a local media-agent-crawler HTTP service (no MCP client installation required). Use it when the user wants to collect content from these platforms and the app is already running locally (default http://127.0.0.1:39002).
by 梅花三十三 (@sansan-mei)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan (OpenClaw): Benign, high confidence

Purpose & Capability
The name and description match the actual behavior: the skill is a client for a local media-agent-crawler service (default http://127.0.0.1:39002). The included scripts and SKILL.md describe only crawl/list/get operations for the declared platforms and do not request unrelated cloud credentials or access to unrelated subsystems.
Instruction Scope
Runtime instructions and scripts only construct JSON payloads and call REST (/start-crawl/...) or MCP (/mcp) endpoints on the configured base URL. They do not read arbitrary files, shell history, or system credentials. The SKILL.md correctly documents endpoints, parameters (including optional cookies), and expected behavior.
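The wrapper pattern described above can be sketched in shell. The /start-crawl/... path form follows the review's description, but the platform segment and payload fields below are illustrative assumptions, not taken from SKILL.md:

```shell
# Dry-run sketch of the kind of request the wrapper scripts construct.
# The platform segment ("bilibili") and payload fields are hypothetical.
BASE_URL="${BIL_CRAWL_URL:-http://127.0.0.1:39002}"
PAYLOAD='{"url":"https://www.bilibili.com/video/example","cookies":""}'

# Build the curl invocation as a string and print it instead of
# executing it, since the local service may not be running.
CMD="curl -s -X POST $BASE_URL/start-crawl/bilibili -H 'Content-Type: application/json' -d '$PAYLOAD'"
echo "$CMD"
```

The point is the shape of the call: a JSON body posted to a platform-specific REST path on the configured base URL, nothing else.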
Install Mechanism
There is no install spec (instruction-only skill) and no downloads/extraction. The provided scripts are simple wrappers that use curl and node via inline node -e snippets; they do not install external code.
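A minimal sketch of that curl-plus-inline-node wrapper style, using a canned response so it runs without a live service (the response shape is invented for illustration):

```shell
# Stand-in for a crawler response; the real JSON shape may differ.
RESPONSE='{"status":"ok","tasks":[]}'

# The wrappers pipe JSON through an inline `node -e` snippet to
# post-process it. Fall back gracefully if node is not installed.
if command -v node >/dev/null 2>&1; then
  RESULT=$(printf '%s' "$RESPONSE" | node -e 'let b="";process.stdin.on("data",d=>b+=d).on("end",()=>console.log(JSON.parse(b).status))')
else
  RESULT="node-missing"
fi
echo "$RESULT"
```

No code is downloaded or installed here; the only moving parts are curl, node, and the string they pass between them.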
Credentials
The skill does not require credentials and declares no required env vars in registry metadata, but SKILL.md and the scripts reference an optional BIL_CRAWL_URL environment variable. The scripts also assume the command-line tools curl and node exist, even though 'required binaries' lists none. Note that the optional 'cookies' parameter can carry session cookies: supplying them grants the crawler access to account-protected content, so treat them as sensitive.
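The undeclared-binary gap is easy to close in a wrapper. A sketch of resolving BIL_CRAWL_URL with its documented default and warning about missing tools (the warning text is mine, not from the scripts):

```shell
# Resolve the crawler base URL: honor BIL_CRAWL_URL if set,
# otherwise fall back to the documented default.
BASE_URL="${BIL_CRAWL_URL:-http://127.0.0.1:39002}"

# The scripts call curl and node but do not declare them as required
# binaries; warn early instead of failing mid-request.
for bin in curl node; do
  command -v "$bin" >/dev/null 2>&1 || echo "warning: missing binary: $bin" >&2
done

echo "using crawler at $BASE_URL"
```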
Persistence & Privilege
always:false and no special persistence is requested. The skill does not modify other skills or system settings. The agent may invoke it autonomously (default), which is normal — this simply allows the agent to call the local crawler when relevant.
Assessment
This skill is a local client for a crawler running on your machine. Before installing or using it:
1) Ensure the crawler service (the Electron app) is really running on localhost, or on a host you trust. If you set BIL_CRAWL_URL, do not point it at an untrusted remote server; the skill will send URLs and arguments there.
2) Be cautious when supplying 'cookies' strings: they can contain login tokens.
3) The package metadata omits required binaries. The scripts call curl and node; make sure both are present and review their versions.
4) For stronger assurance, inspect or run the actual media-agent-crawler service code, since the skill only calls that service.
Overall the skill appears coherent for its stated purpose, with the above operational caveats.
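Caveat 1 can be checked mechanically before first use. A sketch that flags a BIL_CRAWL_URL pointing anywhere other than loopback (the host patterns are my assumption of what counts as "local"):

```shell
BASE_URL="${BIL_CRAWL_URL:-http://127.0.0.1:39002}"

# Warn if the configured base URL is not loopback: the skill will send
# crawl targets (and any cookies you supply) to whatever host this is.
case "$BASE_URL" in
  http://127.0.0.1*|http://localhost*)
    SCOPE="local" ;;
  *)
    SCOPE="remote"
    echo "warning: BIL_CRAWL_URL points at a non-local host: $BASE_URL" >&2 ;;
esac
echo "crawler endpoint scope: $SCOPE"
```

With BIL_CRAWL_URL unset, the default resolves to 127.0.0.1 and the check passes silently.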
Latest version: vk97dvh95hfpdqvtm7x7wjaryts838fnh
