Install

openclaw skills install kol-content-screening

Screen Chinese social media KOL lists for keyword-matching content within a time window, using web-search aggregation. Output a ranked, evidence-backed report with confidence annotations. Used heavily in PR / marketing / competitive-intel work where you receive a known account list + keywords + time window ("已知账号清单 + 关键词 + 时间窗") and need to know who has posted and who has not.
Tell the user these before promising anything:
- The report header shows the exact computed date window (e.g. 2025-05-05 ~ 2026-05-05). Content older than the window (>1 year) is marked separately, never mixed into the "active" set.
- Accurate per-video engagement (互动量) rankings require paid data services. If the user asks for one, stop and warn them, and get explicit acknowledgement before proceeding with web-only screening.
Always confirm 5 parameters before spawning sub-agents:
| Parameter | Example | Notes |
|---|---|---|
| Platforms | 抖音 + 小红书 + 头条 | Each platform = independent sub-task |
| Account list | (CSV/table from user) | Need: handle/UID + nickname + fan count + homepage URL |
| Keywords | 比亚迪 / BYD / 王传福 / DM-i / 仰望 | Include EN + CN + product lines + key person names |
| Time window | 近 12 个月 (YYYY-MM-DD ~ YYYY-MM-DD) | Compute exact dates; don't pass a vague "近一年" ("past year") verbatim |
| Sort dimension | match tier → fan count desc / engagement (互动量) / keyword hit count | Without an engagement-data source, default to fan count desc within each match tier |
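Turning a relative window like "近 12 个月" into the explicit dates the table asks for can be sketched as follows. Interpreting "12 months" as exactly 365 days is an assumption; adjust if the user means calendar months:

```python
from datetime import date, timedelta

def resolve_window(today: date, days: int = 365) -> tuple[str, str]:
    """Turn a relative window like '近 12 个月' into explicit ISO dates.

    Assumes the window means exactly `days` calendar days.
    """
    start = today - timedelta(days=days)
    return start.isoformat(), today.isoformat()

# Pin the window so sub-agents never see a vague "近一年"
start, end = resolve_window(date(2026, 5, 5))
# start == "2025-05-05", end == "2026-05-05"
```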
Common intake mistake: the user pastes a Windows clipboard HTML fragment (Version:1.0 StartHTML:...) — that header is the raw CF_HTML clipboard envelope, not data. The actual table is below it; strip the envelope and parse the account list from the rest of the paste.
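The envelope can be stripped with a small heuristic. A minimal sketch — the CF_HTML field names (Version, StartHTML, StartFragment) are real, but treating "first line starting with `<`" as the payload boundary is an assumption that works for typical pastes:

```python
def strip_cfhtml_envelope(paste: str) -> str:
    """Drop the Windows CF_HTML clipboard header and return the HTML
    payload that actually contains the table. Real CF_HTML carries byte
    offsets, but for text pasted into chat the first line beginning
    with '<' is where the markup starts (heuristic)."""
    lines = paste.splitlines()
    for i, line in enumerate(lines):
        if line.lstrip().startswith("<"):
            return "\n".join(lines[i:])
    return paste  # no envelope detected; treat the paste as-is

# Synthetic example of a pasted clipboard fragment
sample = ("Version:1.0\nStartHTML:0000000105\nStartFragment:0000000141\n"
          "<html><body><table><tr><td>账号A</td></tr></table></body></html>")
```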
For >15 accounts on one platform, split into groups of 8–10 and spawn parallel sub-agents. Empirically: 36 抖音 accounts → 4 groups of ~9, 24 小红书 accounts → 3 groups of 8.
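The grouping rule above amounts to a near-even split capped at 10 accounts per group; the function name and cap below are illustrative:

```python
import math

def split_into_groups(accounts: list, max_per_group: int = 10) -> list[list]:
    """Split an account list into near-even groups of at most
    max_per_group, one group per parallel sub-agent."""
    n_groups = math.ceil(len(accounts) / max_per_group)
    base, extra = divmod(len(accounts), n_groups)
    groups, start = [], 0
    for i in range(n_groups):
        size = base + (1 if i < extra else 0)
        groups.append(accounts[start:start + size])
        start += size
    return groups

# 36 Douyin accounts -> 4 groups of 9; 24 Xiaohongshu accounts -> 3 groups of 8
```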
Per platform → split into N groups → one sub-agent per group → run in parallel → each writes its own file → main session merges and ranks
Each sub-agent writes ONE file. File naming convention:
{platform-prefix}-{keyword-slug}-research-group{N}.md
where platform-prefix is douyin / xhs / tt / sph (视频号, WeChat Channels) / bilibili.
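A tiny helper matching the naming convention. The keyword_slug (e.g. "byd") is an ASCII alias you pick for the Chinese keyword — that mapping is up to you, not defined by the skill:

```python
def group_filename(platform_prefix: str, keyword_slug: str, group_no: int) -> str:
    """Build the per-group output filename, e.g. douyin-byd-research-group2.md.
    keyword_slug is a hand-chosen ASCII alias for the keyword (比亚迪 -> 'byd')."""
    return f"{platform_prefix}-{keyword_slug}-research-group{group_no}.md"
```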
Sub-agent prompt template: see references/subagent-prompt-template.md.
Each sub-agent, for each account, runs at least two queries on the chosen web search tool (xiaosu-search or equivalent):
Q1: "{nickname}" {handle} {keyword}
Q2: "{nickname}" {keyword} site:{platform-domain}
Q3 (if Q1+Q2 weak): {nickname} {keyword} {YYYY} # last 12 months explicit
Where {platform-domain} is douyin.com / xiaohongshu.com / toutiao.com / etc.
For each hit, the sub-agent records the evidence fields: post URL, publish date, and a short content summary (the same fields Tier 1 evidence requires).
For each account, the sub-agent must explicitly check for ID collision: search for the nickname alone, see if the top hits are this person's handle. If collision is detected (e.g. "南希Nancy" — multiple persons), flag it.
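One possible shape for the collision check, assuming your search tool returns hits carrying a profile handle — that result schema and the majority-vote threshold are assumptions, not part of the skill:

```python
def flag_id_collision(expected_handle: str, top_hits: list[dict]) -> bool:
    """Heuristic sketch: after searching the nickname alone, check whether
    the top hits' profile handles match the account we were given.
    top_hits is assumed to look like [{'handle': ..., 'url': ...}, ...]."""
    if not top_hits:
        return True  # nothing to confirm identity with -> flag
    matches = sum(1 for hit in top_hits if hit.get("handle") == expected_handle)
    # If fewer than half of the top hits point at this handle, another
    # person likely shares the nickname -> flag for manual review.
    return matches < len(top_hits) / 2
```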
See references/platform-search-tips.md for platform-specific quirks (site filters, profile URL formats, common false positives).
Main session reads all group files and merges into one ranked table. Default ranking:
Tier 1 🟢 — clear evidence within the window (with URL, date, and content summary)
Tier 2 🟡 — only old content (outside the window) / indirect mentions / weak evidence
Tier 3 🔴 — nothing found via public search
Within each tier: sort by fan count desc by default. If user asked for interaction-based ranking but data is unavailable, state this explicitly in the report and fall back to fan count + provide caveat.
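The default merge-and-rank step boils down to one composite sort key — tier ascending, fan count descending. The row shape below is assumed for illustration:

```python
def rank_accounts(rows: list[dict]) -> list[dict]:
    """Default ranking: tier ascending (1 = strongest evidence),
    then fan count descending within each tier.
    Assumed row shape: {'name': ..., 'tier': 1|2|3, 'fans': int}."""
    return sorted(rows, key=lambda r: (r["tier"], -r["fans"]))

rows = [
    {"name": "A", "tier": 2, "fans": 900_000},
    {"name": "B", "tier": 1, "fans": 120_000},
    {"name": "C", "tier": 1, "fans": 650_000},
]
```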
Final report structure: see references/output-schema.md.
Output to <workdir>/<keyword-slug>-kol-screening-{YYYYMMDD}.md (markdown table) plus per-platform group files. If the user wants a 飞书 (Feishu) Sheet, build the markdown first, then offer to push it via lark-cli sheets (a separate skill).
Document these caveats in the report so the user can interpret results correctly:

- Some hits surface via 新浪 reposts (转载) rather than the original platform, so the evidence URL may not be the original post.

Every report MUST include a methodology disclaimer block at the top:
Template in references/output-schema.md.
- references/subagent-prompt-template.md
- references/platform-search-tips.md
- references/output-schema.md
- references/escalation.md