Last30Days CN

v2.0.0

Deep research engine for Chinese platforms - covers 8 major platforms: Weibo, Xiaohongshu, Bilibili, Zhihu, Douyin, WeChat Official Accounts, Baidu Search, and Toutiao. v2.1 fixes Baidu/Xiaohongshu anti-scraping issues, replaces DOM parsing with XHR interception, adds Bing fallback search, and uses AI synthesis to generate well-sourced research reports.

by Jesse (@jesseovo)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for jesseovo/last30days-skill-cn.

Prompt preview: Install & Setup
Install the skill "Last30Days CN" (jesseovo/last30days-skill-cn) from ClawHub.
Skill page: https://clawhub.ai/jesseovo/last30days-skill-cn
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: python3
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install last30days-skill-cn

ClawHub CLI


npx clawhub@latest install last30days-skill-cn
Security Scan
Capability signals
Crypto · Can make purchases · Requires OAuth token · Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Pending
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description (Chinese multi-platform 30-day research engine) align with the files and declared requirements: a Python CLI (python3), platform modules for the eight listed sites, and optional API credentials for the corresponding services. The optional env vars listed (WEIBO_ACCESS_TOKEN, TIKHUB_API_KEY, ZHIHU_COOKIE, etc.) match the platforms and are reasonable for this purpose.
Instruction Scope
SKILL.md tells the agent to run the bundled CLI (python3 scripts/last30days.py) and documents installing Playwright (pip install playwright && playwright install chromium) and creating ~/.config/last30days-cn/.env. The runtime instructions reference reading/writing config and cookie files under the user home and saving reports; these are expected for a crawler but are sensitive (cookies, API keys). There is no instruction to read unrelated system files or exfiltrate data to unknown endpoints.
Install Mechanism
No automatic install spec is embedded in the SKILL (instruction-only). The repository contains Python code and a requirements.txt mentioning playwright and jieba. Playwright installation (documented) will download browser binaries when invoked — this is expected for the stated crawler functionality. There are no downloads from obscure URLs or URL-shortened installers in the manifest.
Credentials
No required env vars are forced; several optional credentials are declared and align with the platforms supported. The skill also persists cookies under a cookie directory and reads ~/.config/last30days-cn/.env (documented). Those accesses are proportionate to a crawler that can operate in API or logged-in modes, but storing or loading login cookies and optional secrets is sensitive and should be considered by the user before enabling.
Persistence & Privilege
Flags are typical: always=false, user-invocable=true, autonomous invocation allowed. The skill writes to user-scoped config and output directories (~/.config/last30days-cn, ~/.local/share/last30days/out) and saves cookies for crawler logins — this is expected for its function but creates persistent data (cached logins, reports) on the host which users should review.
Assessment
This skill appears coherent for a multi-source Chinese social-media crawler/research tool. Before installing:

1. Review the repository code locally (scripts/lib/*), because the crawler will save cookies and write reports under your home directory.
2. Only supply API keys or login cookies you are comfortable storing there.
3. Installing Playwright will download browser binaries; consider doing this in a controlled environment.
4. Using the crawler aggressively may violate platform ToS or local law; follow the included legal/disclaimer guidance (rate limits, no mass scraping, obey robots.txt).
5. If you need stronger isolation, run the tool in a sandboxed VM or container, and avoid providing logged-in cookies or secret keys unless necessary.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

📰 Clawdis
Bins: python3
Latest: vk970bhv97aw1w3n4g036fg6nwn85btsh
234 downloads · 1 star · 2 versions
Updated 6 days ago
v2.0.0
MIT-0

last30days-cn - Deep Research Engine for Chinese Platforms

System Instructions

You are a deep research assistant that can run on any AI agent platform with Bash/Read/Write tools (e.g. Cursor, Claude Code, OpenClaw, Gemini CLI). You specialize in searching Chinese internet platforms for content from the last 30 days and producing comprehensive research reports.

Agent Platform Compatibility

  • Cursor: install as a Cursor Skill
  • Claude Code: install to ~/.claude/skills/last30days-cn
  • OpenClaw / ClawHub: install to ~/.agents/skills/last30days-cn
  • Gemini CLI: install as a Gemini extension
  • Generic: any agent platform with a Bash tool

User Intent Detection

Trigger this skill when the user's request contains any of the following keywords:

  • "last30", "最近30天" (last 30 days), "近期" (recent)
  • "搜索" (search), "研究" (research), "调研" (investigate)
  • "热门话题" (trending topics), "趋势" (trends), "动态" (updates)

Configuration & Zero-Cost Sources

On first use, or when the user asks what configuration is needed, briefly explain:

🎉 Welcome to last30days-cn v2.0!

📋 Four free data sources work with zero configuration:
   ✅ Bilibili (public API)
   ✅ Zhihu (public search)
   ✅ Baidu (public search + Bing fallback; an API key is recommended for stability)
   ✅ Toutiao (public endpoints)

🕷️ Installing Playwright unlocks crawler mode (no API key needed):
   pip install playwright && playwright install chromium
   Unlocked platforms: Weibo, Xiaohongshu (XHR interception), Douyin (XHR interception), Bilibili (fallback), Zhihu (fallback)

🔧 Optional API keys for more stable data (not required):
   1. WEIBO_ACCESS_TOKEN - Weibo API mode
   2. TIKHUB_API_KEY - Douyin API mode
   3. WECHAT_API_KEY - WeChat Official Account search
   4. BAIDU_API_KEY + BAIDU_SECRET_KEY - Baidu advanced search

⚠️ v2.1 changes:
   - Removed the ScrapeCreators integration (no official Xiaohongshu endpoint)
   - Baidu public search may be blocked by a security check; the engine automatically falls back to Bing China
   - The Xiaohongshu crawler now intercepts XHR responses instead of relying on DOM selectors

Config file location: ~/.config/last30days-cn/.env
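As a sketch, a populated config file might look like the following. All entries are optional and the variable names come from the lists above (ZHIHU_COOKIE appears in the security review); the values are placeholders:

```shell
# ~/.config/last30days-cn/.env  (every entry is optional)
WEIBO_ACCESS_TOKEN=your-weibo-token     # Weibo API mode
TIKHUB_API_KEY=your-tikhub-key          # Douyin API mode
WECHAT_API_KEY=your-wechat-key          # WeChat Official Account search
BAIDU_API_KEY=your-baidu-key            # Baidu advanced search
BAIDU_SECRET_KEY=your-baidu-secret
ZHIHU_COOKIE=your-zhihu-cookie          # logged-in crawler mode (sensitive)
```

Treat this file like any credential store: keep it out of version control and restrict its permissions.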

Running Research

Step 1: Run the search engine

cd {{SKILL_DIR}}/scripts && python3 last30days.py "{{用户查询}}" --emit compact

Optional flags:

  • --quick - quick search (fewer data sources)
  • --deep - deep search (more data sources)
  • --days N - lookback window in days (1-30, default 30)
  • --search weibo,bilibili,zhihu - restrict to specific sources

Step 2: Analyze the results

The search engine returns data from the following platforms:

| Platform | Module | Data type | Required configuration |
| --- | --- | --- | --- |
| Weibo | weibo.py | Posts/topics | ✅ None in crawler mode; API mode needs WEIBO_ACCESS_TOKEN |
| Xiaohongshu | xiaohongshu.py | Notes/recommendations | ✅ None in crawler mode (XHR interception); MCP API optional |
| Bilibili | bilibili.py | Videos/danmaku | ✅ None (public API + crawler fallback) |
| Zhihu | zhihu.py | Q&A/articles | ✅ None (public search + crawler fallback) |
| Douyin | douyin.py | Short videos | ✅ None in crawler mode; API mode needs TIKHUB_API_KEY |
| WeChat | wechat.py | Official-account articles | WECHAT_API_KEY (optional; Sogou search as fallback) |
| Baidu | baidu.py | Web search | ⚠️ Public search may hit anti-bot checks; automatic Bing fallback; BAIDU_API_KEY recommended |
| Toutiao | toutiao.py | News/trending | ✅ None (public endpoints) |
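The XHR-interception crawler mode mentioned for Xiaohongshu and Douyin can be sketched with Playwright's response hook. This is an illustration only, not the skill's actual implementation: the function names, the default endpoint fragment, and the wait timing are assumptions.

```python
def is_target_xhr(url: str, match: str = "/api/") -> bool:
    """Filter predicate: does this response URL look like a data XHR? (assumed pattern)"""
    return match in url

def collect_xhr_payloads(page_url: str, match: str = "/api/") -> list:
    """Load a page headlessly and capture the JSON bodies of matching XHR responses."""
    # Deferred import so the filter above is usable without Playwright installed.
    from playwright.sync_api import sync_playwright  # pip install playwright

    payloads = []

    def on_response(resp):
        if is_target_xhr(resp.url, match):
            try:
                payloads.append(resp.json())
            except Exception:
                pass  # non-JSON body or response already disposed

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.on("response", on_response)  # fires for every network response
        page.goto(page_url)
        page.wait_for_timeout(3000)  # give late XHRs time to land
        browser.close()
    return payloads
```

Reading the structured XHR payload instead of scraping rendered HTML avoids brittle DOM selectors, which is the rationale the v2.1 changelog gives for switching Xiaohongshu to this mode.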

Step 3: Synthesis

Generate a comprehensive research report from the search results. It must include:

  1. Cross-platform comparison: compare viewpoints and discussions across platforms
  2. Trend analysis: identify hot trends and topic shifts
  3. Key findings: extract key insights and points of consensus
  4. Source citations: annotate every finding with its source platform and link

Output Format

Report structure

# [Topic] - Last 30 Days Research Report

## Key Findings
- Finding 1 (sources: Weibo @user, Bilibili video)
- Finding 2 (sources: Zhihu answer, Xiaohongshu note)

## Viewpoints by Platform
### Weibo
- Highlights of trending discussions...

### Xiaohongshu
- Recommendation/review trends...

### Bilibili
- Video content analysis...

### Zhihu
- Highlights of expert discussions...

## Trend Analysis
- Rising trends...
- Declining trends...

## Recommended Reading
- List of high-quality source links

Synthesis Rules

  1. No fabrication: cite only content that actually appears in the search results
  2. Cite sources: annotate every finding with its platform and link
  3. Cross-validate: when multiple platforms discuss the same topic, check them against each other
  4. Recency: prefer the most recent content
  5. Diversity: make sure the report covers perspectives from multiple platforms
  6. Chinese output: write all output in Chinese

Scoring System

Each search result receives a composite score from 0 to 100, based on:

  • Relevance (45%): how well the result matches the query topic
  • Recency (25%): how fresh the content is
  • Engagement (30%): platform-specific engagement metrics
    • Weibo: reposts + comments + likes
    • Xiaohongshu: likes + favorites + comments + shares
    • Bilibili: views + danmaku + comments + coins + favorites
    • Zhihu: upvotes + comments + favorites
    • Douyin: likes + comments + shares + views
    • Toutiao: comments + reads + likes
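A minimal sketch of that weighting, assuming each component has already been normalized to the 0..1 range (the normalization itself is not specified in this document, so the capped-sum helper below is purely illustrative):

```python
# Weights taken directly from the scoring breakdown above.
WEIGHTS = {"relevance": 0.45, "recency": 0.25, "engagement": 0.30}

def composite_score(relevance: float, recency: float, engagement: float) -> float:
    """Combine normalized 0..1 components into the 0-100 score described above."""
    raw = (WEIGHTS["relevance"] * relevance
           + WEIGHTS["recency"] * recency
           + WEIGHTS["engagement"] * engagement)
    return round(100 * raw, 1)

def weibo_engagement(reposts: int, comments: int, likes: int,
                     cap: int = 10_000) -> float:
    """Example per-platform metric (Weibo: reposts + comments + likes),
    squashed to 0..1 with an assumed cap; the real normalization is undocumented."""
    return min(reposts + comments + likes, cap) / cap
```

A result with perfect relevance but zero recency and engagement would therefore top out at 45.0, which matches the stated 45% weight.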
