Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

小红书转Obsidian

v1.0.0

Extract Xiaohongshu (小红书) posts into Obsidian notes. Use when user shares a Xiaohongshu link and wants to save it as a markdown note. Supports single posts,...


Install

Install with OpenClaw

Best for a remote or guided setup. Copy the exact prompt below, then paste it into OpenClaw for jmin1113/xhs-to-obsidian.

Prompt (Install & Setup):
Install the skill "小红书转Obsidian" (jmin1113/xhs-to-obsidian) from ClawHub.
Skill page: https://clawhub.ai/jmin1113/xhs-to-obsidian
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI (bare skill slug):

openclaw skills install xhs-to-obsidian

ClawHub CLI (via npx):

npx clawhub@latest install xhs-to-obsidian

Security Scan

VirusTotal: Suspicious (full report available on VirusTotal)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The code (extract_post.py + video_transcribe.sh) implements scraping Xiaohongshu posts and saving Markdown to an Obsidian folder, which is coherent with the name/description. It legitimately uses cookies for authenticated scraping. However, the SKILL.md and the script disagree on the default cookie path (SKILL.md: ~/.openclaw/xhs-cookies.json; script default: ~/cookies.json), which is an inconsistency the user should be aware of.
Instruction Scope
Instructions explicitly ask the user to export their Xiaohongshu cookies (containing session/auth tokens) and save them to disk; the code reads those cookies and uses them to fetch pages. This is expected for authenticated scraping but is sensitive (storing auth cookies plaintext is risky). The code fetches HTML and video URLs and downloads video content locally; that behavior matches the stated purpose and does not show hidden network destinations. However, extract_post.py disables TLS verification (ssl.CERT_NONE), which weakens security and could allow MITM attacks — this is a notable implementation concern.
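For reference, the insecure pattern the scan describes, next to the standard safe default, can be sketched as follows (a minimal illustration; extract_post.py's actual request code may differ):

```python
import ssl

# What the scan reports the script doing: disable certificate checks entirely.
insecure = ssl.create_default_context()
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE  # vulnerable to MITM interception

# The safer default: verify certificates against the system trust store.
secure = ssl.create_default_context()  # check_hostname=True, CERT_REQUIRED
```

Re-enabling verification is usually just a matter of removing the two lines that weaken the default context.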
Install Mechanism
No install spec (instruction-only + included scripts). Optional dependencies (ffmpeg, mlx-whisper/whisper) are invoked only for transcription; installing via brew/pip is suggested. Nothing is downloaded from unknown third-party URLs in an install step.
Credentials
The skill requests no environment variables but requires a cookie file to be created by the user; that is proportional to the task. Still: cookies contain sensitive session data stored unencrypted on disk, and that risk is not called out in the SKILL.md. Also the mismatch in default cookie path between docs and script could lead to confusion or accidental exposure.
Persistence & Privilege
The skill is not always-enabled and does not request elevated or cross-skill privileges. It writes notes to the user-specified Obsidian folder and temporary files under /tmp, which is consistent with its purpose.
What to consider before installing
This skill appears to do what it says (scrape Xiaohongshu posts and optionally transcribe videos), but exercise caution before installing or using it:

  • Sensitive cookies: The tool requires you to export your Xiaohongshu cookies (session tokens) and save them to a local JSON file. Those cookies grant access to your account and should be treated like a password; do NOT share them. Consider a throwaway account if you are unsure.
  • Plaintext storage: Cookies are stored in plaintext in your home directory by default; remove or rotate them after use, and restrict file permissions (chmod 600) if you keep them.
  • TLS verification disabled: extract_post.py creates an SSL context with certificate verification disabled (ssl.CERT_NONE), reducing protection against network-level interception (MITM). The risk is lower on trusted networks, but ideally the script should be modified to enable verification.
  • Inconsistent defaults: SKILL.md and extract_post.py disagree on the default cookie path; always pass explicit --cookies and --output arguments to avoid surprises.
  • Review code before running: The repository is small and readable; inspect the scripts yourself or run them in a sandboxed environment first.

If you accept the risks, run with an account you can afford to expose and remove stored cookies after use.


v1.0.0 · MIT-0 · 79 downloads · 0 stars · 1 version · updated 3 weeks ago

xhs-to-obsidian

Extract Xiaohongshu posts into Obsidian Markdown notes in one step.

Core workflow

  1. Check cookies → 2. Extract content → 3. Transcribe video (if any) → 4. Save the note
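The four steps can be sketched as a small driver around the bundled scripts (a sketch only; the post_id field and the script locations are assumptions not confirmed by the listing):

```python
import json
import subprocess
from pathlib import Path

COOKIES = Path.home() / ".openclaw" / "xhs-cookies.json"   # SKILL.md default
VAULT = Path.home() / "Documents" / "Obsidian Vault" / "xhs"

def save_post(url: str) -> dict:
    """Run the four steps for one URL."""
    if not COOKIES.exists():                                # Step 0: check cookies
        raise FileNotFoundError("export cookies first (see Step 0)")
    out = subprocess.run(                                   # Step 1: extract content
        ["python3", "scripts/extract_post.py", url,
         "--cookies", str(COOKIES), "--output", str(VAULT)],
        capture_output=True, text=True, check=True)
    result = json.loads(out.stdout)
    if result.get("type") == "video" and result.get("video_url"):
        subprocess.run(                                     # Step 2: transcribe video
            ["bash", "scripts/video_transcribe.sh",
             result["video_url"], result["post_id"], str(VAULT)],
            check=True)
    return result                                           # Step 3: already saved
```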

Constants (defaults)

  • Cookies: ~/.openclaw/xhs-cookies.json
  • Obsidian directory: ~/Documents/Obsidian Vault/xhs

Step 0: Check and set up cookies

If the cookie file does not exist, guide the user to export it from Chrome:

  1. Open xiaohongshu.com in Chrome and log in
  2. Open DevTools (F12) → Console
  3. Run the following code to copy the cookies:
copy(JSON.stringify(document.cookie.split('; ').map(c => {
  const [name, ...rest] = c.split('=');
  return { name, value: rest.join('='), domain: '.xiaohongshu.com', path: '/',
    expires: Date.now()/1000 + 86400*30, size: name.length + rest.join('=').length,
    httpOnly: false, secure: false, session: false, priority: 'Medium',
    sameParty: false, sourceScheme: 'Secure', sourcePort: 443 };
})))
  4. Save the result to ~/.openclaw/xhs-cookies.json

Step 1: Extract the post

python3 {baseDir}/scripts/extract_post.py "<小红书URL>" --cookies ~/.openclaw/xhs-cookies.json --output ~/Documents/Obsidian\ Vault/xhs

The output is JSON containing success, filepath, type (image/video), video_url, and other fields.

Error handling:

  • COOKIES_NOT_FOUND → guide the user to export cookies (see Step 0)
  • POST_NOT_AVAILABLE → the post is not visible (may require logging in again)
  • COOKIES_EXPIRED → cookies expired; export them again
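A caller can branch on these fields and error codes like so (a sketch; the exact JSON schema is assumed from the field names above):

```python
import json

def handle_result(raw: str) -> str:
    """Decide the next step from extract_post.py's JSON output."""
    result = json.loads(raw)
    if result.get("success"):
        if result.get("type") == "video" and result.get("video_url"):
            return f"transcribe:{result['video_url']}"
        return f"saved:{result['filepath']}"
    # Error codes documented above.
    actions = {
        "COOKIES_NOT_FOUND": "export cookies (Step 0)",
        "POST_NOT_AVAILABLE": "post hidden; try re-logging in",
        "COOKIES_EXPIRED": "re-export cookies",
    }
    return actions.get(result.get("error"), "unknown error")
```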

Step 2: Video transcription (if the post is a video)

If the result has type: video and includes a video_url, run the transcription:

bash {baseDir}/scripts/video_transcribe.sh "<video_url>" "<post_id>" "<output_dir>"

After transcription completes, append the text to the note's ## 视频转录 (Video Transcript) section.

Dependencies (optional):

  • ffmpeg (audio extraction)
  • mlx-whisper or whisper (speech recognition)

Install: brew install ffmpeg && pip install mlx-whisper

Step 3: Save the note

extract_post.py saves the note automatically. If you need to format it manually, use this template:

# Title (a one-sentence insight, not a description)

Content...

---

> **Source**: Xiaohongshu · author name
> **Date**: YYYY-MM-DD
> **Engagement**: N likes / N saves / N comments
> **Tags**: tag1, tag2
> **Link**: https://www.xiaohongshu.com/explore/...
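A minimal renderer for this footer block (the parameter names are illustrative, not from the skill's code):

```python
def note_footer(source, author, date, likes, saves, comments, tags, link):
    """Render the metadata block from the note template above."""
    return "\n".join([
        "---",
        "",
        f"> **Source**: {source} · {author}",
        f"> **Date**: {date}",
        f"> **Engagement**: {likes} likes / {saves} saves / {comments} comments",
        f"> **Tags**: {', '.join(tags)}",
        f"> **Link**: {link}",
    ])
```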

Batch extraction

Separate multiple links with newlines:

while read -r url; do
  python3 {baseDir}/scripts/extract_post.py "$url" --cookies ~/.openclaw/xhs-cookies.json --output ~/Documents/Obsidian\ Vault/xhs
done <<EOF
https://www.xiaohongshu.com/explore/...
https://www.xiaohongshu.com/explore/...
EOF
