Install
openclaw skills install redbook

Use the redbook CLI to search notes, read content, analyze creators, automate engagement, and research topics on Xiaohongshu (小红书/RED).

OpenClaw users: install via clawhub install redbook or npm install -g @lucasygu/redbook.
/redbook search "AI编程" # Search notes
/redbook read <url> # Read a note
/redbook user <userId> # Creator profile
/redbook analyze <userId> # Full creator analysis (profile + posts)
| Intent | Command |
|---|---|
| Search notes | redbook search "keyword" --json |
| Read a note | redbook read <url> --json |
| Get comments | redbook comments <url> --json --all |
| Creator profile | redbook user <userId> --json |
| Creator's posts | redbook user-posts <userId> --json |
| Browse feed | redbook feed --json |
| Search hashtags | redbook topics "keyword" --json |
| Analyze viral note | redbook analyze-viral <url> --json |
| Extract content template | redbook viral-template <url1> <url2> --json |
| Post a comment | redbook comment <url> --content "text" |
| Reply to comment | redbook reply <url> --comment-id <id> --content "text" |
| Batch reply (preview) | redbook batch-reply <url> --strategy questions --dry-run |
| Like a note | redbook like <url> |
| Unlike a note | redbook like <url> --undo |
| List favorites | redbook favorites --json or redbook favorites <userId> --json |
| Collect a note | redbook collect <url> |
| Remove from collection | redbook uncollect <url> |
| List followers | redbook followers <userId> --json |
| List following | redbook following <userId> --json |
| Delete own note | redbook delete <url> |
| Check note health | redbook health --json or redbook health --all --json |
| List user boards | redbook boards or redbook boards <userId> --json |
| List album notes | redbook board <board-url> or redbook board <boardId> --json |
| Render markdown to cards | redbook render content.md --style xiaohongshu |
| Publish image note | redbook post --title "..." --body "..." --images img.jpg |
| Check connection | redbook whoami |
Always add --json when parsing output programmatically. Without it, output is human-formatted text.
XHS is not Twitter or Instagram. These platform-specific engagement ratios reveal content type and audience behavior.
Collect rate (collected_count / liked_count): XHS's "collect" (收藏) is a save-for-later mechanic — users build personal reference libraries. This ratio is the strongest signal of content utility.
| Ratio | Classification | Meaning |
|---|---|---|
| >40% | 工具型 (Reference) | Tutorial, checklist, template — users bookmark for reuse |
| 20–40% | 认知型 (Insight) | Thought-provoking but not saved for later |
| <20% | 娱乐型 (Entertainment) | Consumed and forgotten — engagement is passive |
Comment rate (comment_count / liked_count): measures how much a note triggers conversation.
| Ratio | Classification | Meaning |
|---|---|---|
| >15% | 讨论型 (Discussion) | Debate, sharing experiences, asking questions |
| 5–15% | 正常互动 (Normal) | Typical engagement pattern |
| <5% | 围观型 (Passive) | Users like but don't engage further |
Share rate (share_count / liked_count): measures social currency — whether users share to signal identity or help others.
| Ratio | Meaning |
|---|---|
| >10% | 社交货币 — people share to signal taste, identity, or help friends |
| <10% | Content consumed individually, not forwarded |
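The three benchmark tables above can be applied mechanically. A minimal sketch in plain Python — the function name and return shape are illustrative, not part of the CLI; the thresholds are exactly the ones tabulated above:

```python
def classify_note(liked: int, collected: int, comments: int, shares: int) -> dict:
    """Classify a note by the XHS engagement ratio benchmarks above."""
    if liked == 0:
        return {"collect": None, "comment": None, "share": None}
    collect_ratio = collected / liked
    comment_ratio = comments / liked
    share_ratio = shares / liked
    # Collect/like: content utility
    if collect_ratio > 0.40:
        collect_type = "工具型 (Reference)"
    elif collect_ratio >= 0.20:
        collect_type = "认知型 (Insight)"
    else:
        collect_type = "娱乐型 (Entertainment)"
    # Comment/like: conversation trigger
    if comment_ratio > 0.15:
        comment_type = "讨论型 (Discussion)"
    elif comment_ratio >= 0.05:
        comment_type = "正常互动 (Normal)"
    else:
        comment_type = "围观型 (Passive)"
    # Share/like: social currency
    share_type = "社交货币 (Social currency)" if share_ratio > 0.10 else "Individually consumed"
    return {"collect": collect_type, "comment": comment_type, "share": share_type}

# A note with 12,000 likes, 5,400 collects, 960 comments, 300 shares:
print(classify_note(12000, 5400, 960, 300))
```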
| Sort | What It Reveals |
|---|---|
| --sort popular | Proven ceiling — the best a keyword can do |
| --sort latest | Content velocity — how much is being posted now |
| --sort general | Algorithm-weighted blend (default) |
| Form | Tendency |
|---|---|
| 图文 (image-text, type: "normal") | Higher collect rate — users save reference content |
| 视频 (video, type: "video") | Higher like rate — easier to consume passively |
Each module is a composable building block. Combine them for different analysis depths.
Answers: Which keywords have the highest engagement ceiling? Which are saturated vs. underserved?
Commands:
redbook search "keyword1" --sort popular --json
redbook search "keyword2" --sort popular --json
# Repeat for each keyword in your list
Fields to extract from each result's items[]:
- items[].note_card.interact_info.liked_count — likes (may use Chinese numbers: "1.5万" = 15,000)
- items[].note_card.interact_info.collected_count — collects
- items[].note_card.interact_info.comment_count — comments
- items[].note_card.user.nickname — author

How to interpret:
- items[0] likes — the best-performing note for this keyword. This is the proven demand signal.
- Average of items[0..9] likes — how well an average top note does.

Output: Keyword × engagement table ranked by Top1 ceiling.
| Keyword | Top1 Likes | Top10 Avg | Top1 Collects | Collect/Like |
|---|---|---|---|---|
| keyword1 | 12,000 | 3,200 | 5,400 | 45% |
| keyword2 | 8,500 | 4,100 | 1,200 | 14% |
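Building one row of this table from search output can be scripted. A sketch assuming the items[] field paths listed above and that counts have already been normalized to plain integers (see the Chinese number-suffix note later in this document); `keyword_row` is an illustrative helper, not a CLI feature:

```python
def keyword_row(keyword: str, items: list) -> dict:
    """One row of the keyword × engagement table (Module A) from search items[]."""
    info = [i["note_card"]["interact_info"] for i in items[:10]]
    likes = [int(x["liked_count"]) for x in info]
    collects = [int(x["collected_count"]) for x in info]
    return {
        "keyword": keyword,
        "top1_likes": likes[0],                       # proven demand ceiling
        "top10_avg": sum(likes) // len(likes),        # typical top-note performance
        "top1_collects": collects[0],
        "collect_like": round(collects[0] / likes[0], 2) if likes[0] else None,
    }
```

Feed it the `items` array from `redbook search "keyword" --sort popular --json`, once per keyword, then sort rows by `top1_likes`.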
Answers: Which topic × scene intersections have demand? Where are the content gaps?
Commands:
# Combine base topic with scene/angle keywords
redbook search "base topic + scene1" --sort popular --json
redbook search "base topic + scene2" --sort popular --json
redbook search "base topic + scene3" --sort popular --json
Fields to extract: Same as Module A — Top1 liked_count for each combination.
How to interpret:
Output: Base × Scene heatmap.
scene1 scene2 scene3 scene4
base topic ████ 8K ██ 2K ████ 12K ░░ 200
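The heatmap cells above can be generated with a small helper. A sketch — the 2,000-likes-per-block scale and "K" labeling are illustrative choices, not a fixed convention:

```python
def bar(likes: int, scale: int = 2000, max_blocks: int = 8) -> str:
    """Render a likes count as an ASCII heat bar, e.g. "████ 8K"."""
    blocks = min(likes // scale, max_blocks)
    cell = "█" * blocks if blocks else "░░"          # ░░ marks a demand gap
    label = f"{likes // 1000}K" if likes >= 1000 else str(likes)
    return f"{cell} {label}"

print(bar(8000))
print(bar(200))
```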
Answers: What type of content is each keyword? Reference, insight, or entertainment?
Commands: Use search results from Module A, or for a single note:
redbook analyze-viral "<noteUrl>" --json
Fields to extract:
- From search results: the interact_info fields
- From analyze-viral: use the pre-computed engagement.collectToLikeRatio, engagement.commentToLikeRatio, engagement.shareToLikeRatio

How to interpret: apply the ratio benchmarks from XHS Platform Signals above.
Output: Per-keyword or per-note classification.
| Keyword | Collect/Like | Comment/Like | Type |
|---|---|---|---|
| keyword1 | 45% | 8% | 工具型 + 正常互动 |
| keyword2 | 12% | 22% | 娱乐型 + 讨论型 |
Answers: Who are the key creators in this niche? What are their strategies?
Commands:
# 1. Collect unique user_ids from search results across keywords
# Extract from items[].note_card.user.user_id
# 2. For each creator:
redbook user "<userId>" --json
redbook user-posts "<userId>" --json
Fields to extract:
- user: interactions[] where type === "fans" → follower count
- user-posts: notes[].interact_info.liked_count for all posts → compute avg, median, max
- user-posts: notes[].display_title → content patterns, posting frequency

How to interpret:
Output: Creator comparison table.
| Creator | Followers | Avg Likes | Median | Max | Posts | Style |
|---|---|---|---|---|---|---|
| @creator1 | 12万 | 3,200 | 1,800 | 45,000 | 89 | Tutorial |
| @creator2 | 5.4万 | 8,100 | 6,500 | 22,000 | 34 | Story |
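The Avg/Median/Max columns come straight from the per-post likes in user-posts output. A sketch assuming likes are already plain integers; `creator_stats` is an illustrative helper:

```python
from statistics import median

def creator_stats(likes: list[int]) -> dict:
    """Summarize a creator's per-post likes (from user-posts notes[])."""
    return {
        "avg": sum(likes) // len(likes),   # mean, skewed upward by outlier hits
        "median": int(median(likes)),      # typical post performance
        "max": max(likes),                 # best-ever ceiling
        "posts": len(likes),
    }

print(creator_stats([1800, 3200, 45000, 900, 2500]))
```

A median far below the average signals a creator whose numbers are carried by a few viral outliers rather than a consistent baseline.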
Answers: Do image-text or video notes perform better for this topic?
Commands:
redbook search "keyword" --type image --sort popular --json
redbook search "keyword" --type video --sort popular --json
Fields to extract:
- Compare liked_count and collected_count between the two result sets
- type field: "normal" = image-text, "video" = video

Output: Form × engagement table.
| Form | Top1 Likes | Top10 Avg | Collect/Like |
|---|---|---|---|
| 图文 | 8,000 | 2,400 | 42% |
| 视频 | 15,000 | 5,100 | 18% |
Answers: Which keywords should I target? Where is the best effort-to-reward ratio?
Input: Keyword matrix from Module A.
Scoring logic:
Tier thresholds (based on Top1 likes):
| Tier | Top1 Likes | Meaning |
|---|---|---|
| S | >100,000 (10万+) | Massive demand — hard to compete but huge upside |
| A | 20,000–100,000 | Strong demand — competitive but winnable |
| B | 5,000–20,000 | Moderate demand — good for growing accounts |
| C | <5,000 | Niche — low competition, low ceiling |
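A tier function following the thresholds above might look like this — note the table leaves boundary handling at exactly 20,000 and 100,000 open, so the `>=` choices here are one reasonable reading, not a specification:

```python
def tier(top1_likes: int) -> str:
    """Map a keyword's Top1 likes to the S/A/B/C demand tiers above."""
    if top1_likes > 100_000:   # 10万+: massive demand
        return "S"
    if top1_likes >= 20_000:   # strong demand, competitive
        return "A"
    if top1_likes >= 5_000:    # moderate demand
        return "B"
    return "C"                 # niche: low competition, low ceiling
```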
Output: Tiered keyword list.
| Tier | Keyword | Top1 | Competition | Opportunity |
|---|---|---|---|---|
| A | keyword1 | 45K | Medium (6/10 >1K) | High |
| B | keyword3 | 12K | Low (2/10 >1K) | Very High |
| S | keyword2 | 120K | High (10/10 >1K) | Medium |
Answers: Who is the audience for this niche? What do they want?
Input: Engagement ratios from Module C + comment themes from analyze-viral + content patterns.
Fields to extract from analyze-viral JSON:
- comments.themes[] — recurring phrases and keywords from the comment section
- comments.questionRate — % of comments that are questions (learning intent)
- engagement.collectToLikeRatio — save behavior signals intent
- hook.hookPatterns[] — what title patterns attract this audience

Inference rules:
Output: Audience persona summary — demographics, intent, content preferences.
Answers: What specific content should I create, backed by data?
Input: Opportunity scores (Module F) + audience persona (Module G) + heatmap gaps (Module B).
For each content idea, specify:
- hookPatterns that work for this niche

Output: Ranked content ideas with data backing.
| # | Keyword | Hook Angle | Type | Target Likes | Reference |
|---|---|---|---|---|---|
| 1 | keyword3 | "N个方法..." (List) | 工具型 图文 | 5K+ | [top note URL] |
| 2 | keyword1 | "为什么..." (Question) | 认知型 视频 | 10K+ | [top note URL] |
Answers: Which comments deserve a reply? What is the comment quality distribution?
Commands:
# 1. Fetch all comments
redbook comments "<noteUrl>" --all --json
# 2. Preview reply candidates (dry run)
redbook batch-reply "<noteUrl>" --strategy questions --dry-run --json
# 3. Execute replies with template (5 min delay with ±30% jitter)
redbook batch-reply "<noteUrl>" --strategy questions \
--template "感谢提问!关于{content},..." \
--max 10
Fields to extract from --dry-run JSON:
- candidates[].commentId — target comments
- candidates[].isQuestion — boolean, detected question
- candidates[].likes — engagement signal
- candidates[].hasSubReplies — whether already answered
- skipped — how many comments were filtered out
- totalComments — total fetched

Strategies:
- questions — replies to comments ending with ？ or ? (learning-oriented audience)
- top-engaged — replies to the highest-liked comments (maximum visibility)
- all-unanswered — replies to comments with no existing sub-replies (fill gaps)

How to interpret:
Safety: Hard cap 30 replies per batch, minimum 3-minute delay with ±30% jitter (default 5 min), --dry-run by default (no template = preview only), immediate stop on captcha. See Rate Limits & Safety for details.
Output: Reply plan table with candidate comments, strategy match reason, and status.
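Selecting reply targets from the --dry-run JSON can be scripted before executing. A sketch assuming the candidate fields listed above; the prioritization (unanswered questions first, ranked by likes) is one policy choice, and `pick_reply_targets` is not a CLI command:

```python
def pick_reply_targets(candidates: list[dict], max_replies: int = 10) -> list[str]:
    """Pick commentIds to reply to: unanswered questions, highest-liked first."""
    unanswered_questions = [
        c for c in candidates
        if c.get("isQuestion") and not c.get("hasSubReplies")
    ]
    ranked = sorted(unanswered_questions, key=lambda c: c.get("likes", 0), reverse=True)
    return [c["commentId"] for c in ranked[:max_replies]]
```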
Answers: What structural template can I extract from successful notes to guide new content creation?
Commands:
# 1. Find top notes for a keyword
redbook search "keyword" --sort popular --json
# 2. Extract structural template from 2-3 top performers
redbook viral-template "<url1>" "<url2>" "<url3>" --json
Fields to extract from viral-template JSON:
- dominantHookPatterns[] — hook types appearing in the majority of notes
- titleStructure.commonPatterns[] — specific title formulas
- titleStructure.avgLength — target title length
- bodyStructure.lengthRange — target word count [min, max]
- bodyStructure.paragraphRange — target paragraph count
- engagementProfile.type — reference/insight/entertainment
- audienceSignals.commonThemes[] — what the audience talks about

How to interpret:
Composition with other modules:
Output: Content template spec — the structural skeleton for content creation. An LLM (via the composed workflow) uses this template to generate actual title, body, hashtags, and cover image prompt.
Answers: How should I manage ongoing engagement with my audience?
This module is a workflow that composes Modules I and J with human oversight.
Workflow:
Workflow:
1. redbook comments "<myNoteUrl>" --all --json to fetch recent comments
2. redbook batch-reply --strategy questions --dry-run to identify reply candidates
3. redbook batch-reply --strategy questions --template "..." --max 10 to execute

Safety rules:
- Run --dry-run first; require human approval before execution
- Skip comments that already have replies (hasSubReplies)

Anti-spam guidelines:
Answers: How do I turn markdown content into Xiaohongshu-ready image cards?
Commands:
# Render markdown to styled PNG cards
redbook render content.md --style xiaohongshu
# Custom style and output directory
redbook render content.md --style dark --output-dir ./cards
# JSON output (for programmatic use)
redbook render content.md --json
Input: Markdown file with YAML frontmatter:
---
emoji: "🚀"
title: "5个AI效率技巧"
subtitle: "Claude Code 实战"
---
## 技巧一:...
Content here...
---
## 技巧二:...
More content...
Output: cover.png + card_1.png, card_2.png, ... in the same directory.
Card specs:
Pagination modes:
- auto (default) — smart split on heading/paragraph boundaries using a character-count heuristic
- separator — manual split on --- in markdown

How to interpret:
- Uses your existing Chrome installation (via puppeteer-core) — no browser download needed
- Publish the rendered cards with redbook post --images cover.png card_1.png ...

Dependencies: requires puppeteer-core and marked (optional, install with npm install -g puppeteer-core marked).
Composition with other modules:
- Pass the rendered cards to redbook post --images for publishing

Answers: Are any of my notes being secretly rate-limited by XHS?
XHS assigns a hidden level field to each note in the creator backend API. This level controls recommendation distribution but is never shown in the UI. Your note may look "normal" while secretly receiving zero recommendations.
Commands:
# Check all notes (first page)
redbook health
# Check all pages
redbook health --all
# JSON output for programmatic use
redbook health --all --json
Level definitions:
| Level | Status | Meaning |
|---|---|---|
| 4 | 🟢 Normal | Full recommendation distribution |
| 2-3 | 🟡 Baseline | Basically normal, minor constraints |
| 1 | ⚪ New | Under review (new post) |
| -1 | 🔴 Soft limit | Mild throttling, decreased recommendations |
| -5 to -101 | 🔴 Moderate | Moderate throttling, minimal promotion |
| -102 | ⛔ Severe | Irreversible — must delete and repost |
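The level-to-status mapping above is simple enough to encode directly when post-processing `redbook health --json` output. A sketch (`health_status` is an illustrative helper, not CLI output):

```python
def health_status(level: int) -> str:
    """Map the hidden XHS note `level` to the statuses tabulated above."""
    if level == 4:
        return "🟢 Normal"
    if 2 <= level <= 3:
        return "🟡 Baseline"
    if level == 1:
        return "⚪ New"
    if level == -1:
        return "🔴 Soft limit"
    if -101 <= level <= -5:
        return "🔴 Moderate"
    if level == -102:
        return "⛔ Severe"
    return "Unknown"
```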
Additional checks:
How to interpret:
Output: Terminal dashboard with color-coded distribution summary, limited notes list, and risk factor warnings.
Discovery credit: @xxx111god — xhs-note-health-checker
Combine modules for different analysis depths.
Modules: A → C → F
Search 3–5 keywords, classify engagement type, rank opportunities. Good for quickly validating whether a niche is worth deeper research.
Modules: A → B → E → F → H
Build keyword matrix, map topic × scene intersections, check content form performance, score opportunities, brainstorm specific content ideas.
Modules: A → D
Find who dominates a niche and study their content strategy, posting frequency, and engagement patterns.
Modules: A → B → C → D → E → F → G → H
The comprehensive playbook — keyword landscape, cross-topic heatmap, engagement signals, creator profiles, content form analysis, opportunity scoring, audience personas, and data-backed content ideas.
Command: redbook analyze-viral "<url>" --json
No module composition needed — analyze-viral returns hook analysis, engagement ratios, comment themes, author baseline comparison, and a 0-100 viral score in one call.
# 1. Find top notes
redbook search "keyword" --sort popular --json
# 2. Extract template from top 3 notes (replaces manual synthesis)
redbook viral-template "<url1>" "<url2>" "<url3>" --json
viral-template automates what previously required manual synthesis across analyze-viral results. It outputs a ContentTemplate JSON that captures dominant hooks, body structure ranges, engagement profile, and audience signals.
Modules: I
Single-module workflow for managing comment engagement on your notes. Use batch-reply --dry-run to audit, then execute with a template.
Modules: A → J → H → L
Keyword research → viral template extraction → data-backed content brainstorm → render to image cards. The template provides structural constraints that guide Module H's content ideas. Module L renders the final markdown to XHS-ready PNGs.
Modules: A → J → H → L → post
The full pipeline: research keywords → extract viral template → brainstorm content → write markdown → render to styled image cards → publish via redbook post --images cover.png card_1.png ...
Modules: M
Run redbook health --all periodically to catch throttled notes early. If level drops below 1, investigate the note's content for policy violations. Combine with Module I to check if throttled notes still have unanswered comments worth replying to.
Modules: A → C → I → J → K → M
Comprehensive automation playbook — keyword analysis, engagement classification, comment operations, viral replication templates, and engagement automation workflow.
XHS enforces aggressive anti-spam (风控) that detects automated behavior through device fingerprinting, activity ratio monitoring, and timing pattern analysis. The CLI applies safe defaults based on platform research.
| Action | Safe Interval | CLI Default | Hard Cap |
|---|---|---|---|
| Post a note | 3-4 hours (2-3 notes/day max) | N/A (manual) | — |
| Comment | ≥3 minutes | N/A (manual) | — |
| Reply | ≥3 minutes | N/A (manual) | — |
| Batch reply delay | ≥3 minutes | 5 min ±30% jitter | — |
| Batch reply count | — | 10 | 30 |
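The batch-reply delay policy above (default 300,000 ms, ±30% jitter, 180,000 ms floor) can be sketched as follows — an illustration of the arithmetic, not the CLI's actual implementation:

```python
import random

def next_delay_ms(base_ms: int = 300_000, jitter: float = 0.30,
                  floor_ms: int = 180_000) -> int:
    """Pick a jittered delay: base ±30%, never below the 3-minute floor."""
    jittered = base_ms * (1 + random.uniform(-jitter, jitter))
    return max(int(jittered), floor_ms)

# With the 5-minute default, every delay lands in [210000, 390000] ms.
print(next_delay_ms())
```

Randomizing the interval avoids the uniform timing patterns that XHS bot detection looks for.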
- post, comment, and reply commands display safe-interval reminders after each action
- Always --dry-run first, review candidates, then execute
- Never set --delay below 180000 (3 min)
- Space post commands 3–4 hours apart (2–3 notes/day maximum)

The following operations work reliably via API:
The following operations are unreliable via API (frequently trigger captcha):
- post (use --private for a higher success rate)

The following operations require browser automation (not supported by this CLI):
redbook search <keyword>

Search for notes by keyword. Returns note titles, URLs, likes, and author info.
redbook search "Claude Code教程" --json
redbook search "AI编程" --sort popular --json # Sort: general, popular, latest
redbook search "Cursor" --type image --json # Type: all, video, image
redbook search "MCP Server" --page 2 --json # Pagination
Options:
- --sort <type>: general (default), popular, latest
- --type <type>: all (default), video, image
- --page <n>: page number (default: 1)

redbook read <url>

Read a note's full content — title, body text, images, likes, comment count.
redbook read "https://www.xiaohongshu.com/explore/abc123" --json
Accepts full URLs or short note IDs. Falls back to HTML scraping if API returns captcha.
redbook comments <url>

Get comments on a note. Use --all to fetch all pages.
redbook comments "https://www.xiaohongshu.com/explore/abc123" --json
redbook comments "https://www.xiaohongshu.com/explore/abc123" --all --json
redbook user <userId>

Get a creator's profile — nickname, bio, follower count, note count, likes received.
redbook user "5a1234567890abcdef012345" --json
The userId is the hex string from the creator's profile URL.
redbook user-posts <userId>

List all notes posted by a creator. Returns titles, URLs, likes, timestamps.
redbook user-posts "5a1234567890abcdef012345" --json
redbook feed

Browse the recommendation feed.
redbook feed --json
redbook topics <keyword>

Search for topic hashtags. Useful for finding trending topics to attach to posts.
redbook topics "Claude Code" --json
redbook favorites [userId]

List a user's collected (bookmarked) notes. Defaults to the current logged-in user when no userId is provided.
redbook favorites --json # Your own favorites
redbook favorites "5a1234567890abcdef" --json # Another user's favorites
redbook favorites --all --json # Fetch all pages
Options:
- --all: fetch all pages of favorites (default: first page only)

Note: other users' favorites are only visible if they haven't set their collection to private.
redbook collect <url>

Collect (bookmark) a note to your favorites.
redbook collect "https://www.xiaohongshu.com/explore/abc123"
redbook uncollect <url>

Remove a note from your collection.
redbook uncollect "https://www.xiaohongshu.com/explore/abc123"
redbook analyze-viral <url>

Analyze why a viral note works. Returns a deterministic viral score (0–100).
redbook analyze-viral "https://www.xiaohongshu.com/explore/abc123" --json
redbook analyze-viral "https://www.xiaohongshu.com/explore/abc123" --comment-pages 5
Options:
- --comment-pages <n>: comment pages to fetch (default: 3, max: 10)

JSON output structure:
Returns { note, score, hook, content, visual, engagement, comments, relative, fetchedAt }.
- score.overall (0–100) — composite of hook (20) + engagement (20) + relative (20) + content (20) + comments (20)
- hook.hookPatterns[] — detected title patterns (Identity Hook, Emotion Word, Number Hook, Question, etc.)
- engagement — likes, comments, collects, shares + ratios (collectToLikeRatio, commentToLikeRatio, shareToLikeRatio)
- relative.viralMultiplier — this note's likes / the author's median likes
- relative.isOutlier — true if viralMultiplier > 3
- comments.themes[] — top recurring keyword phrases from comments

redbook viral-template <url> [url2] [url3]

Extract a reusable content template from 1–3 viral notes. Analyzes each note (same pipeline as analyze-viral) and synthesizes the common structural patterns.
redbook viral-template "<url1>" "<url2>" "<url3>" --json
redbook viral-template "<url1>" --comment-pages 5 --json
Options:
- --comment-pages <n>: comment pages to fetch per note (default: 3, max: 10)

JSON output structure:
Returns { dominantHookPatterns, titleStructure, bodyStructure, engagementProfile, audienceSignals, sourceNotes, generatedAt }.
- dominantHookPatterns[] — hook types appearing in the majority of input notes
- titleStructure.avgLength — average title length across notes
- bodyStructure.lengthRange — [min, max] body length
- engagementProfile.type — "reference" / "insight" / "entertainment"
- audienceSignals.commonThemes[] — merged comment themes across notes

redbook comment <url>

Post a top-level comment on a note.
redbook comment "<noteUrl>" --content "Great post!" --json
Options:
- --content <text> (required): comment text

redbook reply <url>

Reply to a specific comment on a note.
redbook reply "<noteUrl>" --comment-id "<commentId>" --content "Thanks for asking!" --json
Options:
- --comment-id <id> (required): comment ID to reply to (from comments --json output)
- --content <text> (required): reply text

redbook batch-reply <url>

Reply to multiple comments using a filtering strategy. Always preview with --dry-run first.
# Preview which comments match the strategy
redbook batch-reply "<noteUrl>" --strategy questions --dry-run --json
# Execute replies with a template (default 5 min delay with jitter)
redbook batch-reply "<noteUrl>" --strategy questions \
--template "感谢提问!{content}" --max 10
Options:
- --strategy <name>: questions (default), top-engaged, all-unanswered
- --template <text>: reply template with {author}, {content} placeholders
- --max <n>: max replies (default: 10, hard cap: 30)
- --delay <ms>: delay between replies in ms (default: 300000 / 5 min, min: 180000 / 3 min); ±30% random jitter applied automatically
- --dry-run: preview candidates without posting (default when no template)

Safety: stops immediately on captcha. No template = dry-run only. Delays include random jitter to avoid the uniform timing patterns that trigger XHS bot detection.
redbook render <file>

Render a markdown file with YAML frontmatter into styled PNG image cards. Uses the user's existing Chrome installation — no browser download needed.
redbook render content.md --style xiaohongshu
redbook render content.md --style dark --output-dir ./cards
redbook render content.md --pagination separator --json
Options:
- --style <name>: purple, xiaohongshu (default), mint, sunset, ocean, elegant, dark
- --pagination <mode>: auto (default), separator (split on ---)
- --output-dir <dir>: output directory (default: same as the input file)
- --width <n>: card width in px (default: 1080)
- --height <n>: card height in px (default: 1440)
- --dpr <n>: device pixel ratio (default: 2)

Requires: puppeteer-core and marked (npm install -g puppeteer-core marked). Does NOT require XHS cookies — purely offline rendering.
Override Chrome path: Set CHROME_PATH environment variable if Chrome is not in the standard location.
redbook whoami

Check connection status. Verifies cookies are valid and shows the logged-in user.
redbook whoami
redbook post (Limited)

Publish an image note. Frequently triggers captcha (type=124) on the creator API. Image upload works, but the publish step is unreliable. For posting, consider browser automation instead.
redbook post --title "标题" --body "正文" --images cover.png --json
redbook post --title "测试" --body "..." --images img.png --private --json
Options:
- --title <title>: note title (required)
- --body <body>: note body text (required)
- --images <paths...>: image file paths (required, at least one)
- --topic <keyword>: search for and attach a topic hashtag
- --private: publish as a private note

All commands accept:
- --cookie-source <browser>: chrome (default), safari, firefox
- --chrome-profile <name>: Chrome profile directory name (e.g., "Profile 1"); auto-discovered if omitted
- --json: output as JSON

The XHS API requires a valid xsec_token to fetch note content. Without it, read, comments, and analyze-viral return {}.
The same token is also required for shareable URLs. Any https://www.xiaohongshu.com/explore/<id> URL without ?xsec_token=...&xsec_source=... is 302-redirected by XHS's anti-scrape layer to https://www.xiaohongshu.com/404/sec_*?source=xhs_sec_server&originalUrl=.... This affects anyone who clicks the URL — Safari, iOS link previews, agent action buttons, etc.
webUrl — use this (since v0.7.0):
Every note-returning command (feed, search, user-posts, favorites, board, read, post) now includes a webUrl field with the token baked in and the correct xsec_source. Consumers should use webUrl directly — do not construct URLs by hand.
redbook feed --json | jq '.items[0].webUrl'
# => "https://www.xiaohongshu.com/explore/<id>?xsec_token=<t>&xsec_source=pc_feed"
xsec_source is set per-command: pc_feed, pc_search, pc_user, pc_board, pc_share.
Key rules:
- A URL carrying ?xsec_token=... from a previous session will return {}. Never cache or reuse old URLs.
- search and feed always return fresh tokens. Every item includes a valid xsec_token + a pre-built webUrl.
- A bare noteId has no token and returns {}. Running redbook read <noteId> without a token almost always fails.

The correct workflow — always search first:
# WRONG — stale URL or bare noteId, will likely return {}
redbook read "689da7b0000000001b0372c6" --json
redbook read "https://www.xiaohongshu.com/explore/689da7b0?xsec_token=OLD_TOKEN" --json
# RIGHT — search first, then use the fresh URL with token
redbook search "AI编程" --sort popular --json
# Extract the webUrl from search results, then:
redbook read "<webUrl from search result>" --json
For agents: Prefer webUrl from the response. When only a bare noteId is available, search first to obtain a fresh token, then use the returned webUrl.
Commands that need xsec_token: read, comments, analyze-viral
Commands that do NOT need xsec_token: search, user, user-posts, feed, whoami, topics
The XHS API returns abbreviated numbers with Chinese unit suffixes:
| API value | Actual number |
|---|---|
| "1.5万" | 15,000 |
| "2.4万" | 24,000 |
| "1.2亿" | 120,000,000 |
| "115" | 115 |
万 = ×10,000. 亿 = ×100,000,000. Numbers under 10,000 are plain integers as strings.
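When parsing --json output manually, a converter for these suffixes might look like the following (an illustrative helper, not part of the CLI):

```python
def parse_xhs_count(value: str) -> int:
    """Convert XHS abbreviated counts to integers: 万 = ×10,000, 亿 = ×100,000,000."""
    s = str(value).strip()
    if s.endswith("万"):
        return int(float(s[:-1]) * 10_000)
    if s.endswith("亿"):
        return int(float(s[:-1]) * 100_000_000)
    return int(s)  # plain integer string, e.g. "115"

assert parse_xhs_count("1.5万") == 15_000
assert parse_xhs_count("1.2亿") == 120_000_000
assert parse_xhs_count("115") == 115
```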
The analyze-viral command handles this automatically. When parsing --json output manually, watch for these suffixes in interact_info fields (liked_count, collected_count, etc.).
| Error | Meaning | Fix |
|---|---|---|
| {} empty response | Missing or expired xsec_token | Search first to get a fresh token |
| "No 'a1' cookie" | Not logged into XHS in browser | Log into xiaohongshu.com in Chrome |
| "Session expired" | Cookie too old | Re-login in Chrome |
| "NeedVerify" / captcha | Anti-bot triggered | Wait and retry, or reduce request frequency |
| "IP blocked" (300012) | Rate limited | Wait or switch network |
When producing analysis reports, use these formats:
Data tables: Markdown tables with exact field mappings. Always include the metric unit.
Heatmaps: ASCII bar charts for cross-topic comparison:
职场 生活 教育 创业
AI编程 ████ 8K ██ 2K ████ 12K ░░ 200
Claude Code ██ 3K ░░ 100 ██ 4K █ 1K
Creator comparison: Structured table with both quantitative metrics and qualitative style assessment.
Final reports: Use this section order:
import { XhsClient } from "@lucasygu/redbook";
import { loadCookies } from "@lucasygu/redbook/cookies";
const cookies = await loadCookies("chrome");
const client = new XhsClient(cookies);
const results = await client.searchNotes("AI编程", 1, 20, "popular");
const topics = await client.searchTopics("Claude Code");
- Cookies are loaded from your browser (--cookie-source)
- Rendering requires puppeteer-core and marked (npm install -g puppeteer-core marked); uses your existing Chrome — no additional browser download