Chinese NLP Toolkit

v1.0.0

Specialized natural language processing for Chinese text. Covers segmentation (jieba), sentiment analysis, keyword extraction, text summarization, tone detection…


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for 371166758-qq/chinese-nlp-toolkit.

Prompt Preview: Install & Setup
Install the skill "Chinese NLP Toolkit" (371166758-qq/chinese-nlp-toolkit) from ClawHub.
Skill page: https://clawhub.ai/371166758-qq/chinese-nlp-toolkit
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install chinese-nlp-toolkit

ClawHub CLI

Package manager switcher

npx clawhub@latest install chinese-nlp-toolkit
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name/description (Chinese NLP: segmentation, sentiment, keywords, summarization, format conversion) match the SKILL.md content. The instructions describe algorithms and heuristics appropriate for these tasks and do not request unrelated resources.
Instruction Scope
SKILL.md stays on-topic: it provides step-by-step guidance for Chinese text processing, edge cases, and output formats. It does not instruct the agent to read system files, environment variables, or transmit data to external endpoints. Note: the skill is purely guidance (no implementation), so actual runtime behavior depends on any concrete implementation the agent or user builds from these instructions.
Install Mechanism
No install spec or code files are present. Nothing will be downloaded or written by the skill itself (lowest risk install profile).
Credentials
No required environment variables, credentials, or config paths are declared or referenced in the instructions. Requested privileges are proportional (none).
Persistence & Privilege
The "always" flag is false and the skill is user-invocable; it does not request permanent presence or system configuration changes.
Assessment
This skill is an instruction-only guide for Chinese NLP and appears internally consistent and low-risk because it requests no installs or secrets. Important things to consider before using or implementing it:

  1. There is no code here. If you or the agent installs libraries (jieba, pypinyin, zhconv, third-party sentiment APIs), make sure those packages come from trusted sources and review them before installing.
  2. The skill will be used to process text; if that text is sensitive, confirm any implementation does not send data to external services you don't trust.
  3. The skill owner/source is unknown and no homepage is provided. If you plan to use a packaged implementation labeled with this skill, inspect it for network calls, credentials usage, or unexpected file I/O.
  4. If you need production reliability (tokenization, NER, domain dictionaries), prefer vetted libraries and explicitly review their dependencies.

Like a lobster shell, security has layers — review code before you run it.

Tags: chinese, latest, nlp, pinyin, segmentation, sentiment, text-analysis
357 downloads
0 stars
1 version
Updated 1 month ago
v1.0.0
MIT-0

Chinese NLP Toolkit

Process and analyze Chinese text with specialized NLP capabilities.

Core Capabilities

1. Text Segmentation (分词)

Chinese has no word boundaries. Segmentation is the foundation of all Chinese NLP.

Approach: Use rule-based heuristics when no library is available:

  • Dictionary matching (maximum forward/backward matching)
  • Context-aware: "南京市长江大桥" → ["南京市", "长江大桥"] not ["南京", "市长", "江大桥"]
  • Domain-specific terms should be added as custom dictionary entries
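
The dictionary-matching heuristic above can be sketched as a maximum forward matcher: scan left to right and, at each position, take the longest dictionary word that matches, falling back to a single character. The dictionary contents and `max_len` below are illustrative only.

```python
# Maximum forward matching: greedily take the longest dictionary word at
# each position; unmatched characters pass through as single-character tokens.
def forward_max_match(text, dictionary, max_len=4):
    words = []
    i = 0
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if length == 1 or candidate in dictionary:
                words.append(candidate)
                i += length
                break
    return words

# With compounds in the dictionary, the classic example segments correctly.
print(forward_max_match("南京市长江大桥", {"南京市", "长江大桥"}))
# → ['南京市', '长江大桥']
```

Note the greedy scan is only as good as its dictionary: if 市长 were listed but 南京市 were not, the mis-split from the example above would reappear, which is why domain terms belong in custom dictionary entries.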

Common Ambiguities:

Text | Wrong Split | Correct Split
雨伞 | 雨/伞 | 雨伞 (compound)
结婚的和尚未结婚的 | 结婚/的/和尚/未/结婚/的 | 结婚/的/和/尚未/结婚/的
项目部 | 项目/部 | 项目部 (compound)

2. Sentiment Analysis (情感分析)

Beyond positive/negative — Chinese sentiment is nuanced:

Intensity levels: 强烈负面 < 偏负面 < 中性 < 偏正面 < 强烈正面

Chinese-specific signals:

  • Rhetorical questions often indicate negative sentiment: "这也算好?"
  • Sarcasm markers: "呵呵", "厉害了", "也是醉了", "你开心就好"
  • Intensifiers: "非常", "特别", "简直了", "超级"
  • Diminishers: "还行吧", "马马虎虎", "凑合"

Emoji contribution (critical for social media):

  • 😊👍❤️ = positive amplification
  • 😤👎💔 = negative amplification
  • 🙄🙄🙄 = sarcasm/disdain (intensity scales with repetition)
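
The signal classes above can be combined in a toy lexicon scorer. All the word lists here are tiny illustrative samples, not a real sentiment lexicon, and the weights are guesses:

```python
# Toy lexicon-based sentiment scorer (illustrative word lists only).
POSITIVE = {"好", "棒", "喜欢"}
NEGATIVE = {"差", "坑", "烂"}
SARCASM = {"呵呵", "也是醉了", "你开心就好"}

def sentiment_score(text):
    score = sum(text.count(w) for w in POSITIVE)
    score -= sum(text.count(w) for w in NEGATIVE)
    # sarcasm markers override an apparently positive surface reading
    if any(m in text for m in SARCASM):
        score -= 2
    # emoji amplification: repetition scales intensity
    score += sum(text.count(e) for e in ("😊", "👍"))
    score -= sum(text.count(e) for e in ("😤", "👎"))
    return score
```

For example, `sentiment_score("呵呵,真好")` comes out negative despite the literal 好, matching the rhetorical/sarcasm rule above.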

3. Keyword Extraction (关键词提取)

For Chinese text, prioritize:

  • Noun phrases (名词短语)
  • Domain-specific terminology
  • Named entities (人名、地名、机构名)

Method: TF-IDF adapted for Chinese + positional weighting (first/last sentences carry more weight in Chinese writing).
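
A minimal sketch of the positional-weighting half of that method (the IDF component is omitted for brevity; input is assumed to be already segmented, and the 1.5x boost is an illustrative constant):

```python
from collections import Counter

# Term frequency with positional weighting over pre-segmented sentences.
def extract_keywords(sentences, top_n=5, boost=1.5):
    weights = Counter()
    for idx, words in enumerate(sentences):
        # first and last sentences carry more weight in Chinese writing
        w = boost if idx in (0, len(sentences) - 1) else 1.0
        for word in words:
            if len(word) > 1:  # single characters are rarely good keywords
                weights[word] += w
    return [word for word, _ in weights.most_common(top_n)]
```

A real implementation would divide these weights by corpus document frequency (the IDF step) to suppress common words.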

4. Text Summarization (文本摘要)

Chinese-specific rules:

  • Summarize to 20-30% of original length
  • Preserve key numbers, names, and claims
  • Chinese articles often "bury the lead" — the conclusion may be more important than the introduction
  • Extract key sentences using positional + keyword scoring
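
The positional + keyword scoring rule can be sketched as an extractive summarizer. The scoring weights and the 25% default ratio are illustrative choices within the 20-30% guideline:

```python
import re

# Extractive sketch: score sentences, keep the top fraction, restore order.
def summarize(text, ratio=0.25, keywords=()):
    # split on Chinese (and ASCII) sentence-final punctuation
    sentences = [s for s in re.split(r"[。!?!?]", text) if s.strip()]
    n = max(1, round(len(sentences) * ratio))
    scored = []
    for i, s in enumerate(sentences):
        score = 2 * sum(s.count(k) for k in keywords)
        if i in (0, len(sentences) - 1):  # opening/closing sentences matter
            score += 1
        scored.append((score, i, s))
    # take the n highest-scoring sentences, then re-sort by position
    top = sorted(sorted(scored, reverse=True)[:n], key=lambda t: t[1])
    return "。".join(s for _, _, s in top) + "。"
```

Because Chinese articles often bury the lead, a stronger variant would boost the closing sentences more than the opening ones.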

5. Readability Scoring (可读性评分)

Rate Chinese text on a 1-10 scale considering:

  • Average sentence length (characters per sentence)
  • Vocabulary difficulty (HSK level estimate)
  • Clause density (commas per sentence)
  • Use of classical Chinese elements
  • Technical jargon density
Score | Level | Target Audience
1-3 | Easy | General public
4-6 | Moderate | Educated readers
7-8 | Hard | Domain experts
9-10 | Very Hard | Academic specialists
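
Two of the factors above (sentence length and clause density) are enough for a crude score. The weights in this sketch are uncalibrated guesses, not a validated readability formula:

```python
import re

# Crude 1-10 difficulty estimate from sentence length and clause density.
def readability(text):
    sentences = [s for s in re.split(r"[。!?!?]", text) if s.strip()]
    if not sentences:
        return 1
    avg_len = sum(len(s) for s in sentences) / len(sentences)
    clauses = sum(s.count(",") + s.count("、") for s in sentences) / len(sentences)
    return min(10, max(1, round(1 + avg_len / 10 + 2 * clauses)))
```

A fuller implementation would also fold in HSK vocabulary level, classical-Chinese markers, and jargon density as the list above describes.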

6. Format Conversion

Conversion | Example
Simplified → Traditional | 体验 → 體驗
Traditional → Simplified | 體驗 → 体验
Chinese → Pinyin | 你好 → nǐ hǎo
Chinese → Zhuyin | 你好 → ㄋㄧˇ ㄏㄠˇ
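
At its simplest, Simplified → Traditional is a character map, as in this sketch with a deliberately tiny table. A real converter needs a full mapping (e.g. the zhconv or OpenCC libraries), because the relation is many-to-one and sometimes context-dependent:

```python
# Tiny illustrative character map; real conversion needs a full table.
S2T = {"体": "體", "验": "驗", "简": "簡", "转": "轉"}

def to_traditional(text):
    # characters without a mapping pass through unchanged
    return "".join(S2T.get(ch, ch) for ch in text)

print(to_traditional("体验"))  # → 體驗
```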

Workflow

When Processing Chinese Text:

  1. Detect variant: Simplified (简体) or Traditional (繁体)?
  2. Segment: Break into meaningful units
  3. Analyze: Apply the requested analysis type(s)
  4. Report: Present results with Chinese annotations
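
Step 1 of the workflow can be approximated by counting characters that exist in only one script. The marker sets below are tiny illustrative samples; a real detector needs a comprehensive table:

```python
# Heuristic variant detection via small script-exclusive marker sets.
SIMPLIFIED_ONLY = set("体验这样们电还发国说")
TRADITIONAL_ONLY = set("體驗這樣們電還發國說")

def detect_variant(text):
    s = sum(ch in SIMPLIFIED_ONLY for ch in text)
    t = sum(ch in TRADITIONAL_ONLY for ch in text)
    if s > t:
        return "simplified"
    if t > s:
        return "traditional"
    return "unknown"  # shared characters only, or an even mix
```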

Output Format

原文:[original text]
分词:[segmented text with / separators]
关键词:[top 5-10 keywords with relevance scores]
情感:[sentiment label + confidence + key signals]
摘要:[summarized text]
可读性:[score/10 + brief explanation]
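
Assembling that template from precomputed analysis fields is mechanical; the field values in the usage example below are placeholders:

```python
# Render the report template above from precomputed analysis results.
def format_report(original, segments, keywords, sentiment, summary, score):
    return "\n".join([
        f"原文:{original}",
        f"分词:{'/'.join(segments)}",
        f"关键词:{', '.join(keywords)}",
        f"情感:{sentiment}",
        f"摘要:{summary}",
        f"可读性:{score}/10",
    ])
```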

Edge Cases

  • Mixed-language text: Handle code-switching naturally ("这个bug太坑了") — don't force Chinese segmentation on English words
  • Internet slang: Recognize common abbreviations (yyds, xswl, nbcs, awsl) and expand for formal analysis
  • Poetry/classical Chinese: Flag as special case — modern NLP rules don't apply; use classical grammar patterns
  • Dialectal text: Flag non-Mandarin text (Cantonese, Shanghainese written forms) — analysis may be unreliable
  • Zero-width characters: Chinese text sometimes contains invisible characters (U+200B, U+FEFF) that affect processing
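
The zero-width edge case is best handled by a normalization pass before any other step. This sketch strips the characters named above plus the zero-width joiners that commonly travel with them:

```python
import re

# Strip zero-width space (U+200B), zero-width (non-)joiners (U+200C/U+200D),
# and BOM/zero-width no-break space (U+FEFF) before segmentation.
def strip_invisible(text):
    return re.sub(r"[\u200b\u200c\u200d\ufeff]", "", text)
```

Note that stripping U+200D can alter multi-codepoint emoji sequences, so on emoji-heavy social-media text you may want to run sentiment analysis before this pass.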

Common Tasks & Prompts

  • "Analyze the sentiment of this Chinese review"
  • "Extract keywords from this article"
  • "Summarize this Chinese news article in 100 characters"
  • "Rate the readability of this document"
  • "Convert this to Traditional Chinese with pinyin annotation"
  • "Segment this Chinese text and identify named entities"
