Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Sentiment Analysis Monitor

v1.0.0

Sentiment Analysis Monitor — AI-powered social media sentiment monitoring & analysis tool. Monitors Xiaohongshu, Douyin, Weibo, WeChat Official Accounts for...

by YK-Global (@billjamno58)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for billjamno58/sentiment-analysis-monitor.

Prompt preview: Install & Setup
Install the skill "Sentiment Analysis Monitor" (billjamno58/sentiment-analysis-monitor) from ClawHub.
Skill page: https://clawhub.ai/billjamno58/sentiment-analysis-monitor
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install billjamno58/sentiment-analysis-monitor

ClawHub CLI

Package manager switcher

npx clawhub@latest install sentiment-analysis-monitor
Security Scan
Capability signals
  • Crypto
  • Can make purchases
  • Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Benign
View report →
OpenClaw
Suspicious (medium confidence)
Purpose & Capability
The name/description (monitor Chinese social platforms and run sentiment analysis) aligns with the included scripts and requirements (Playwright, BeautifulSoup, jieba, requests). However, the code launches Playwright via embedded Node.js scripts (Node.js required) while the registry only lists Python Playwright in requirements and declares no required binaries — that runtime dependency is not declared and is necessary for the skill to work.
Instruction Scope
SKILL.md describes scraping public pages with Playwright, storing results in a local SQLite DB, sending Feishu/email alerts, and calling a GLM-4 API for sentiment. The implementation includes anti-detection techniques (headless browsing, UA rotation, delays) and code that executes Node.js Playwright snippets via subprocess. The SKILL.md and registry do not clearly document the need for Node, Playwright browser installation steps, or explicit configuration of the GLM key / Feishu webhook / SMTP; these are operational gaps. The billing module will contact an external billing endpoint (skillpay.me) and expects billing env vars (documented in the billing.py header) even though the skill metadata lists no required env vars.
Install Mechanism
There is no install spec (instruction-only), but a requirements.txt is present. The code expects Node.js and browser binaries for Playwright (the scripts spawn Node.js to run embedded JS), yet the skill does not declare Node or provide 'playwright install' instructions. That mismatch increases the chance the skill will fail or behave unexpectedly, and it means additional manual steps with elevated installation privileges may be required.
Credentials
Registry declares no required env vars, but the shipped code references/uses several credentials and endpoints: billing expects SKILL_BILLING_API_KEY, SKILL_BILLING_SKILL_ID, FEISHU_USER_ID (documented in billing.py header), GLM API key is stored in config (glm_api_key), and Feishu webhook / SMTP credentials are held in config.json. The billing module will POST to skillpay.me and may disclose a user identifier. The lack of declared env vars in metadata is an incoherence and means the skill may prompt for or read secrets not advertised.
Persistence & Privilege
The skill is not always-enabled and does not claim elevated platform privileges. It creates a directory under the user's home (~/.sentiment-compass) and a local SQLite DB to store scraped data and analyses, which is consistent with its purpose. This local persistence is expected but should be noted (sensitive data stored on disk).
What to consider before installing
  • Missing runtime requirements: The code runs Playwright via embedded Node.js scripts, but the skill does not declare Node.js or browser installation steps. Ensure Node and the Playwright browsers are installed, and understand that extra installation is required.
  • Undeclared credentials and endpoints: billing.py documents SKILL_BILLING_API_KEY, SKILL_BILLING_SKILL_ID, and FEISHU_USER_ID (and the code calls https://skillpay.me). The registry lists no required env vars — ask the author to clarify what secrets and endpoints the skill needs and why. Only supply credentials you trust, and avoid reusing high-privilege keys.
  • GLM API & notification credentials: the GLM API key, Feishu webhook, and SMTP config are stored in a config file in your home directory (~/.sentiment-compass/config.json). These are sensitive; decide whether storing them locally is acceptable.
  • Anti-detection scraping: the skill implements headless browsing, UA rotation, and other evasion tactics. Scraping may violate platform terms of service and could trigger IP blocking; consider the legal/ethical implications and rate limits.
  • Isolation and audit: run the skill in a controlled environment (VM/container) until you verify its behavior. Review the billing endpoint (skillpay.me) and any network calls; if you do not trust the source, do not provide billing or notification credentials.
  • Ask questions / seek fixes: request from the author (or maintainer) explicit install instructions (Node version, playwright install steps), an accurate list of required env vars, and a rationale for using skillpay.me for billing. If those are not provided or the answers are unsatisfactory, treat the skill as untrusted.

Like a lobster shell, security has layers — review code before you run it.

Latest version: vk977rydy33m8f05d4z2mve8j3s85e66p
20 downloads · 0 stars · 1 version · Updated 5h ago
v1.0.0 · MIT-0

Sentiment Analysis Monitor

AI-powered social media sentiment monitoring and analysis tool for Chinese platforms. Monitor keyword mentions across Xiaohongshu, Douyin, Weibo, and WeChat Official Accounts in real time.

Features Overview

| Feature | Description |
| --- | --- |
| Platform Monitoring | Xiaohongshu, Douyin, Weibo, WeChat Official Account keyword search |
| AI Sentiment Analysis | Positive / Neutral / Negative + reason summary |
| Sentiment Reports | Total mentions, sentiment ratio, trending charts, top posts |
| Auto Alerts | Feishu/email push when negative mentions exceed threshold |
| Scheduled Crawling | OpenClaw Cron for periodic scraping |
| Storage | Local SQLite + JSON |

Key: No official platform APIs required — pure Playwright scraping of public content.


Quick Start

Add Keyword Monitoring

User: Monitor keyword "brand_name" on Xiaohongshu and Douyin
User: Add sentiment monitoring for "product_name", platforms: Weibo + WeChat Official Account

→ Parse keyword and platforms → Create monitoring task → Execute first crawl → Return result summary

View Sentiment Report

User: Show sentiment report for "brand_name"
User: How is "competitor_name" trending in the last 7 days?

→ Return structured report: total mentions, positive/neutral/negative ratios, trending charts, top post list

Set Alert Rules

User: Set negative alert for "brand_name", threshold 10 posts/day, notify me when exceeded
User: Configure Feishu alert, push to "Operations Group"

→ Configure negative threshold and push channel → Auto-judge after each crawl

Manage Monitoring Tasks

User: List my sentiment monitoring tasks
User: Delete monitoring for "brand_name"
User: Pause monitoring for "competitor_name"

Tiered Features

| Feature | FREE | PRO |
| --- | --- | --- |
| Keywords | 1 | Unlimited |
| Platforms | Xiaohongshu | All 4 platforms |
| Daily limit | 50 | Unlimited |
| Data history | 7 days | Unlimited |
| Sentiment reports | | Yes |
| Priority monitoring | | Yes |

Platform Monitoring Details

Xiaohongshu (XHS)

  • Search URL: https://www.xiaohongshu.com/search_result?keyword={keyword}&source=web_explore_search
  • Anti-detection: Playwright headless, UA rotation, random delay 3~8s
  • Content extracted: Note title, body, author, likes/bookmarks/comments count, publish time

Douyin

  • Search URL: https://www.douyin.com/search/{keyword}
  • Anti-detection: Playwright headless, scroll simulation, lazy-load handling
  • Content extracted: Video title, author, likes/comments/shares count, publish time

Weibo

  • Search URL: https://s.weibo.com/weibo?q={keyword}&typeall=1
  • Anti-detection: Playwright headless, UA rotation
  • Content extracted: Post body, author, reposts/comments/likes count, publish time

WeChat Official Accounts

  • Search URL: https://weixin.sogou.com/weixin?type=2&query={keyword}
  • Anti-detection: Playwright headless
  • Content extracted: Article title, abstract, account name, read count, publish time
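The four search-URL patterns above can be collected into a small lookup helper. This is a minimal sketch for illustration: `build_search_url` and `SEARCH_URLS` are hypothetical names, not the skill's actual code, and the real crawler also layers Playwright, UA rotation, and delays on top.

```python
from urllib.parse import quote

# Search-URL templates per platform, taken from the sections above.
SEARCH_URLS = {
    "xhs": "https://www.xiaohongshu.com/search_result?keyword={kw}&source=web_explore_search",
    "douyin": "https://www.douyin.com/search/{kw}",
    "weibo": "https://s.weibo.com/weibo?q={kw}&typeall=1",
    "wechat": "https://weixin.sogou.com/weixin?type=2&query={kw}",
}

def build_search_url(platform: str, keyword: str) -> str:
    """Return the search URL for a platform, percent-encoding the keyword."""
    template = SEARCH_URLS[platform]
    return template.format(kw=quote(keyword))

print(build_search_url("weibo", "brand"))
# → https://s.weibo.com/weibo?q=brand&typeall=1
```

Percent-encoding matters here because the monitored keywords are typically Chinese and must be UTF-8 encoded before being placed in a URL.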

Sentiment Analysis

Chinese semantic sentiment analysis via GLM-4 API:

Input: Post body / comment content
Output:
  sentiment: "positive" | "neutral" | "negative"
  score: -1.0 ~ 1.0 (negative to positive)
  reason: Brief reason summary

Classification rules:

  • Positive: score > 0.1
  • Neutral: -0.1 <= score <= 0.1
  • Negative: score < -0.1
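The three classification rules above translate directly into a threshold function. A minimal sketch (`classify` is a hypothetical name; the skill's own mapping lives in scripts/sentiment.py):

```python
def classify(score: float) -> str:
    """Map a sentiment score in [-1.0, 1.0] to a label per the rules above:
    > 0.1 positive, < -0.1 negative, otherwise neutral."""
    if score > 0.1:
        return "positive"
    if score < -0.1:
        return "negative"
    return "neutral"

print(classify(0.85))   # → positive
print(classify(0.1))    # → neutral (boundary is inclusive on the neutral side)
print(classify(-0.4))   # → negative
```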

Alert Rules

| Rule | Description |
| --- | --- |
| Negative threshold | Trigger when daily negative mentions exceed N (default: 5) |
| Trend alert | Trigger when negative rate increases > 20% week-over-week |
| Push channels | Feishu group bot / Email (SMTP) |

Feishu Alert Message Template

Sentiment Alert | {keyword}
Time: {time}
Today's Negatives: {negative_count} (threshold: {threshold})
Negative Rate: {negative_rate}%
Latest Negative Posts:
- {title} — {platform} @{author}
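Filling the template above and pushing it to a Feishu group bot can be sketched as follows. `build_alert_text` and `send_feishu_alert` are hypothetical names; the payload shape assumes Feishu's standard custom-bot text message format, which may differ from what the skill actually sends.

```python
import json
import urllib.request

def build_alert_text(keyword, time, negative_count, threshold, negative_rate, posts):
    """Fill the alert template above; `posts` is a list of (title, platform, author)."""
    lines = [
        f"Sentiment Alert | {keyword}",
        f"Time: {time}",
        f"Today's Negatives: {negative_count} (threshold: {threshold})",
        f"Negative Rate: {negative_rate}%",
        "Latest Negative Posts:",
    ]
    lines += [f"- {title} — {platform} @{author}" for title, platform, author in posts]
    return "\n".join(lines)

def send_feishu_alert(webhook_url: str, text: str) -> None:
    """POST a plain-text message to a Feishu group-bot webhook."""
    payload = json.dumps({"msg_type": "text", "content": {"text": text}}).encode()
    req = urllib.request.Request(
        webhook_url, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)
```

As the security scan notes, the webhook URL itself is a credential: anyone holding it can post into the group, so treat the config file that stores it accordingly.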

Usage Examples

Example 1: Brand Sentiment Monitoring

User: Monitor "coffee brand" on Xiaohongshu and Douyin, crawl every day at 9am

→ Create task → Return confirmation → Next Cron trigger executes first crawl

Example 2: Competitor Negative Alert

User: Alert me via Feishu when negative posts appear for "competitor"

→ Set negative threshold alert → Configure Feishu group bot → Auto-push when threshold exceeded

Example 3: Sentiment Report

User: Generate this week's sentiment report for "brand_name"

→ Query local SQLite for this week's data → AI generate summary → Return Markdown report


Core Scripts

See scripts/sentiment.py for full implementation:

from scripts.sentiment import SentimentCompass

compass = SentimentCompass(tier="PRO")

# ─── Add keyword monitoring ──────────────
compass.add_keyword(
    keyword="brand_name",
    platforms=["xhs", "douyin", "weibo", "wechat"],
    frequency="daily",      # 6h/12h/daily/weekly
    priority=1,             # 1=high priority (Pro only)
)

# ─── Execute crawl (manual) ──────────────
results = compass.crawl_keyword("brand_name")

# ─── Sentiment analysis (single) ─────────
analysis = compass.analyze_sentiment("This product is really great, highly recommended!")
# → {"sentiment": "positive", "score": 0.85, "reason": "Contains positive words like 'great' and 'highly recommended'"}

# ─── Batch analysis (save API calls) ─────
batch = compass.batch_analyze([
    "Product is great, worth buying",
    "Quality is terrible, not worth the price at all",
    "It's okay, just average",
])
for item in batch:
    print(f"[{item['sentiment']}] {item['text'][:30]}")

# ─── Generate report ─────────────────────
report = compass.generate_report(keyword="brand_name", days=7)
print(report["summary"])   # AI-generated text summary
print(report["stats"])      # Statistical data

# ─── Check alerts ───────────────────────
alerts = compass.check_alerts(keyword="brand_name")
if alerts:
    compass.send_feishu_alert(alerts)

# ─── List tasks ─────────────────────────
tasks = compass.list_tasks()
for t in tasks:
    print(f"  {t['keyword']} — {t['platforms']} — {t['status']}")

Technical Implementation

  • Crawler: Playwright (headless) for dynamic pages, UA rotation, random delay 3~8s
  • AI Analysis: GLM-4 API (open.bigmodel.cn), batch analysis to save tokens
  • Storage: SQLite (~/.sentiment-compass/data.db) + JSON config
  • Scheduling: OpenClaw Cron, supports 6h/12h/daily/weekly frequency
  • Push: Feishu group bot Webhook / Email SMTP

Data Model

-- Monitoring tasks
CREATE TABLE tasks (
    id INTEGER PRIMARY KEY,
    keyword TEXT UNIQUE,
    platforms TEXT,           -- comma-separated: xhs,douyin,weibo,wechat
    frequency TEXT DEFAULT 'daily',
    priority INTEGER DEFAULT 0,
    status TEXT DEFAULT 'active',
    created_at TEXT,
    last_crawl_at TEXT
);

-- Post data
CREATE TABLE posts (
    id INTEGER PRIMARY KEY,
    keyword TEXT,
    platform TEXT,            -- xhs/douyin/weibo/wechat
    post_id TEXT,
    title TEXT,
    content TEXT,
    author TEXT,
    author_id TEXT,
    likes INTEGER DEFAULT 0,
    comments INTEGER DEFAULT 0,
    shares INTEGER DEFAULT 0,
    published_at TEXT,
    fetched_at TEXT,
    url TEXT UNIQUE
);

-- Sentiment analysis results
CREATE TABLE analyses (
    id INTEGER PRIMARY KEY,
    post_id INTEGER REFERENCES posts(id),
    sentiment TEXT,            -- positive/neutral/negative
    score REAL,                -- -1.0 ~ 1.0
    reason TEXT,
    analyzed_at TEXT
);

-- Alert records
CREATE TABLE alerts (
    id INTEGER PRIMARY KEY,
    keyword TEXT,
    alert_type TEXT,           -- threshold/trend
    threshold INTEGER,
    negative_count INTEGER,
    negative_rate REAL,
    triggered_at TEXT,
    notification_sent INTEGER DEFAULT 0
);
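The `url TEXT UNIQUE` constraint in the posts table gives deduplication for free: re-crawling the same page cannot create duplicate rows if inserts use `INSERT OR IGNORE`. A minimal sketch against an in-memory database (`save_post` is a hypothetical helper, and only a subset of columns is shown):

```python
import sqlite3

# In-memory DB using (a subset of) the posts schema above.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE posts (
    id INTEGER PRIMARY KEY,
    keyword TEXT, platform TEXT, title TEXT,
    url TEXT UNIQUE
)""")

def save_post(keyword, platform, title, url):
    # INSERT OR IGNORE silently skips rows whose url already exists
    # (the UNIQUE constraint turns the duplicate into a no-op).
    conn.execute(
        "INSERT OR IGNORE INTO posts (keyword, platform, title, url) VALUES (?, ?, ?, ?)",
        (keyword, platform, title, url),
    )
    conn.commit()

save_post("brand", "weibo", "post A", "https://example.com/1")
save_post("brand", "weibo", "post A", "https://example.com/1")  # duplicate, ignored
count = conn.execute("SELECT COUNT(*) FROM posts").fetchone()[0]
print(count)  # → 1
```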

FAQ

| Question | Answer |
| --- | --- |
| Will accounts get blocked? | Pure public content scraping with 3~8s random delay between requests, 3 retries on failure |
| Does it support login-gated content? | Current version does not support login-required pages |
| How accurate is sentiment analysis? | Based on GLM-4 Chinese semantic understanding; accuracy depends on text length and context |
| How many keywords can I monitor? | FREE=1, PRO=unlimited |
| How long is data retained? | FREE=7 days, PRO=unlimited |
| How to configure Feishu alerts? | Provide group bot Webhook URL — no app permissions needed |

Tier Limits

TIER_LIMITS = {
    "FREE":  {"max_keywords": 1,  "platforms": ["xhs"],                     "daily_limit": 50,  "history_days": 7},
    "PRO":   {"max_keywords": -1, "platforms": ["xhs","douyin","weibo","wechat"], "daily_limit": -1, "history_days": -1, "report": True, "priority": True},
}
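Enforcing these limits amounts to treating `-1` as "unlimited". A minimal sketch of one such check (`can_add_keyword` is a hypothetical name; the skill's actual enforcement is in scripts/sentiment.py):

```python
# Tier limits as declared above; -1 means unlimited.
TIER_LIMITS = {
    "FREE": {"max_keywords": 1,  "platforms": ["xhs"],
             "daily_limit": 50, "history_days": 7},
    "PRO":  {"max_keywords": -1, "platforms": ["xhs", "douyin", "weibo", "wechat"],
             "daily_limit": -1, "history_days": -1},
}

def can_add_keyword(tier: str, current_count: int) -> bool:
    """Return True if the tier allows adding one more monitored keyword."""
    limit = TIER_LIMITS[tier]["max_keywords"]
    return limit == -1 or current_count < limit

print(can_add_keyword("FREE", 0))   # → True
print(can_add_keyword("FREE", 1))   # → False (FREE caps at 1 keyword)
print(can_add_keyword("PRO", 500))  # → True (unlimited)
```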

Billing

  • Pay-per-call: $0.0100 USDT per execution via SkillPay.me
  • Balance insufficient: Payment URL returned — user tops up at https://skillpay.me/sentiment-analysis-monitor
  • External data flow: FEISHU_USER_ID transmitted to skillpay.me/api/v1/billing for balance charging

Required Environment Variables

| Variable | Description |
| --- | --- |
| FEISHU_USER_ID | User open_id for billing (passed by Feishu runtime) |
| SKILL_BILLING_API_KEY | SkillPay Builder API Key |
| SKILL_BILLING_SKILL_ID | SkillPay Skill ID (defaults to sentiment-analysis-monitor) |
