Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Sentiment Radar

v1.0.0

Multi-platform sentiment monitoring and analysis for products/brands/topics. Collect public opinions from Chinese platforms (小红书/XHS via MediaCrawler) and En...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for danielwangyy/sentiment-radar.

Prompt preview (Install & Setup):
Install the skill "Sentiment Radar" (danielwangyy/sentiment-radar) from ClawHub.
Skill page: https://clawhub.ai/danielwangyy/sentiment-radar
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install sentiment-radar

ClawHub CLI


npx clawhub@latest install sentiment-radar
Security Scan

VirusTotal: Suspicious (View report →)
OpenClaw: Suspicious (high confidence)
Purpose & Capability
The name and description (multi-platform sentiment monitoring) match what the included scripts do (XHS crawler integration, Douyin scraping, analysis). However, the skill metadata declares no required env/config items, while the runtime instructions and code expect several local artifacts (the MediaCrawler repo, MEDIA_CRAWLER_PATH, ~/.mcporter/xpoz/tokens.json, and a Chrome instance with CDP enabled). This mismatch between declared requirements and actual runtime needs is a coherence concern.
Instruction Scope
Runtime instructions and scripts instruct the agent/user to run a third‑party crawler (MediaCrawler) in CDP mode using the user's Chrome browser (QR login/scan), modify the crawler's config file, connect to a local Chrome CDP endpoint (localhost:9222), and read/write JSON data produced by those tools. Using CDP with the user's browser can expose browser session state (cookies, logged-in sessions) to the crawler; the skill asks you to modify config files in the MediaCrawler repo. These operations are within the stated scraping/analysis purpose but are privacy-sensitive and should be flagged to non-technical users.
Install Mechanism
There is no packaged installer (lower risk). The SKILL.md recommends cloning a GitHub repo (github.com/NanmiCoder/MediaCrawler) and installing Playwright — both are normal for web scraping. No obscure downloads or URL-shortened/external binary fetches are used in the instructions. The absence of an install spec in registry metadata is inconsistent with the fact that the skill relies on external projects, but the install steps themselves are from common sources.
Credentials
The skill metadata lists no required credentials, but the instructions expect access to: (1) MediaCrawler installation path (MEDIA_CRAWLER_PATH or specific locations), (2) mcporter/Xpoz OAuth token file at ~/.mcporter/xpoz/tokens.json for Twitter/Reddit access, and (3) a local Chrome instance with CDP enabled. Requesting or relying on locally stored OAuth tokens and a user's browser debugging endpoint is proportionate to scraping/sentiment analysis, but it's not declared in metadata and exposes sensitive local credentials/session state—this mismatch is a red flag.
Persistence & Privilege
The skill does not request always:true and does not attempt to modify agent-wide configuration. It updates configuration files within the third-party MediaCrawler repo (which is expected for that workflow) but does not persistently alter other skills or platform settings.
What to consider before installing
This skill appears to do what it says (scrape XHS/Douyin and analyze comments), but it expects local tools and credentials that are not declared in the registry metadata. Before installing or running:

  • The crawler runs in CDP mode against your Chrome browser (localhost:9222) and may access browser session data; consider using a dedicated browser profile or a disposable VM/container.
  • The workflow expects an Xpoz/mcporter tokens.json file for Twitter/Reddit access and a MediaCrawler repo at a local path. These are sensitive credentials and files: verify their presence and contents, and avoid pointing the skill at production credentials you care about.
  • Review the referenced third-party repo (https://github.com/NanmiCoder/MediaCrawler) yourself to confirm its behavior and safety before cloning and running it.
  • The analyze script includes IP-location data and comment excerpts in reports; ensure you are permitted to process and share any PII that may appear.
  • If you want to proceed, run the tooling in an isolated environment (VM/container) and avoid reusing your main browser profile for CDP scraping.

If you need, provide the repo URLs and the content of any external token files and I can help you inspect them for risky behavior.


Tags: latest, monitoring, sentiment, twitter, xiaohongshu
423 downloads · 0 stars · 1 version · Updated 15h ago
v1.0.0
MIT-0

Sentiment Radar

Multi-platform social media sentiment collection and analysis.

Supported Platforms

| Platform | Method | Auth Required |
| --- | --- | --- |
| 小红书 (XHS) | MediaCrawler (CDP browser) | QR code login |
| Twitter | Xpoz MCP (xpoz.getTwitterPostsByKeywords) | OAuth token |
| Reddit | Xpoz MCP (xpoz.getRedditPostsByKeywords) | OAuth token |

Prerequisites

MediaCrawler (for 小红书)

If not installed:

git clone https://github.com/NanmiCoder/MediaCrawler ~/.openclaw/workspace/skills/media-crawler
cd ~/.openclaw/workspace/skills/media-crawler
uv sync
playwright install chromium

Config: config/base_config.py — set ENABLE_CDP_MODE = True, SAVE_DATA_OPTION = "json"
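In config/base_config.py, the two settings above would look roughly like this. This is an illustrative excerpt; confirm the option names against your MediaCrawler checkout:

```python
# config/base_config.py (excerpt) -- illustrative; option names taken from
# the instructions above, confirm against your MediaCrawler checkout.
ENABLE_CDP_MODE = True      # attach to an existing Chrome via CDP instead of a bundled browser
SAVE_DATA_OPTION = "json"   # write crawl results as JSON under data/<platform>/json/
```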

Xpoz MCP (for Twitter/Reddit)

Requires mcporter with Xpoz OAuth configured. Token at ~/.mcporter/xpoz/tokens.json.
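Before running the Twitter/Reddit steps, a quick pre-flight check that the token file exists can save a failed run. This is a hypothetical helper; the token file's schema is not documented here, so it only confirms presence and lists top-level keys:

```python
import json
import os

# Hypothetical pre-flight check: the Xpoz token file's schema is not
# documented here, so only its presence and top-level keys are inspected.
path = os.path.expanduser("~/.mcporter/xpoz/tokens.json")
if os.path.exists(path):
    with open(path) as f:
        tokens = json.load(f)
    print("Xpoz token file found, top-level keys:", sorted(tokens))
else:
    print("Xpoz token file missing; run mcporter's Xpoz OAuth flow first")
```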

Workflow

Step 1: Define targets

Identify products/brands and search keywords. Example:

Products: Plaud录音笔, 钉钉闪记, 飞书录音豆
Keywords (XHS): Plaud录音笔,钉钉闪记,飞书妙记,AI录音笔评测,录音豆
Keywords (Twitter): Plaud NotePin, DingTalk recorder, Lark voice

Step 2: Collect data

XHS collection

Run MediaCrawler with keywords. Use CDP mode (user's Chrome browser) for anti-detection. The crawler needs QR code scan for login — run in background with exec(background=true).

cd skills/media-crawler
# Update keywords in config/base_config.py, then:
.venv/bin/python main.py --platform xhs --lt qrcode

Environment fixes for macOS:

export MPLBACKEND=Agg
export PATH="/usr/sbin:$PATH"

Data output: data/xhs/json/search_contents_YYYY-MM-DD.json and search_comments_YYYY-MM-DD.json
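The two output files can be joined on their shared note ID so each note carries its own comments. The records and field names below (note_id, title, liked_count, content) are assumptions modeled on typical MediaCrawler output; verify them against your own files:

```python
from collections import defaultdict

# Hypothetical records mirroring search_contents_*.json and
# search_comments_*.json; the field names (note_id, title, liked_count,
# content) are assumptions -- verify them against your own output files.
notes = [
    {"note_id": "n1", "title": "Plaud录音笔体验", "liked_count": "1200"},
]
comments = [
    {"note_id": "n1", "content": "续航不错"},
    {"note_id": "n1", "content": "订阅太贵了"},
]

# Group comments under their parent note so analysis can walk note -> comments.
by_note = defaultdict(list)
for c in comments:
    by_note[c["note_id"]].append(c["content"])

for note in notes:
    note["comments"] = by_note.get(note["note_id"], [])

print(notes[0]["title"], len(notes[0]["comments"]))  # Plaud录音笔体验 2
```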

Twitter/Reddit collection

Use Xpoz MCP tools directly:

  • xpoz.getTwitterPostsByKeywords — returns posts with engagement metrics
  • xpoz.getRedditPostsByKeywords — returns posts with comments

Step 3: Analyze

Run the analysis script on collected data:

python3 scripts/analyze.py \
  --data ./data \
  --products '{"Plaud": ["plaud","notepin"], "钉钉": ["钉钉","dingtalk","闪记"]}' \
  --output report.md

The script performs:

  • Keyword distribution analysis (notes per keyword, total likes/collects)
  • Product mention frequency in comments
  • Sentiment classification (positive/negative/concern/neutral)
  • Top notes ranking by engagement
  • Price/subscription complaint extraction
  • Product comparison comment extraction
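The sentiment split above can be approximated with a keyword lexicon. This is a minimal sketch, not the actual wordlists or logic in scripts/analyze.py; the lexicons are illustrative:

```python
# Illustrative keyword lexicons -- not the actual lists in scripts/analyze.py.
POSITIVE = ["好用", "推荐", "不错", "love", "great"]
NEGATIVE = ["难用", "退货", "失望", "bad", "broken"]
CONCERN = ["订阅", "贵", "隐私", "subscription", "price"]

def classify(comment: str) -> str:
    """Bucket a comment as positive/negative/concern/neutral by keyword hit."""
    text = comment.lower()
    if any(w in text for w in NEGATIVE):
        return "negative"
    if any(w in text for w in CONCERN):
        return "concern"
    if any(w in text for w in POSITIVE):
        return "positive"
    return "neutral"

print(classify("订阅太贵了"))    # concern
print(classify("非常好用，推荐"))  # positive
```

Checking negative before concern means a comment that both complains and mentions price counts as negative; the real script's precedence may differ.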

Step 4: Report

The analysis outputs:

  1. JSON results to stdout (for programmatic use)
  2. Markdown report to --output path

Combine XHS + Twitter data into a comprehensive report. See references/report-template.md for structure.

Key Analysis Dimensions

  1. Sentiment split — positive vs negative vs concern ratio
  2. Product mentions — which products get discussed most
  3. Pricing complaints — subscription fatigue, value perception
  4. Comparison comments — head-to-head user opinions
  5. User pain points — feature requests, complaints, unmet needs
  6. Engagement metrics — likes, collects, shares as popularity signals
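Dimension 6 (and the top-notes ranking from Step 3) reduces to sorting by a combined engagement score. A minimal sketch with hypothetical data:

```python
# Hypothetical notes with engagement counts; rank by likes + collects.
# The scoring formula is an assumption, not analyze.py's actual weighting.
notes = [
    {"title": "Plaud深度评测", "likes": 1200, "collects": 300},
    {"title": "录音豆上手", "likes": 80, "collects": 20},
    {"title": "NotePin一个月后", "likes": 560, "collects": 410},
]
top = sorted(notes, key=lambda n: n["likes"] + n["collects"], reverse=True)
for n in top:
    print(n["title"], n["likes"] + n["collects"])
```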

Notes

  • XHS data uses Chinese number format (e.g., "1.1万") — parse_count() in analyze.py handles this
  • MediaCrawler has 2s sleep between requests to avoid rate limiting
  • Each keyword returns ~20 notes per page (configurable in MediaCrawler config)
  • Comments are fetched per note automatically
  • For recurring monitoring, schedule via cron and compare against previous reports
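A parse_count() helper of the kind mentioned above might look like this (a sketch; analyze.py's actual implementation may differ):

```python
def parse_count(raw: str) -> int:
    """Normalize XHS-style counts: '1.1万' -> 11000, '3亿' -> 300000000, '42' -> 42."""
    raw = raw.strip()
    units = {"万": 10_000, "亿": 100_000_000}
    for suffix, factor in units.items():
        if raw.endswith(suffix):
            return int(float(raw[: -len(suffix)]) * factor)
    return int(raw) if raw else 0

print(parse_count("1.1万"))  # 11000
print(parse_count("42"))     # 42
```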
