Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Sentiment Radar

v1.0.0

Multi-platform sentiment monitoring and analysis for products/brands/topics. Collect public opinions from Chinese platforms (小红书/XHS via MediaCrawler) and En...

Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
high confidence
Purpose & Capability
The name/description (multi-platform sentiment monitoring) matches what the included scripts do (XHS crawler integration, Douyin scraping, analysis). However, the skill metadata declares no required env/config items, while the runtime instructions and code expect several local artifacts: the MediaCrawler repo, MEDIA_CRAWLER_PATH, ~/.mcporter/xpoz/tokens.json, and a Chrome instance with CDP enabled. This mismatch between declared requirements and actual runtime needs is itself a warning sign.
Instruction Scope
Runtime instructions and scripts instruct the agent/user to run a third‑party crawler (MediaCrawler) in CDP mode using the user's Chrome browser (QR login/scan), modify the crawler's config file, connect to a local Chrome CDP endpoint (localhost:9222), and read/write JSON data produced by those tools. Using CDP with the user's browser can expose browser session state (cookies, logged-in sessions) to the crawler; the skill asks you to modify config files in the MediaCrawler repo. These operations are within the stated scraping/analysis purpose but are privacy-sensitive and should be flagged to non-technical users.
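Before letting any tool attach to your browser this way, it helps to confirm what is actually listening. The sketch below (an illustration, not part of the skill's own code) probes the standard Chrome DevTools Protocol metadata endpoint, assuming the default port 9222 mentioned in the instructions:

```python
import json
import urllib.request
import urllib.error

def cdp_version_url(host: str = "localhost", port: int = 9222) -> str:
    """Build the standard CDP metadata endpoint URL (/json/version)."""
    return f"http://{host}:{port}/json/version"

def probe_cdp(host: str = "localhost", port: int = 9222, timeout: float = 2.0):
    """Return Chrome's version metadata if a CDP endpoint is up, else None."""
    try:
        with urllib.request.urlopen(cdp_version_url(host, port), timeout=timeout) as resp:
            return json.load(resp)
    except (urllib.error.URLError, OSError):
        return None

if __name__ == "__main__":
    info = probe_cdp()
    if info is None:
        print("No CDP endpoint on localhost:9222 — Chrome is not exposing remote debugging")
    else:
        # Anything that can reach this port has the same access the crawler would get.
        print("CDP endpoint open, browser:", info.get("Browser"))
```

If the probe succeeds, remember that any local process can attach to that same port; that is exactly why the session-exposure concern above matters.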
Install Mechanism
There is no packaged installer (lower risk). The SKILL.md recommends cloning a GitHub repo (github.com/NanmiCoder/MediaCrawler) and installing Playwright; both are normal steps for web scraping. The instructions use no obscure downloads, URL shorteners, or external binary fetches. The registry metadata declares no install spec even though the skill depends on external projects, which is inconsistent, but the install steps themselves come from common sources.
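For reference, the setup the SKILL.md describes amounts to roughly the following. This is a sketch under assumptions: the destination path is hypothetical, and the exact dependency file (requirements.txt here) should be confirmed against the upstream repo before running anything:

```python
import shutil
import subprocess
from pathlib import Path

REPO_URL = "https://github.com/NanmiCoder/MediaCrawler"

def install_commands(dest: Path) -> list[list[str]]:
    """The setup steps the SKILL.md describes, as argv lists (not yet run)."""
    return [
        ["git", "clone", REPO_URL, str(dest)],
        # Assumed dependency file; verify against the cloned repo first.
        ["pip", "install", "-r", str(dest / "requirements.txt")],
        ["playwright", "install", "chromium"],
    ]

def run_install(dest: Path) -> None:
    """Execute the steps, failing fast if git is unavailable or a step errors."""
    if shutil.which("git") is None:
        raise RuntimeError("git is required")
    for argv in install_commands(dest):
        subprocess.run(argv, check=True)
```

Keeping the commands as plain argv lists makes them easy to review before execution, which is the point of this scan.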
Credentials
The skill metadata lists no required credentials, but the instructions expect access to: (1) a MediaCrawler installation path (MEDIA_CRAWLER_PATH or specific locations), (2) an mcporter/Xpoz OAuth token file at ~/.mcporter/xpoz/tokens.json for Twitter/Reddit access, and (3) a local Chrome instance with CDP enabled. Relying on locally stored OAuth tokens and a user's browser debugging endpoint is proportionate to scraping/sentiment analysis, but none of it is declared in metadata, and it exposes sensitive local credentials and session state; this mismatch is a red flag.
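A quick way to surface this mismatch on your own machine is to audit which of the undeclared artifacts actually exist before running the skill. The sketch below checks the two filesystem artifacts named above; the check predicate is injected so the logic stays testable:

```python
import os
from pathlib import Path
from typing import Callable, Mapping

# Artifacts the instructions expect but the registry metadata never declares.
TOKEN_FILE = Path("~/.mcporter/xpoz/tokens.json")

def audit_undeclared(env: Mapping[str, str],
                     exists: Callable[[Path], bool]) -> list[str]:
    """Report which undeclared runtime artifacts are missing locally."""
    missing = []
    crawler = env.get("MEDIA_CRAWLER_PATH")
    if not crawler or not exists(Path(crawler)):
        missing.append("MediaCrawler repo (MEDIA_CRAWLER_PATH)")
    if not exists(TOKEN_FILE.expanduser()):
        missing.append("Xpoz/mcporter token file (~/.mcporter/xpoz/tokens.json)")
    return missing

if __name__ == "__main__":
    for item in audit_undeclared(os.environ, Path.exists):
        print("missing:", item)
```

Anything the audit finds present is a sensitive asset the skill will be able to read; anything missing will surface as a runtime failure the metadata never warned you about.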
Persistence & Privilege
The skill does not request always:true and does not attempt to modify agent-wide configuration. It updates configuration files within the third-party MediaCrawler repo (which is expected for that workflow) but does not persistently alter other skills or platform settings.
What to consider before installing
This skill appears to do what it says (scrape XHS/Douyin and analyze comments), but it expects local tools and credentials that are not declared in the registry metadata. Before installing or running:
- The crawler runs in CDP mode against your Chrome browser (localhost:9222) and may access browser session data; use a dedicated browser profile or a disposable VM/container.
- The workflow expects an Xpoz/mcporter tokens.json file for Twitter/Reddit access and a MediaCrawler repo at a local path. These are sensitive credentials and files: verify their presence and contents, and avoid pointing the skill at production credentials you care about.
- Review the referenced third-party repo (https://github.com/NanmiCoder/MediaCrawler) yourself to confirm its behavior before cloning and running it.
- The analyze script includes IP-derived locations and comment excerpts in reports; make sure you are permitted to process and share any PII that may appear.
- If you proceed, run the tooling in an isolated environment (VM/container) and do not reuse your main browser profile for CDP scraping.
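The dedicated-profile advice above can be sketched as follows. The browser binary name is a placeholder (it varies by OS), while --remote-debugging-port and --user-data-dir are standard Chrome switches; a throwaway user-data-dir means the CDP session never sees your main profile's cookies or logins:

```python
import subprocess
import tempfile
from pathlib import Path

def chrome_argv(binary: str, profile_dir: Path, port: int = 9222) -> list[str]:
    """Launch args for Chrome with CDP enabled on an isolated profile."""
    return [
        binary,
        f"--remote-debugging-port={port}",
        # Fresh profile: no cookies or sessions from your everyday browser.
        f"--user-data-dir={profile_dir}",
        "--no-first-run",
    ]

def launch_isolated(binary: str = "google-chrome") -> subprocess.Popen:
    """Start Chrome in a throwaway profile directory and return the process."""
    profile = Path(tempfile.mkdtemp(prefix="sentiment-radar-"))
    return subprocess.Popen(chrome_argv(binary, profile))
```

Deleting the temporary profile directory after the scraping run also discards any QR-login session the crawler established there.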

Like a lobster shell, security has layers — review code before you run it.

Tags: latest · monitoring · sentiment · twitter · xiaohongshu

License

MIT-0
Free to use, modify, and redistribute. No attribution required.
