Skill v1.0.0

ClawScan security

Sentiment Radar · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Suspicious · Feb 25, 2026, 2:09 PM
Verdict
suspicious
Confidence
high
Model
gpt-5-mini
Summary
The skill's code and instructions broadly match a social-sentiment scraper, but there are important inconsistencies and privacy-sensitive behaviors (browser CDP usage, undeclared token/config paths) that you should understand before installing.
Guidance
This skill appears to do what it says (scrape XHS/Douyin and analyze comments), but it expects local tools and credentials that are not declared in the registry metadata. Before installing or running:
- Understand that the crawler runs in CDP mode against your Chrome browser (localhost:9222) and may access browser session data; consider using a dedicated browser profile or a disposable VM/container.
- The workflow expects an Xpoz/mcporter tokens.json file for Twitter/Reddit access and a MediaCrawler repo at a local path. These are sensitive credentials and files; verify their presence and contents, and avoid pointing the skill at production credentials you care about.
- Review the referenced third-party repo (https://github.com/NanmiCoder/MediaCrawler) yourself to confirm its behavior and safety before cloning and running it.
- The analyze script includes IP-location data and comment excerpts in reports; ensure you are permitted to process and share any PII that may appear.
- If you want to proceed, run the tooling in an isolated environment (VM/container) and avoid reusing your main browser profile for CDP scraping.
If you need further review, share the repo URLs and the contents of any external token files so they can be inspected for risky behavior.
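The browser-isolation advice above can be sketched in Python: launch a separate Chrome instance against a throwaway profile directory so CDP scraping never touches your main profile's cookies or logged-in sessions. The binary name `google-chrome` is an assumption here; substitute `chromium` or your platform's Chrome path as needed.

```python
import shutil
import subprocess
import tempfile

def launch_disposable_chrome(port=9222, binary="google-chrome"):
    """Start Chrome with CDP enabled against a fresh, empty profile.

    The 'binary' default is an assumption; pass your platform's Chrome path.
    """
    if shutil.which(binary) is None:
        raise FileNotFoundError(f"{binary} not found on PATH")
    # A brand-new profile directory carries no existing cookies or sessions.
    profile = tempfile.mkdtemp(prefix="cdp-profile-")
    proc = subprocess.Popen([
        binary,
        f"--user-data-dir={profile}",        # isolate from the main profile
        f"--remote-debugging-port={port}",   # the CDP endpoint the skill uses
        "--no-first-run",
    ])
    return proc, profile
```

Deleting the profile directory after the run discards whatever session state the crawler accumulated.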

Review Dimensions

Purpose & Capability
note · The name/description (multi-platform sentiment monitoring) matches what the included scripts do (XHS crawler integration, Douyin scraping, analysis). However, the skill metadata declares no required env/config items, while the runtime instructions and code expect several local artifacts (the MediaCrawler repo, MEDIA_CRAWLER_PATH, ~/.mcporter/xpoz/tokens.json, and a Chrome instance with CDP enabled). This mismatch between declared requirements and actual runtime needs is a notable inconsistency.
Instruction Scope
concern · Runtime instructions and scripts direct the agent/user to run a third-party crawler (MediaCrawler) in CDP mode against the user's Chrome browser (QR login/scan), modify the crawler's config file, connect to a local Chrome CDP endpoint (localhost:9222), and read/write JSON data produced by those tools. Using CDP with the user's browser can expose browser session state (cookies, logged-in sessions) to the crawler, and the skill asks you to modify config files inside the MediaCrawler repo. These operations fall within the stated scraping/analysis purpose but are privacy-sensitive and should be flagged to non-technical users.
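To see concretely what a CDP connection exposes, the stdlib-only sketch below queries the DevTools HTTP interface that Chrome serves on the debugging port; any local process can enumerate every open tab's URL and title this way. The `/json/list` path is part of Chrome's documented DevTools HTTP endpoints; host and port defaults mirror the localhost:9222 setup the skill expects.

```python
import json
from urllib.request import urlopen

def list_cdp_targets(host="127.0.0.1", port=9222):
    """Return the target list Chrome's DevTools HTTP endpoint exposes.

    Each entry includes the tab's URL, title, and a WebSocket debugger URL
    that grants full CDP control over that tab.
    """
    with urlopen(f"http://{host}:{port}/json/list", timeout=5) as resp:
        return json.load(resp)
```

Calling `list_cdp_targets()` while Chrome runs with `--remote-debugging-port=9222` prints one entry per open tab, which is why reusing a main browser profile for CDP scraping is risky.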
Install Mechanism
ok · There is no packaged installer (lower risk). The SKILL.md recommends cloning a GitHub repo (github.com/NanmiCoder/MediaCrawler) and installing Playwright; both are normal steps for web scraping. No obscure downloads or URL-shortened/external binary fetches appear in the instructions. The absence of an install spec in the registry metadata is inconsistent with the skill's reliance on external projects, but the install steps themselves come from common sources.
Credentials
concern · The skill metadata lists no required credentials, but the instructions expect access to: (1) a MediaCrawler installation path (MEDIA_CRAWLER_PATH or specific default locations), (2) an mcporter/Xpoz OAuth token file at ~/.mcporter/xpoz/tokens.json for Twitter/Reddit access, and (3) a local Chrome instance with CDP enabled. Requesting locally stored OAuth tokens and a user's browser debugging endpoint is proportionate to scraping/sentiment analysis, but it is not declared in the metadata and exposes sensitive local credentials and session state; this mismatch is a red flag.
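Before pointing the skill at a real token file, it may help to inspect the file it expects. This is a hedged sketch: the path comes from the skill's instructions, but the token schema is not documented anywhere in the metadata, so the function only reports top-level key names and whether the file is world-readable rather than assuming any particular structure.

```python
import json
import stat
from pathlib import Path

def inspect_token_file(path=Path.home() / ".mcporter" / "xpoz" / "tokens.json"):
    """Report existence, permissions, and top-level keys of a token file.

    Never prints token values; the schema is assumed unknown, so only
    key names are surfaced.
    """
    p = Path(path)
    if not p.exists():
        return {"exists": False}
    mode = stat.S_IMODE(p.stat().st_mode)
    data = json.loads(p.read_text())
    return {
        "exists": True,
        "world_readable": bool(mode & stat.S_IROTH),  # tokens should be 0600
        "keys": sorted(data.keys()),  # which services have stored credentials
    }
```

If `world_readable` comes back true, tighten permissions (e.g. `chmod 600`) before letting any third-party tooling read the file.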
Persistence & Privilege
ok · The skill does not request always:true and does not attempt to modify agent-wide configuration. It updates configuration files within the third-party MediaCrawler repo (expected for that workflow) but does not persistently alter other skills or platform settings.