Sentiment Radar
Verdict: Warn. Audited by ClawScan on May 10, 2026.
Overview
Sentiment Radar is mostly aligned with social-media sentiment monitoring, but it uses logged-in browser/OAuth access, unpinned external crawlers, and unsafe keyword handling that should be reviewed before use.
Install only if you are comfortable using dedicated social-media accounts and a dedicated browser profile for crawling. Pin and review MediaCrawler before running it, avoid untrusted keywords until the config-writing issue is fixed, and do not enable cron/background monitoring unless you have set clear limits.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
If the agent accepts keywords from an untrusted source, those keywords could cause code execution in the user's environment.
The --keywords value is inserted unescaped into a Python config file before running the crawler. Crafted keywords containing quotes or newlines could alter executable Python configuration.
f'KEYWORDS = "{keywords}"' ... config_file.write_text(content) ... subprocess.run([str(venv_python), "main.py", "--platform", "xhs", "--lt", "qrcode"], ...)
Serialize or escape keywords safely, validate allowed characters, and avoid writing untrusted input into Python source before execution.
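One way to close the injection path is to serialize the keyword string as a proper Python literal (e.g. via `json.dumps`) and reject unexpected characters before the config file is written. A minimal sketch, assuming the `KEYWORDS` config layout quoted above; the validation pattern and function names are illustrative, not part of the skill:

```python
import json
import re
from pathlib import Path

# Allow word characters (including CJK), whitespace, commas, and hyphens;
# reject quotes, newlines, and anything else that could alter the
# generated Python source.
ALLOWED = re.compile(r"^[\w\s,-]+$")

def render_keywords_line(keywords: str) -> str:
    """Return a KEYWORDS assignment that is safe to write into config.py."""
    if not ALLOWED.match(keywords):
        raise ValueError(f"keyword contains disallowed characters: {keywords!r}")
    # json.dumps emits a double-quoted, escaped string literal that is also
    # valid Python, so quotes/newlines can never break out of the value.
    return f"KEYWORDS = {json.dumps(keywords, ensure_ascii=False)}\n"

def write_config(config_file: Path, keywords: str) -> None:
    config_file.write_text(render_keywords_line(keywords), encoding="utf-8")
```

Escaping alone would already prevent breakout; the allow-list is defence in depth against inputs that are syntactically safe but semantically surprising.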
The skill can attach to a browser debugging session and scrape using the user's browser context, which may exceed what the user expects from the documented workflow.
The included Douyin scraper uses browser CDP automation and explicitly frames the behavior as bypassing API-level blocking. This capability is not described in the SKILL.md supported platform table.
"Douyin search scraper via browser automation (bypasses API-level blocking)." ... browser = await p.chromium.connect_over_cdp("http://localhost:9222")Disclose or remove the Douyin scraper, require explicit user approval before CDP attachment, and use a dedicated browser profile/account for scraping.
Changes in the external repository or its dependencies could affect what code runs with the user's browser/login context.
The skill instructs users to clone and run an unpinned third-party crawler and dependencies, with no commit pin, checksum, or install spec.
git clone https://github.com/NanmiCoder/MediaCrawler ~/.openclaw/workspace/skills/media-crawler ... uv sync ... playwright install chromium
Pin the MediaCrawler commit, provide a reviewed install spec or lockfile, and document exactly which external code and dependencies are trusted.
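Pinning could also be enforced at run time by comparing the checked-out MediaCrawler revision against an expected hash before executing anything. A sketch; the pinned hash shown is a placeholder, not a reviewed MediaCrawler commit:

```python
import subprocess
from pathlib import Path

# Placeholder: replace with the MediaCrawler commit you actually reviewed.
PINNED_COMMIT = "0000000000000000000000000000000000000000"

def head_commit(repo: Path) -> str:
    """Return the repository's current HEAD commit hash."""
    out = subprocess.run(
        ["git", "-C", str(repo), "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def check_pin(actual: str, expected: str = PINNED_COMMIT) -> None:
    """Refuse to run third-party code that is not at the reviewed revision."""
    if actual != expected:
        raise RuntimeError(
            f"MediaCrawler is at {actual}, expected pinned {expected}; refusing to run"
        )
```

Calling `check_pin(head_commit(repo))` before `uv sync` and `main.py` turns an unpinned clone into a hard failure rather than a silent supply-chain risk.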
The crawler or MCP tools may operate through the user's logged-in accounts or browser profile, exposing account sessions to crawler behavior and platform rate/abuse controls.
The skill uses authenticated browser/session access and OAuth tokens, but the registry declares no primary credential or required environment variables, and the scope of browser/token use is not tightly bounded.
| 小红书 (XHS) | MediaCrawler (CDP browser) | QR code login | ... Requires mcporter with Xpoz OAuth configured. Token at `~/.mcporter/xpoz/tokens.json`. ... Use CDP mode (user's Chrome browser) for anti-detection.
Declare the required credentials, document OAuth scopes, and recommend a dedicated browser profile and separate test accounts for crawling.
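Until the registry declares credentials, a preflight check can at least fail fast when the token file the skill reads is missing. A sketch; only the `~/.mcporter/xpoz/tokens.json` path comes from the skill text, the rest is illustrative:

```python
from pathlib import Path

TOKEN_FILE = Path("~/.mcporter/xpoz/tokens.json").expanduser()

def preflight(token_file: Path = TOKEN_FILE) -> list[str]:
    """Return human-readable problems; an empty list means ready to run."""
    problems = []
    if not token_file.exists():
        problems.append(f"missing Xpoz OAuth token file: {token_file}")
    elif token_file.stat().st_size == 0:
        problems.append(f"token file is empty: {token_file}")
    return problems
```

Surfacing these problems before the crawler starts avoids half-completed runs that have already touched logged-in sessions.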
Recurring crawls could continue collecting data or using logged-in sessions if scheduled without clear limits.
The skill suggests background and recurring execution. This is disclosed and aligned with monitoring, but it can persist beyond a single interactive task if the user enables it.
The crawler needs QR code scan for login — run in background with `exec(background=true)`. ... For recurring monitoring, schedule via cron and compare against previous reports
Only schedule recurring monitoring after explicit user approval, and set clear runtime, account, and output-location limits.
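A scheduled crawl could be wrapped with a hard wall-clock limit so an unattended run cannot continue indefinitely against a logged-in session. A sketch using `subprocess.run(timeout=...)`; the command and limit are placeholders, not values from the skill:

```python
import subprocess

MAX_RUNTIME_S = 600  # placeholder cap for one scheduled crawl

def run_bounded(cmd: list[str], timeout: float = MAX_RUNTIME_S) -> bool:
    """Run one crawl invocation; return False if it exceeds the time limit."""
    try:
        subprocess.run(cmd, timeout=timeout, check=True)
        return True
    except subprocess.TimeoutExpired:
        # subprocess.run kills the child process on timeout before raising.
        return False
```

Invoking the crawler through a wrapper like this from cron, rather than directly, bounds each recurring run even if the crawler itself hangs.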
