Skill · v1.0.0
ClawScan security
X CDP Automation · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Benign · Feb 26, 2026, 9:05 AM
- Verdict
- benign
- Confidence
- medium
- Model
- gpt-5-mini
- Summary
- The code and runtime instructions match the stated purpose (automating X via Chromium/CDP) and do not request unrelated credentials or contact external endpoints. However, the tool drives local browser automation and auto-installs Node packages at runtime, so review the scripts and run them in an isolated profile or environment.
- Guidance
- What to consider before installing or using this skill:
  - Review the scripts locally before running. The tool operates the browser using your profile, so it can act with whatever X session (cookies/tokens) is present.
  - Prefer creating and using an isolated Chromium profile (not your main profile) via the --profile and --port flags to avoid exposing other sessions or data.
  - Run with --dry-run first to verify behavior; dry-run saves a screenshot under /tmp so you can confirm the composed content won't be posted.
  - The setup auto-installs puppeteer-core into /tmp via npm. Consider installing puppeteer-core yourself (verify the version) and setting NODE_PATH instead of letting the script auto-install.
  - Because the script spawns Chromium with a user-data-dir, an attacker or misused script could post as you. Only run this on machines you control, and avoid your primary account if you want extra safety.
  - For stronger isolation, run this inside a disposable VM or container, and/or inspect /tmp/node_modules after installation to ensure no unexpected packages were installed.
  - Limit agent autonomy: require explicit user confirmation before posting (the SKILL.md describes an approval step; keep it in place). Do not grant blanket autonomous invocation to the agent without oversight.
  - If unsure, test on a secondary or throwaway X account first.
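The guidance above can be sketched as a first, safe invocation. The entry-point name (`post.js`) and the profile directory name are assumptions for illustration — check SKILL.md for the skill's real script names; the --profile, --port, and --dry-run flags come from the guidance itself.

```shell
# Sketch only: post.js is a hypothetical entry-point name; verify the
# real one in SKILL.md before running anything.
PROFILE_DIR="$HOME/chromium-profiles/x-skill-isolated"
mkdir -p "$PROFILE_DIR"   # isolated profile, not your daily browsing one

# Compose the dry-run invocation first; per the scan, --dry-run saves a
# screenshot under /tmp instead of posting.
CMD="node post.js --profile $PROFILE_DIR --port 9222 --dry-run"
echo "$CMD"               # review the full command line before executing it
```

Only after inspecting the /tmp screenshot from the dry run would you re-run without --dry-run.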
Review Dimensions
- Purpose & Capability
- ok · Name/description, SKILL.md, and the shipped scripts all implement browser-driven posting, replying, quote-retweeting and article publishing via Chromium CDP. No unrelated env vars, cloud credentials, or hidden network endpoints are requested — the required artifacts (Chromium, puppeteer-core, profile dirs) are proportionate to the claimed purpose.
- Instruction Scope
- note · The instructions and scripts drive a real browser using an explicit user-data-dir profile and will use whatever logged-in X session is present. They read local files (images, body files), create profile directories under ~/chromium-profiles, save dry-run screenshots to /tmp, and may auto-run npm to install puppeteer-core. All of this is consistent with browser automation but implies access to session cookies and any data in that profile (so the tool can act as your account).
- Install Mechanism
- note · There is no formal install spec, but setup.js will auto-install puppeteer-core using npm into /tmp/node_modules (execSync running npm). npm installs from the public registry (moderate, traceable risk). This is expected for Node-based automation, but auto-installing at runtime into /tmp and modifying module.paths increases the attack surface if /tmp is untrusted or the registry/package were compromised.
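One way to pre-empt that runtime auto-install, as the guidance suggests, is to vet and install puppeteer-core yourself and point NODE_PATH at the result. A minimal sketch, assuming a hypothetical local cache directory; the npm line is left commented so nothing is fetched unreviewed.

```shell
# DEPS_DIR is an assumed location, not something the skill defines.
DEPS_DIR="$HOME/.cache/x-skill-deps"
mkdir -p "$DEPS_DIR"
# After auditing the package and pinning a version you have verified,
# install it yourself instead of letting setup.js execSync npm into /tmp:
# npm install --prefix "$DEPS_DIR" puppeteer-core
# Point Node's module resolution at your vetted copy so the script has
# no reason to auto-install:
export NODE_PATH="$DEPS_DIR/node_modules"
echo "$NODE_PATH"
```

Whether setup.js actually skips its install when the module already resolves is worth confirming by reading the script; the scan only notes that NODE_PATH is the intended override.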
- Credentials
- ok · The skill asks for no environment variables or external credentials. It does rely on local Chromium profiles and existing logged-in sessions (which implicitly grant access to your X account). That behavior is proportional to the stated goal but is sensitive — the scripts can use cookie/session state to post as the user.
- Persistence & Privilege
- ok · always: false and no system-wide modifications are requested. The skill will create profile directories (~/chromium-profiles) and write to /tmp, and it launches Chromium with a user-data-dir. It does not change other skills or global agent config. These are reasonable for browser automation but are persistent on disk.
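Since the profile directories and /tmp artifacts persist on disk, a read-only audit like the following can confirm what is left behind. The paths come from the scan; the cleanup line is a commented suggestion (with a placeholder profile name), not part of the skill.

```shell
# Check which of the skill's known on-disk locations currently exist;
# this loop only reads, it does not modify anything.
report=$(
  for p in "$HOME/chromium-profiles" /tmp/node_modules; do
    if [ -e "$p" ]; then echo "present: $p"; else echo "absent: $p"; fi
  done
)
echo "$report"
# Clean up when finished, e.g.:
# rm -rf "$HOME/chromium-profiles/<skill-profile>" /tmp/node_modules
```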
