Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
X (Twitter) Data Scraper
v1.0.1 · X (Twitter) data extraction and analysis. Use when user asks to "get tweets from @username", "search X for", "analyze Twitter data", "fetch tweets about [top...
⭐ 0 · 50 · 0 current · 0 all-time
MIT-0
License MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
medium confidence

Purpose & Capability
The skill's name and description match the included code (API client, search, user timeline, and a Playwright scraper). However, the registry metadata claims no required env vars or credentials, while the README, SKILL.md, and code expect an X_BEARER_TOKEN or credential files; CAPABILITY.md also references username/password and cookie files. Credentials required by the code but undeclared in the metadata are an incoherence.
Instruction Scope
Runtime instructions and code direct the agent to create and read credential files (~/.openclaw/credentials/x_api_tokens.env), and the Playwright script loads cookies from a hardcoded path (/root/cookies_fixed.json). CAPABILITY.md also references /root/.openclaw/credentials/x_login_credentials.env and COOKIES_FILE=/root/cookies.json. The skill therefore reads local secret files and a cookies file; these operations go beyond simply calling the API and could expose sensitive tokens or cookies if misused. There are also mismatches in referenced script names (CAPABILITY.md refers to fetch_user_tweets.py, which is absent).
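The credential-loading pattern described above can be sketched as follows. This is a hedged illustration of the flagged behavior, not the skill's actual code: the function names are hypothetical, and only the two file paths (`~/.openclaw/credentials/x_api_tokens.env` and `/root/cookies_fixed.json`) come from the scan report.

```python
import json
import os

# Illustrative sketch of the flagged pattern: reading a .env-style token
# file and a cookies JSON from fixed paths. Paths are from the scan report;
# load_env_file / load_cookies are hypothetical names.
CREDS_PATH = os.path.expanduser("~/.openclaw/credentials/x_api_tokens.env")
COOKIES_PATH = "/root/cookies_fixed.json"  # hardcoded /root path flagged above

def load_env_file(path):
    """Parse KEY=VALUE lines from a .env-style file into a dict."""
    creds = {}
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                # Skip blanks, comments, and lines without an '=' separator.
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    creds[key.strip()] = value.strip()
    except FileNotFoundError:
        pass
    return creds

def load_cookies(path=COOKIES_PATH):
    """Read a Playwright-style cookies JSON file, if present."""
    try:
        with open(path) as fh:
            return json.load(fh)
    except FileNotFoundError:
        return []
```

The concern is not the parsing itself but that any script following this pattern reads secrets from well-known fixed locations the user may not realize are being touched.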
Install Mechanism
This is an instruction-only skill with no install spec (no downloaded archives or package installs). The included code files run as-is; there is no installer that pulls remote binaries. That lowers install-time risk, but executing the scripts will perform network and file I/O.
Credentials
Although the registry metadata lists no required env vars, the code and docs expect X_BEARER_TOKEN (and, optionally, cookie or login credentials). CAPABILITY.md suggests a username/password and a cookies file. Asking for bearer tokens or cookies is reasonable for a Twitter scraper, but the metadata's failure to declare them is incoherent. The code also respects HTTPS_PROXY, which could route traffic through an attacker-controlled proxy if an operator misconfigures it. The hardcoded /root cookie path suggests the scripts assume privileged file access.
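The HTTPS_PROXY risk is worth making concrete. Python HTTP stacks (stdlib urllib, and by extension requests) pick up proxy settings from the environment by default, so an operator-set HTTPS_PROXY silently reroutes every API call, including those carrying the bearer token. A minimal sketch, using a hypothetical proxy host:

```python
import os
import urllib.request

# Hedged demo: set HTTPS_PROXY and observe that the standard proxy
# resolution picks it up. The proxy hostname is illustrative only.
os.environ["HTTPS_PROXY"] = "http://proxy.example.internal:8080"

proxies = urllib.request.getproxies()
# Any https request made through this stack (and any library that
# honors the environment, such as requests) would now transit this host,
# bearer token and all.
print(proxies.get("https"))
```

This is standard, documented behavior rather than something unique to the skill; the point is that a scraper handling secrets inherits whatever proxy the environment dictates.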
Persistence & Privilege
The skill does not request always:true and does not modify other skills or agent-wide settings. It simply contains scripts that run when invoked. No elevated platform privileges are declared.
What to consider before installing/running
- The skill's metadata claims no credentials, but its README/SKILL.md and code require an X API bearer token (X_BEARER_TOKEN) and the Playwright scraper expects a cookies file. This mismatch is a red flag: the registry info should list required secrets. Treat the package as requiring secrets even if metadata omits them.
- The Playwright scraper loads cookies from a hardcoded root path (/root/cookies_fixed.json) and CAPABILITY.md references /root paths and a login credentials file. Running these scripts with real cookies or account credentials can expose those secrets; do not point them at your primary account credentials. Prefer using a throwaway/test account or only use API tokens with limited scope.
- The code accepts the HTTPS_PROXY / https_proxy environment variables. If you set a proxy, traffic (including bearer tokens) could be routed through that proxy; ensure it is trusted.
- There are several documentation mismatches (missing script names, differing credential filenames). This indicates sloppy packaging; ask the publisher for clarification or a corrected manifest before trusting the skill with sensitive credentials.
- If you still want to use it: inspect and run the code in an isolated environment (sandbox, VM, or container), avoid providing your primary X/Twitter credentials, and prefer using only the API bearer token stored in a limited-permission place. Remove or change hardcoded paths (/root/...) to a safe location you control.
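The last two points above (change the hardcoded /root path, keep secrets in a location you control) can be sketched as a small path-resolution helper. This is an assumption-laden example: the X_COOKIES_FILE variable name and the fallback location are illustrative, not part of the skill.

```python
import os

# Hedged sketch of the checklist's advice: resolve the cookies file from
# the environment instead of a hardcoded /root path. X_COOKIES_FILE is a
# hypothetical variable name; the fallback is a per-user location.
def cookies_path():
    """Return the cookies file path, preferring an env override over a
    per-user default (never /root)."""
    return os.environ.get(
        "X_COOKIES_FILE",
        os.path.expanduser("~/.openclaw/credentials/cookies.json"),
    )
```

Patching the skill's scripts to call something like this, and pointing the override at a sandboxed directory, keeps real cookies out of paths the scripts assume they own.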
What would reduce concern: a corrected registry manifest that explicitly lists required env vars (X_BEARER_TOKEN), a clear explanation of cookie usage and the exact cookie file path, removal of hardcoded /root paths (or explicit documentation explaining why they are used), and an owning homepage/author identity you can verify.
Tags: latest · scraper · twitter · x
