Twitter Scraper
Audited by ClawScan on May 10, 2026.
Overview
This is a disclosed Twitter/X scraping skill, but its explicit stealth, anti-detection, login-wall handling, and proxy features make it review-worthy.
Install or use this only if you are comfortable with a scraper that advertises stealth and anti-detection techniques. Confirm the scraping is authorized and allowed for your use case, review any actual code before running it, restrict any Google API key, and manage or delete the local output files it creates.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Using this skill could get a user, IP address, or proxy account blocked and may violate the target platform's rules or applicable scraping restrictions.
The skill explicitly describes browser behavior intended to evade automation detection and access friction while scraping Twitter/X. This is disclosed, but it materially increases the risk of platform-control bypass or misuse.
Stealth JavaScript — hides `navigator.webdriver`, spoofs plugins/languages/hardware, adds canvas noise, fakes the `chrome` object ...
Login wall handling — automatically dismisses Twitter's login prompts and overlays.
Use only where you have authorization or a clear lawful basis, require explicit user approval for scraping runs, avoid stealth/proxy features unless justified, and set strict rate and scope limits.
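The "strict rate and scope limits" above can be enforced in code rather than left to operator discipline. A minimal sketch, assuming a Python scraper loop; the `RateLimiter` name, the per-request interval, and the profile cap are illustrative, not part of the reviewed skill:

```python
import time


class RateLimiter:
    """Cap both request rate and total scope for a scraping run."""

    def __init__(self, interval: float, max_items: int):
        self.interval = interval    # minimum seconds between requests
        self.max_items = max_items  # hard cap on profiles fetched per run
        self.count = 0
        self.last = 0.0

    def acquire(self) -> bool:
        """Return False once the scope cap is reached; otherwise sleep to honor the rate."""
        if self.count >= self.max_items:
            return False
        wait = self.interval - (time.monotonic() - self.last)
        if wait > 0:
            time.sleep(wait)
        self.last = time.monotonic()
        self.count += 1
        return True
```

A scraper loop would call `acquire()` before each fetch and stop when it returns `False`, making the run's scope auditable up front.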
A user may need to obtain or run separate code that was not reviewed here.
SKILL.md describes commands, Python/Chromium browser automation, and config files, but the runnable code is not included in the provided artifacts. This limits review of what would actually execute.
No install spec and no code files are present — this is an instruction-only skill.
Before running any referenced scraper command, verify the source, inspect the implementation, and pin or review dependencies.
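One way to make that inspection step stick is to pin the reviewed script to a known SHA-256 digest, so a later run fails if the file has changed since review. A sketch only; the file name and the digest variable are placeholders, not artifacts from this skill:

```python
import hashlib
from pathlib import Path


def verify_script(path: str, expected_sha256: str) -> bool:
    """Compare a file's SHA-256 digest against one recorded after manual review."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == expected_sha256


# Illustrative guard before executing a downloaded scraper:
# if not verify_script("scraper.py", PINNED_DIGEST):
#     raise SystemExit("scraper.py changed since review; re-inspect before running")
```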
A mismanaged API key could expose quota, billing, or project access tied to the user's Google Cloud project.
The skill can use a Google Custom Search API key and search engine ID for profile discovery. This is optional and purpose-aligned, but it is still a credential-bearing integration.
Create API credentials → API Key ... Copy the Search Engine ID ... If not configured, discovery falls back to DuckDuckGo
Use a dedicated, restricted API key with quota limits, do not store it in shared files, and revoke it when no longer needed.
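Part of that mitigation — keeping the key out of shared files — can be done by reading credentials from the environment at call time. A minimal sketch: the endpoint and `key`/`cx`/`q` parameters are Google's documented Custom Search JSON API, but the environment-variable names are assumptions:

```python
import os
import urllib.parse


def build_search_url(query: str) -> str:
    """Build a Custom Search request with credentials pulled from the environment,
    so the API key never lands in a shared or checked-in config file."""
    key = os.environ["GOOGLE_CSE_KEY"]  # dedicated, quota-limited key (assumed var name)
    cx = os.environ["GOOGLE_CSE_ID"]    # search engine ID (assumed var name)
    params = urllib.parse.urlencode({"key": key, "cx": cx, "q": query})
    return f"https://www.googleapis.com/customsearch/v1?{params}"
```

Pairing this with a key restricted to the Custom Search API in the Cloud console limits the blast radius if the key leaks.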
Local files may reveal scraped accounts, categories, locations, tweets, and media, even if the source data was public.
The skill persists scraped profile data, queue state, and media files locally so sessions can be resumed and exported.
Queue files: `data/queue/{location}_{category}_{timestamp}.json`; Scraped data: `data/output/{username}.json`; Thumbnails: `thumbnails/{username}/profile_*.jpg`
Store outputs in a dedicated location, review them before sharing, and delete queue/output/media files when they are no longer needed.
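The deletion step above can be scripted against the paths the review lists. A sketch under the assumption that the skill's layout matches those paths exactly; adjust the patterns if the actual tree differs:

```python
import shutil
from pathlib import Path


def purge_outputs(root: str = ".") -> list[str]:
    """Delete queue files, scraped JSON, and thumbnail trees; return what was removed."""
    removed = []
    base = Path(root)
    for pattern in ("data/queue/*.json", "data/output/*.json"):
        for f in base.glob(pattern):
            f.unlink()
            removed.append(str(f))
    thumbs = base / "thumbnails"
    if thumbs.is_dir():
        shutil.rmtree(thumbs)  # removes thumbnails/{username}/profile_*.jpg trees
        removed.append(str(thumbs))
    return removed
```

Running this at the end of a session (or on a schedule) keeps scraped personal data from accumulating on disk.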
