Twitter Scraper
v0.1.2
Scrapes public Twitter/X profiles and recent tweets using browser automation with anti-detection and optional profile discovery via Google or DuckDuckGo.
⭐ 1 · 1.4k · 9 current · 9 all-time
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious · high confidence

Purpose & Capability
The stated purpose is a browser-automation scraper built with Playwright (Python) and Chromium, but the registry-level requirements list no binaries or install steps, and the package contains no code files. The skill claims CLI commands (discover/scrape) and persistent data directories; however, nothing in the package would actually provide those binaries or scripts. This is internally inconsistent: a production scraper would legitimately need Python, Playwright, and a browser binary, yet none of these is declared or provided.
Instruction Scope
The SKILL.md instructs the agent to run CLI commands, read/edit config/scraper_config.json, create and write queue/output/thumbnails directories, download thumbnails, use residential proxies, optionally call Google Custom Search API, and 'auto-dismiss' login overlays. Those operations require local binaries, network access, credentials, and file system write privileges. The instructions give broad runtime behaviors (anti-detection, fingerprint spoofing, proxy use) but the skill package provides no code or declared binaries to perform them — the instructions therefore grant broad authority without the supporting artifacts.
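To illustrate the gap, a skill that legitimately reads config/scraper_config.json would normally ship validation code along the lines of the sketch below. The key names here are hypothetical assumptions, since the package includes no code or schema to confirm them.

```python
import json
from pathlib import Path

# Hypothetical keys; the skill package ships no schema confirming these.
REQUIRED_KEYS = {"output_dir", "queue_dir", "proxy", "search_backend"}

def load_config(path="config/scraper_config.json"):
    """Load the scraper config and fail fast on missing keys."""
    cfg = json.loads(Path(path).read_text())
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        raise ValueError(f"config missing keys: {sorted(missing)}")
    return cfg
```

Nothing resembling this exists in the package, which is why the instructions' references to editing that config file cannot be verified.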
Install Mechanism
There is no install specification (instruction-only), which is lowest-risk in isolation, but problematic here because the SKILL.md itself documents a non-trivial runtime stack (python3, chromium, Playwright, proxy configuration). The absence of an install step or source repository means an agent following the instructions might attempt to pull or execute third-party code ad hoc — the mismatch increases the chance of unclear or unsafe runtime behavior.
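Before trusting such a skill, a user can at least check whether the runtime stack the SKILL.md documents is actually present. A stdlib-only sketch, with the binary names taken from the SKILL.md's own claims:

```python
import shutil

def missing_binaries(required=("python3", "playwright", "chromium")):
    """Return the documented binaries that are not found on PATH."""
    return [name for name in required if shutil.which(name) is None]
```

If this returns a non-empty list, the skill's advertised commands cannot run as described, confirming the install-spec gap noted above.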
Credentials
The skill references optional Google API credentials and residential proxy providers (e.g., BrightData) and expects to save local files, yet it declares no required environment variables or credentials. That omission is a red flag: the runtime clearly needs API keys and potentially proxy credentials, but the skill never declares them. Asking users to provide such secrets without a clear declaration or handling details is disproportionate and risky.
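A properly declared skill would name its secrets up front and fail fast when they are absent. A minimal sketch of what that declaration check could look like (the variable names are hypothetical; the SKILL.md declares no env vars at all):

```python
import os

# Hypothetical names inferred from the SKILL.md's mention of a
# Google Custom Search key and proxy credentials.
OPTIONAL_SECRETS = ("GOOGLE_CSE_API_KEY", "PROXY_USERNAME", "PROXY_PASSWORD")

def undeclared_secrets(env=os.environ):
    """Return the expected secrets that are absent from the environment."""
    return [k for k in OPTIONAL_SECRETS if k not in env]
```

The point is not the exact names but the practice: secrets a skill consumes should be enumerated in its manifest, not discovered at runtime.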
Persistence & Privilege
The skill does not request 'always: true' and is user-invocable; autonomous invocation is allowed (platform default) but not a separate privilege here. The skill does expect to write to local data directories (data/queue, data/output, thumbnails), which is normal for a scraper but should be noted by the user.
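The write footprint the SKILL.md implies is easy to reproduce and inspect in a sandbox. A sketch of the directory layout the skill says it will create (paths taken from the SKILL.md):

```python
from pathlib import Path

# Directory paths as documented in the SKILL.md.
DATA_DIRS = ("data/queue", "data/output", "thumbnails")

def create_data_dirs(root="."):
    """Create the scraper's documented data directories under root."""
    made = []
    for d in DATA_DIRS:
        p = Path(root) / d
        p.mkdir(parents=True, exist_ok=True)
        made.append(p)
    return made
```

Running this against a scratch directory shows the full persistence surface the skill requests: three writable trees, nothing outside them.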
What to consider before installing
This SKILL.md describes a heavyweight Playwright/Python scraper, but the package contains no code, no install instructions, and declares no dependencies or credentials. That is inconsistent. Before installing or using this skill, ask the publisher for:

1. the source repository or packaged installer;
2. a clear install spec that declares required binaries (python3, Playwright, Chromium);
3. an explicit list of environment variables/credentials it will use (Google API key, proxy credentials) and where/how they are stored;
4. a reproducible CLI or binary that implements the advertised commands.

Do not supply API keys or proxy credentials until you can verify the source code and installation steps. Also consider the legal and policy implications of scraping X/Twitter and of using residential proxy providers; ensure you have permission and compliance controls. If the publisher cannot provide source or an installable artifact, treat this skill as unusable and potentially unsafe.

Like a lobster shell, security has layers — review code before you run it.
latest · vk973mhsh8yc71m3m1pfx0cgj1981wsfq
