Crawl From X

Audited by ClawScan on May 1, 2026.

Overview

This appears to be a disclosed X/Twitter scraping skill, with the main user-facing risks being its use of a logged-in browser session and local storage of crawled posts/media.

Before installing, confirm you trust the source, understand that the crawler will use your logged-in X/Twitter browser session, and decide where/how long you want scraped posts and downloaded media retained. For lower risk, use a separate browser profile or X account and avoid cron scheduling unless you truly want unattended daily crawling.

Findings (4)

This is an artifact-based, informational review of SKILL.md, package metadata, install specs, static-scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Delegated browser session access

What this means

The skill can browse X/Twitter content available to the logged-in account when you run the crawler.

Why it was flagged

The skill uses the user's logged-in X/Twitter browser session to access content, which is expected for this scraper but is still delegated account/session access.

Skill content
Log in to X (Twitter) in a browser with the Browser Relay extension installed - the skill uses the logged-in session to crawl content
Recommendation

Use a dedicated browser profile or X account if possible, and only run the crawler when you are comfortable with it using that logged-in session.
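One low-cost way to follow the dedicated-profile advice, assuming a Chromium-based browser: keep the crawler's X session in its own profile directory and point the browser at it with `--user-data-dir`. The path below is just an example.

```shell
# Create a dedicated browser profile directory for the crawler's X
# session so it stays isolated from your everyday profile. The path is
# an arbitrary example; any writable directory works.
PROFILE_DIR="$HOME/.config/chrome-x-crawler"
mkdir -p "$PROFILE_DIR"

# Start the browser against that profile and log in to X there, e.g.:
#   google-chrome --user-data-dir="$PROFILE_DIR"
echo "Profile ready at: $PROFILE_DIR"
```

If that profile's session is ever abused or rate-limited, the blast radius is limited to the dedicated profile (or dedicated account) rather than your main one.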

Finding 2: Unverified source provenance

What this means

Users have little provenance information with which to decide whether to trust the included scripts.

Why it was flagged

The package includes local Python scripts to run, but the registry metadata does not provide a verified source or homepage.

Skill content
Source: unknown; Homepage: none
Recommendation

Review the installed files and prefer installing from a trusted, matching upstream source before running the Python commands.

Finding 3: Persistent local storage of crawled content

What this means

Crawled posts and media remain on disk and may later be read or summarized by an agent; that content should be treated as untrusted web/social content.

Why it was flagged

The skill stores retrieved social-media content and downloaded media locally as persistent Markdown and image/video files.

Skill content
posts_YYYYMMDD_HHMMSS.md - full content (Markdown)... images/ - downloaded images and videos
Recommendation

Keep the results directory private, delete old outputs when no longer needed, and do not treat crawled post text as instructions.
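A minimal retention sweep along these lines, assuming the crawler writes its Markdown and media under a `results/` directory; both the path and the 30-day window are assumptions to adjust:

```shell
# Delete crawler output files older than 30 days. RESULTS_DIR and the
# retention window are assumptions; point them at wherever the skill
# actually writes its posts_*.md files and images/ directory.
RESULTS_DIR="${RESULTS_DIR:-results}"
mkdir -p "$RESULTS_DIR"   # no-op if the directory already exists
find "$RESULTS_DIR" -type f -mtime +30 -delete
```

Running this by hand (or on its own schedule) keeps the on-disk window of scraped content bounded instead of growing indefinitely.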

Finding 4: Optional scheduled (cron) execution

What this means

If you add the cron job, the crawler will keep running on a schedule and continue using the configured browser/session environment.

Why it was flagged

The README documents an optional cron setup for unattended recurring crawls; it is user-directed, not hidden.

Skill content
Add to crontab to run daily at 8 AM ... python3 skills/crawl-from-x/scripts/craw_hot.py crawl
Recommendation

Only add scheduled execution deliberately, and remove the cron entry if you no longer want automatic crawls.
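For reference, the README's optional schedule corresponds to a crontab entry like the one below. The script path is taken verbatim from the README; the `cd` and its placeholder path are assumptions about where the `skills/` directory lives.

```shell
# Run the crawler every day at 08:00. Remove this line via `crontab -e`
# to stop unattended crawls. /path/to/workspace is a placeholder for
# the directory containing skills/.
0 8 * * * cd /path/to/workspace && python3 skills/crawl-from-x/scripts/craw_hot.py crawl
```

Remember that each scheduled run reuses the configured logged-in browser session, so the considerations from Finding 1 apply on every crawl, not just the first.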