Twitter Intel
Twitter keyword search, monitoring, and trend analysis via GraphQL
MIT-0 · Free to use, modify, and redistribute. No attribution required.
⭐ 0 · 153 · 1 current installs · 2 all-time installs
by Lucius Pang (@PHY041)
Security Scan
OpenClaw
Suspicious
medium confidence

Purpose & Capability
Name/description (Twitter keyword search, monitoring, trend analysis) align with the runtime instructions: GraphQL searches, filtering, monitoring, saving state and generating reports. Required binaries (python3) and TWITTER_COOKIES_PATH are coherent with the intended scraping-based approach.
Instruction Scope
Instructions explicitly tell the agent to load a browser cookies JSON, perform GraphQL requests (including re-extracting query IDs from Twitter's main.*.js), and read/write monitor/state files. Reading the cookies file is necessary for the described method but involves high-sensitivity credentials; the guidance to fetch/parse Twitter client JS and to reuse cookies gives the agent broad ability to act as an authenticated browser client.
Install Mechanism
The registry has no install spec, but SKILL.md tells users to pip install a pre-release package (rnet>=3.0.0rc20 --pre) and to download rnet_twitter.py from a third-party GitHub repo. This relies on unvetted or pre-release code the user must fetch and run, increasing risk compared to using vetted packages or official APIs.
Credentials
Only TWITTER_COOKIES_PATH is requested (no API keys), which is proportionate to cookie-based scraping. However, a cookies file contains authentication tokens equivalent to credentials; handing this to a skill (and allowing it to read/write local state files) is high-impact and should be treated as sensitive.
Persistence & Privilege
always:false (normal). The skill will create local monitor/state directories and saved daily JSON files per its workflow — expected for monitoring/trending. Note: if the agent is allowed to invoke skills autonomously, that capability combined with access to cookies increases potential blast radius, though autonomy itself is the platform default.
What to consider before installing
This skill is internally consistent with its stated purpose, but it requires your browser auth cookies and asks you to install third-party/pre-release code. Before installing or using it:

1. Inspect rnet_twitter.py and the rnet package source (or only install from trusted releases).
2. Keep the cookies file on disk with tight permissions and consider using a disposable browser/profile.
3. Prefer official API credentials over browser cookies where possible.
4. Be cautious about allowing the agent to run this autonomously: it could perform authenticated actions using your cookies.
5. If you proceed, run the tool in an isolated environment (VM/container) and rotate/clear cookies if you detect unexpected behavior.

Like a lobster shell, security has layers: review code before you run it.
Current version: v1.0.0
Tags: intelligence · latest · monitoring · search · twitter
Runtime requirements
🔍 Clawdis
OS: macOS · Linux
Bins: python3
Env: TWITTER_COOKIES_PATH
SKILL.md
Twitter Intel — Keyword Search & Trend Monitor
Search Twitter by keyword, collect high-engagement tweets, analyze trends over time, and generate structured reports. Powered by rnet_twitter.py GraphQL search (no browser automation needed).
Architecture
Phase 1: On-demand Search (user-triggered)
User says "search OpenAI on twitter" -> search -> filter -> report
Phase 2: Keyword Monitoring (cron-driven)
Config defines keywords -> scheduled search -> diff with last run -> alert on new high-engagement tweets
Phase 3: Trend Analysis (on-demand or weekly)
Aggregate saved searches -> group by week -> detect topic shifts -> generate narrative
Prerequisites
```shell
# Install rnet (Rust HTTP client with TLS fingerprint emulation)
pip install "rnet>=3.0.0rc20" --pre

# Required files:
# 1. rnet_twitter.py — lightweight async Twitter GraphQL client
#    Get it: https://github.com/PHY041/rnet-twitter-client
# 2. twitter_cookies.json — your auth cookies
#    Format: [{"name": "auth_token", "value": "..."}, {"name": "ct0", "value": "..."}]
#    Get cookies: Chrome DevTools → Application → Cookies → x.com
#    Cookies expire ~2 weeks. Refresh when you get 403 errors.
```
Set TWITTER_COOKIES_PATH env var to your cookies file location.
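Since a malformed or incomplete export is the most common setup failure, the cookies file can be sanity-checked before first use. A minimal sketch (the helper names `missing_cookie_names` and `load_cookies_file` are illustrative, not part of rnet_twitter.py); it checks only the two names the format above requires:

```python
import json

REQUIRED_COOKIES = {"auth_token", "ct0"}

def missing_cookie_names(cookies):
    """Return required cookie names absent from a cookie list
    (a list of {"name": ..., "value": ...} dicts, as exported above)."""
    present = {c.get("name") for c in cookies}
    return sorted(REQUIRED_COOKIES - present)

def load_cookies_file(path):
    """Load and sanity-check the cookies JSON before handing it to the client."""
    with open(path) as f:
        cookies = json.load(f)
    missing = missing_cookie_names(cookies)
    if missing:
        raise ValueError(f"cookies file is missing: {missing} (re-export from DevTools)")
    return cookies
```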
Phase 1: On-demand Search
When user says "search [keyword] on twitter", "twitter intel [topic]", "find tweets about [X]":
Step 1 — Run Search
```python
import asyncio, os
from rnet_twitter import RnetTwitterClient

async def search(query, count=200):
    client = RnetTwitterClient()
    cookies_path = os.environ.get("TWITTER_COOKIES_PATH", "twitter_cookies.json")
    client.load_cookies(cookies_path)
    tweets = await client.search_tweets(query, count=count, product="Top")
    return tweets

# e.g. tweets = asyncio.run(search('OpenAI lang:en'))
```
Search modes:
| Mode | product= | Use case |
|---|---|---|
| High-engagement | "Top" | Find influential tweets, content analysis |
| Real-time | "Latest" | Monitor breaking discussions, live tracking |
Useful Twitter search operators:
| Operator | Example | Effect |
|---|---|---|
| `lang:en` | `OpenAI lang:en` | English only |
| `since:` / `until:` | `since:2026-01-24 until:2026-02-24` | Date range |
| `-filter:replies` | `OpenAI -filter:replies` | Original tweets only |
| `min_faves:N` | `min_faves:50` | Minimum likes (only works with Latest) |
| `from:` | `from:karpathy` | Specific author |
| `"exact"` | `"AI agent"` | Exact phrase |
Step 2 — Filter & Enrich
After raw search, filter for quality:
```python
filtered = [
    t for t in tweets
    if keyword.lower() in t["text"].lower()
    and (t["favorite_count"] >= 10 or t["retweet_count"] >= 5)
    and not t["is_reply"]
]
```
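The filter above can be wrapped into a reusable function that also ranks by engagement. A sketch: `filter_and_rank` is an illustrative name, and the 2x weighting of retweets over likes is an assumption to tune, not part of the skill:

```python
def filter_and_rank(tweets, keyword, min_likes=10, min_rts=5):
    """Apply the quality filter, then sort by a simple engagement score."""
    kept = [
        t for t in tweets
        if keyword.lower() in t["text"].lower()
        and (t["favorite_count"] >= min_likes or t["retweet_count"] >= min_rts)
        and not t.get("is_reply", False)
    ]
    # retweets weighted 2x likes (assumed heuristic)
    return sorted(kept,
                  key=lambda t: t["favorite_count"] + 2 * t["retweet_count"],
                  reverse=True)
```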
Step 3 — Report
Output a structured summary:
```markdown
## Twitter Intel: [keyword]
**Period:** [date range] | **Tweets found:** N | **After filter:** N

### Top Tweets (by engagement)
1. @author (X likes, Y RTs, Z views) — date
   "tweet text..."
   [link]

### Key Themes
- Theme 1: [description] (N tweets)
- Theme 2: [description] (N tweets)

### Notable Authors
| Author | Followers | Tweets in set | Total engagement |
```
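The "Top Tweets" section can be rendered mechanically from the filtered set. A sketch; the `author`, `date`, and `url` field names are assumptions about the tweet dicts returned by the client:

```python
def render_top_tweets(tweets, limit=5):
    """Render the 'Top Tweets' report lines from tweet dicts.
    Assumes author/date/url fields exist (field names are assumptions)."""
    ranked = sorted(tweets, key=lambda t: t["favorite_count"], reverse=True)[:limit]
    lines = []
    for i, t in enumerate(ranked, 1):
        lines.append(
            f'{i}. @{t["author"]} ({t["favorite_count"]} likes, '
            f'{t["retweet_count"]} RTs) - {t["date"]}'
        )
        lines.append(f'   "{t["text"][:120]}"')   # truncate long tweets
        lines.append(f'   {t["url"]}')
    return "\n".join(lines)
```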
Phase 2: Keyword Monitoring (Cron)
Config File
```json
{
  "monitors": [
    {
      "id": "my-product-en",
      "query": "MyProduct lang:en -filter:replies",
      "product": "Top",
      "count": 100,
      "min_likes": 10,
      "alert_threshold": 100,
      "enabled": true
    }
  ]
}
```
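A loader can validate monitors before the cron loop runs them. A sketch; `enabled_monitors` is an illustrative name, and the required-key set is inferred from the example config above:

```python
REQUIRED_KEYS = {"id", "query", "product", "count"}

def enabled_monitors(config):
    """Return only enabled monitors, failing fast on malformed entries."""
    monitors = []
    for m in config.get("monitors", []):
        if not m.get("enabled"):
            continue
        missing = REQUIRED_KEYS - m.keys()
        if missing:
            raise ValueError(f"monitor {m.get('id', '?')} missing keys: {sorted(missing)}")
        monitors.append(m)
    return monitors
```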
State File
```json
{
  "my-product-en": {
    "last_run": "2026-02-24T12:00:00Z",
    "last_tweet_ids": ["id1", "id2"],
    "total_collected": 450
  }
}
```
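Because a cron run can die mid-write, the state file is best written atomically. A minimal sketch using write-then-rename (`load_state`/`save_state` are illustrative names):

```python
import json, os, tempfile

def load_state(path):
    """A missing state file just means a first run."""
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)

def save_state(path, state):
    """Write-then-rename so an interrupted run can't truncate the file."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f, indent=2)
    os.replace(tmp, path)  # atomic on POSIX
```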
Cron Workflow
- Read config -> iterate enabled monitors
- For each monitor:
  - Run `search_tweets(query, count, product)`
  - Filter by `min_likes`
  - Diff against `last_tweet_ids` -> find NEW tweets only
  - If any new tweet has `favorite_count >= alert_threshold` -> immediate alert
  - Save all new tweets to daily file `{monitor_id}/YYYY-MM-DD.json`
  - Update state file
- Send summary notification (if there are new notable tweets)
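The per-monitor steps above can be sketched as pure logic, with the actual search call, file writes, and alert delivery left to the surrounding cron job (`run_monitor` is an illustrative name, not part of the skill):

```python
from datetime import datetime, timezone

def run_monitor(monitor, state, tweets, now=None):
    """One cron pass for a single monitor: filter, diff, alert check, state update."""
    prev = state.get(monitor["id"], {"last_tweet_ids": [], "total_collected": 0})
    seen = set(prev["last_tweet_ids"])
    # keep only NEW tweets that clear the min_likes bar
    fresh = [t for t in tweets
             if t["id"] not in seen
             and t["favorite_count"] >= monitor.get("min_likes", 0)]
    # anything over alert_threshold warrants an immediate alert
    alerts = [t for t in fresh
              if t["favorite_count"] >= monitor.get("alert_threshold", float("inf"))]
    state[monitor["id"]] = {
        "last_run": (now or datetime.now(timezone.utc)).isoformat(),
        "last_tweet_ids": [t["id"] for t in tweets],
        "total_collected": prev["total_collected"] + len(fresh),
    }
    return fresh, alerts
```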
Phase 3: Trend Analysis
When user says "analyze twitter trend for [keyword]", "twitter trend report":
- Load all saved daily files from `{monitor_id}/`
- Group tweets by week
- For each week, extract:
  - Total tweet count + total engagement
  - Top 5 tweets by likes
  - Dominant themes (use LLM to categorize)
  - New authors that appeared
  - Sentiment shift
- Generate a week-by-week narrative
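The week-grouping step can use ISO calendar weeks, which handle year boundaries cleanly. A sketch assuming each saved tweet carries an ISO-format `date` field (`group_by_week` is an illustrative name):

```python
from collections import defaultdict
from datetime import date

def group_by_week(tweets):
    """Bucket tweets into 'YYYY-Www' ISO-week keys."""
    weeks = defaultdict(list)
    for t in tweets:
        y, w, _ = date.fromisoformat(t["date"][:10]).isocalendar()
        weeks[f"{y}-W{w:02d}"].append(t)
    return dict(weeks)
```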
Commands
| User Says | Agent Does |
|---|---|
| `/twitter-intel [keyword]` | Search + filter + report (Top, 200 tweets) |
| `/twitter-intel "[phrase]" --latest` | Search Latest mode |
| `monitor "[keyword]" on twitter` | Add to monitoring config |
| `twitter intel status` | Show all active monitors + last run |
| `twitter trend report [keyword]` | Analyze saved data, generate trend narrative |
| `refresh twitter cookies` | Guide user through cookie refresh |
Technical Notes
- SearchTimeline requires POST (GET returns 404); handled by `rnet_twitter.py`
- GraphQL query IDs rotate; if search returns 404, re-extract them from `https://abs.twimg.com/responsive-web/client-web/main.*.js`
- Rate limits: ~300 requests per 15-minute window. At 20 tweets per page, 200 tweets = 10 requests. Safe for cron every 4 hours.
- Cookie lifetime: `auth_token` expires after ~2 weeks. Monitor for 403 errors.
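The rate-limit arithmetic is worth encoding so a config change can be checked before it burns a window. A sketch; the 20-per-page and 300-per-window figures come straight from the notes above, and both function names are illustrative:

```python
import math

PAGE_SIZE = 20        # tweets returned per SearchTimeline request
WINDOW_LIMIT = 300    # approx. requests per 15-minute window

def requests_needed(tweet_count, page_size=PAGE_SIZE):
    """Requests one search of tweet_count tweets will consume."""
    return math.ceil(tweet_count / page_size)

def fits_in_window(monitor_counts, limit=WINDOW_LIMIT):
    """Check whether one cron pass over all monitors stays inside a window."""
    return sum(requests_needed(c) for c in monitor_counts) <= limit
```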
