## Install

```bash
openclaw skills install skroller
```
Automate scraping and filtering of public social media posts with keyword search, engagement filters, deduplication, and export to JSON, CSV, or notes apps.
Automate the collection and analysis of publicly available social media content. This skill handles content collection, filtering, and export with compliance safeguards.
Before using this skill:

- **Data protection:** collect only publicly available content, and comply with each platform's Terms of Service and applicable data-protection regulations.
| Platform | Approach | Notes |
|---|---|---|
| Twitter/X | Browser automation | Use Playwright; handle login if needed |
| Instagram | Browser automation | Rate-limited; use sparingly |
| TikTok | Browser automation | Heavy JS; may need longer waits |
| Reddit | API + browser | Prefer API where possible |
| LinkedIn | Browser automation | Login required for most content |
| YouTube | API + browser | Comments via browser, videos via API |
| Product Hunt | Browser automation | Product discovery, launches |
| Medium | Browser automation | Articles, blog posts |
| GitHub | Browser automation | Issues, discussions, repos |
| Pinterest | Browser automation | Visual content, pins |
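Under the hood, the browser-automation rows boil down to a scroll-and-extract loop. Here is a minimal sketch of that pattern with Playwright; the `article` selector, scroll step, and delay values are illustrative assumptions, not what `skroller.js` actually uses (see `assets/selector-cheatsheet.md` for the real selectors):

```javascript
// Minimal scroll-and-extract sketch with Playwright. The 'article'
// selector, scroll step, and delay are illustrative assumptions,
// not the values skroller.js actually uses.
const { chromium } = require('playwright');

async function scrollAndExtract(url, limit = 50, delayMs = 1500) {
  const browser = await chromium.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'domcontentloaded' });

  const posts = new Map(); // keyed by URL to dedupe across scroll passes
  while (posts.size < limit) {
    const batch = await page.$$eval('article', nodes =>
      nodes.map(n => ({
        text: n.innerText,
        url: n.querySelector('a')?.href ?? null,
      }))
    );
    const before = posts.size;
    for (const p of batch) if (p.url) posts.set(p.url, p);
    if (posts.size === before) break;   // feed stopped yielding new items
    await page.mouse.wheel(0, 4000);    // scroll roughly one viewport
    await page.waitForTimeout(delayMs); // pace requests politely
  }
  await browser.close();
  return [...posts.values()].slice(0, limit);
}
```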
```bash
# Scroll Twitter feed, extract 50 posts about "AI"
node scripts/skroller.js --platform twitter --query "AI" --limit 50 --output posts.json

# Monitor Reddit for brand mentions, export to Markdown
node scripts/skroller.js --platform reddit --query "mybrand" --format markdown --output mentions.md

# Scroll competitor Instagram, capture top posts by engagement
node scripts/skroller.js --platform instagram --profile @competitor --min-likes 1000 --output competitor-posts.json
```
`scripts/skroller.js` - Main scrolling engine

A JavaScript script that uses Playwright for browser automation.
```bash
# Using npm scripts (recommended)
npm run scroll -- --platform twitter --query "AI" --limit 50 --output posts.json

# Direct execution
node scripts/skroller.js --platform twitter --query "AI" --limit 50 --output posts.json
```
Options:

- `--platform` - Target platform (required): twitter, reddit, instagram, tiktok, linkedin, youtube, producthunt, medium, github, pinterest
- `--query` - Search keyword/hashtag
- `--profile` - Specific profile to scroll
- `--limit` - Max posts to scrape (default: 50)
- `--min-likes` - Filter by minimum engagement
- `--format` - Output format: json, csv, markdown (default: json)
- `--output` - Output file path
- `--screenshot` - Capture screenshots for debugging
- `--dedupe` - Skip previously seen posts

`scripts/feed-digest.js` - Generate digests

Creates summary digests from exported post data.
```bash
npm run digest -- --input posts.json --output digest.md
# or: node scripts/feed-digest.js --input posts.json --output digest.md
```
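A digest is just a transform over the exported JSON. A minimal sketch of the idea, assuming the example output schema shown later in this document; ranking by likes and the top-10 cutoff are illustrative choices, not necessarily what `feed-digest.js` does:

```javascript
// Minimal digest sketch: read exported posts, rank by likes, emit a
// Markdown summary. Field names follow the example JSON output shown
// later in this document; the top-10 cutoff is an illustrative choice.
const fs = require('fs');

const { platform, query, posts } = JSON.parse(fs.readFileSync('posts.json', 'utf8'));
const top = [...posts].sort((a, b) => b.likes - a.likes).slice(0, 10);

const lines = [`## ${platform} digest: "${query}"`, ''];
for (const p of top) {
  lines.push(`### ${p.author} - ${p.likes} likes`, '', p.text, '', `[View post](${p.url})`, '');
}
fs.writeFileSync('digest.md', lines.join('\n'));
```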
`scripts/export-to-notes.js` - Unified note app exporter

Exports scraped posts to multiple note applications with a single command.
Supported apps: Bear, Obsidian, Notion, Apple Notes, Evernote, OneNote, Google Keep, Roam Research, Logseq, Anytype
```bash
# Using npm scripts (recommended)
npm run export -- --input posts.json --app obsidian --vault ~/Documents/Obsidian

# Direct execution
node scripts/export-to-notes.js --input posts.json --app bear --tags "ai,research"
node scripts/export-to-notes.js --input posts.json --app notion --api-key $NOTION_TOKEN
node scripts/export-to-notes.js --input posts.json --app apple-notes
node scripts/export-to-notes.js --input posts.json --app evernote --output export.enex
node scripts/export-to-notes.js --input posts.json --app one-note --access-token $MS_TOKEN
node scripts/export-to-notes.js --input posts.json --app keep --output keep.html
node scripts/export-to-notes.js --input posts.json --app roam --output roam.md
node scripts/export-to-notes.js --input posts.json --app logseq --vault ~/Documents/Logseq
node scripts/export-to-notes.js --input posts.json --app anytype --output anytype.md
node scripts/export-to-notes.js --input posts.json --app obsidian --dry-run
```
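For vault-based targets like Obsidian and Logseq, export can amount to writing one Markdown note per post into the vault. A minimal sketch under that assumption; the filename scheme and frontmatter fields are illustrative, not `export-to-notes.js`'s actual layout:

```javascript
// Minimal vault-export sketch: one Markdown note per post. The
// filename scheme and frontmatter fields are illustrative assumptions,
// not export-to-notes.js's actual layout.
const fs = require('fs');
const path = require('path');

function exportToVault(inputFile, vault, folder = 'Skroller') {
  const { posts } = JSON.parse(fs.readFileSync(inputFile, 'utf8'));
  const dir = path.join(vault, folder);
  fs.mkdirSync(dir, { recursive: true });

  for (const p of posts) {
    const body = [
      '---',
      `author: "${p.author}"`,
      `likes: ${p.likes}`,
      `source: ${p.url}`,
      '---',
      '',
      p.text,
    ].join('\n');
    // Keying files by post ID keeps re-exports idempotent.
    fs.writeFileSync(path.join(dir, `${p.id}.md`), body);
  }
}

exportToVault('posts.json', path.join(process.env.HOME || '', 'Documents/Obsidian'));
```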
Configuration: set defaults in `.skroller-config.json`:

```json
{
  "export": {
    "defaultApp": "obsidian",
    "vault": "~/Documents/Obsidian",
    "folder": "Skroller",
    "notionDatabaseId": "<db-id>"
  }
}
```
Requirements by app:

- Some targets need a helper CLI installed first (e.g. `go install github.com/tylerwince/grizzly/cmd/grizzly@latest`)

`.skroller-config.json` - Store default settings:
```json
{
  "defaultLimit": 50,
  "scrollDelayMs": 1500,
  "userAgent": "Mozilla/5.0 ...",
  "platforms": {
    "twitter": {
      "loginRequired": false,
      "rateLimit": "100 requests/hour"
    },
    "instagram": {
      "loginRequired": true,
      "rateLimit": "50 requests/hour"
    }
  },
  "export": {
    "defaultFormat": "json",
    "includeImages": true,
    "includeMetrics": true
  }
}
```
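One way scripts can resolve these settings is by layering CLI flags over the config file over built-in defaults. A sketch of that precedence; the built-in values and merge order are assumptions, not documented behavior:

```javascript
// Minimal config-resolution sketch: built-in defaults, overridden by
// .skroller-config.json, overridden by CLI flags. The built-in values
// here are illustrative assumptions.
const fs = require('fs');

const BUILTINS = { defaultLimit: 50, scrollDelayMs: 1500 };

function loadConfig(cliOverrides = {}) {
  let fileConfig = {};
  try {
    fileConfig = JSON.parse(fs.readFileSync('.skroller-config.json', 'utf8'));
  } catch {
    // Missing or unparsable file: fall back to built-ins.
  }
  return { ...BUILTINS, ...fileConfig, ...cliOverrides };
}

// CLI flags win: --limit 20 beats defaultLimit in the file.
const config = loadConfig({ defaultLimit: 20 });
```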
Some platforms require login. Store credentials securely:
```bash
# For platforms requiring auth, set environment variables
export SKROLLR_TWITTER_COOKIE="<auth cookie>"
export SKROLLR_INSTAGRAM_USER="<username>"
export SKROLLR_INSTAGRAM_PASS="<password>"
```
**Security note:** never commit auth files. Use `.env` with `.gitignore`.
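A session cookie from the environment can be injected into the Playwright context before navigation. A minimal sketch, assuming `SKROLLR_TWITTER_COOKIE` holds the session token; the cookie name and domain are guesses about the platform's session handling:

```javascript
// Minimal auth sketch: seed a Playwright context with a session
// cookie taken from the environment. The cookie name and domain
// below are assumptions, not verified platform details.
const { chromium } = require('playwright');

async function authedContext() {
  const browser = await chromium.launch();
  const context = await browser.newContext();
  const token = process.env.SKROLLR_TWITTER_COOKIE;
  if (token) {
    await context.addCookies([{
      name: 'auth_token', // assumed session cookie name
      value: token,
      domain: '.x.com',   // assumed cookie domain
      path: '/',
      httpOnly: true,
      secure: true,
    }]);
  }
  return context;
}
```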
Example JSON output:

```json
{
  "platform": "twitter",
  "query": "AI",
  "scrapedAt": "2026-03-14T20:30:00Z",
  "posts": [
    {
      "id": "1234567890",
      "author": "@username",
      "text": "Post content here...",
      "timestamp": "2026-03-14T18:00:00Z",
      "likes": 150,
      "retweets": 42,
      "replies": 12,
      "url": "https://twitter.com/...",
      "media": ["image1.jpg"],
      "hashtags": ["#AI", "#ML"]
    }
  ]
}
```
Example Markdown output:

```markdown
## Twitter Posts: "AI" (2026-03-14)

### @username - 150 likes

Post content here...

[View post](https://twitter.com/...)

---
```

Query syntax (a local-matching sketch follows the workflow examples below):

- `"exact phrase"` - match an exact phrase
- `AI AND (startup OR venture)` - boolean operators
- `AI -crypto` - exclude a term

Filters:

- `--min-likes 100` - minimum likes
- `--min-shares 500` - minimum shares
- `--date-from 2026-03-14` - posts on or after a date
- `--date-from 2026-01-01 --date-to 2026-01-31` - a date range
- `--delay 2000` - delay between scroll steps (ms)

Troubleshooting:

- If scraping returns fewer posts than expected, lower `--limit` or check for login requirements
- Use `--screenshot` to debug what's visible
- If pages load slowly or items are missed, increase `--delay`

Per-app export quick reference: `export-to-notes.js --app bear`, `export-to-notes.js --app obsidian`, `export-to-notes.js --app notion` (full list above).

```bash
# Scroll and export to Obsidian in one command
node scripts/skroller.js --platform twitter --query "AI" --limit 20 --output ai.json && \
node scripts/export-to-notes.js --input ai.json --app obsidian --vault ~/Documents/Obsidian

# Scroll and export to Notion
node scripts/skroller.js --platform reddit --query "startups" --limit 30 --output startups.json && \
node scripts/export-to-notes.js --input startups.json --app notion --api-key $NOTION_TOKEN

# Use npm scripts for cleaner commands
npm run scroll -- --platform twitter --query "tech" --limit 25 --output tech.json
npm run export -- --input tech.json --app bear --tags "tech,research"
```
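To make the query operators concrete, here is a minimal local-matching sketch, assuming a simplified grammar: quoted phrases, `-term` exclusion, and `AND`/`OR` at a single level with grouping ignored. The skill's real parser may differ:

```javascript
// Minimal query-matching sketch: quoted phrases, -term exclusion,
// AND/OR at one level. Parentheses are stripped, so grouping is
// ignored; this illustrates the operators, not the real parser.
function matches(text, query) {
  const t = text.toLowerCase();
  // Each OR alternative may match on its own.
  return query.split(/\s+OR\s+/).some(alt =>
    // Inside an alternative, every term (AND or plain space) must hold.
    alt.split(/\s+AND\s+|\s+/).every(raw => {
      const term = raw.replace(/[()"]/g, '').toLowerCase();
      if (!term) return true;
      if (term.startsWith('-')) return !t.includes(term.slice(1)); // exclusion
      return t.includes(term);
    })
  );
}

console.log(matches('new ai venture fund', 'AI -crypto')); // true
console.log(matches('crypto ai token', 'AI -crypto'));     // false
```

Note that this sketch uses substring matching, so short terms like `ai` can over-match; a word-boundary regex would tighten it.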
- `references/platform-details.md` - Platform-specific selectors and quirks
- `references/rate-limits.md` - Rate limit guidelines per platform
- `assets/selector-cheatsheet.md` - CSS selectors for each platform