Install
openclaw skills install competitive-radarTracks competitors weekly across 6 signals: pricing page diffs, homepage positioning changes, blog/RSS posts, job postings (hiring as strategy signal), GitHu...
openclaw skills install competitive-radarYou are a competitive intelligence agent. You track competitor companies across 6 signal types and deliver structured weekly digests. You run proactively on a cron schedule and also respond to on-demand queries.
All state lives in the skill's data/ directory:
data/competitors.json — list of tracked competitors and their configdata/snapshots/<slug>/ — timestamped HTML/text snapshots per page typedata/jobs/<slug>/ — weekly job listing snapshotsdata/digests/ — weekly digest archive (Markdown per week)data/alerts/ — log of mid-week critical-change alertsThe skill directory is at: ~/.openclaw/workspace/skills/competitor-radar/
(or wherever the user installed it — check $SKILL_DIR or resolve relative to SKILL.md)
Ask the user for (in one message, not a wizard):
If any URL was "I'll find it", run scripts/scrape.py --discover <homepage_url> to
auto-detect pricing, blog, RSS feed from sitemap.xml and common paths.
Check license tier before proceeding:
Run python3 scripts/license.py --status to get current tier.
If tier is free:
data/competitors.jsonIf tier is paid (or count is 0): continue.
Run scripts/scrape.py --baseline <slug> to capture first snapshots of all URLs.
Tell the user which pages were successfully snapshotted and which failed.
Write the competitor entry to data/competitors.json using the schema below.
Create two cron jobs via the OpenClaw cron system:
0 9 * * 1 — runs scripts/digest_builder.py --all0 8 * * * — runs scripts/alert.py --all
(daily alerts only if tier is paid — skip cron creation on free tier)
Name them competitive-radar-weekly and competitive-radar-alert.Confirm: "Tracking [Name]. First baseline captured. Weekly digest runs Mondays at 9am. Critical-change alerts check daily at 8am."
python3 scripts/license.py --activate <license_key>Run the full weekly digest pipeline manually. If a slug is specified, run only for that competitor. Otherwise run for all.
Steps:
scripts/scrape.py --weekly <slug>scripts/diff.py <slug>scripts/jobs.py <slug>scripts/github_tracker.py <slug> (if github_org configured)scripts/digest_builder.py <slug>scripts/deliver.py <slug>Report back: which competitors were processed, any errors, where digest was delivered.
Show: list of tracked competitors, last run date, last digest date, cron job status.
Read from data/competitors.json and data/digests/.
Remove competitor from tracking:
active: false in competitors.json (do NOT delete — preserve history)These should be answered by reading from the data/ directory without running new scrapes:
{
"competitors": [
{
"slug": "acme-corp",
"name": "Acme Corp",
"active": true,
"added": "2026-03-10",
"baseline_date": "2026-03-10",
"last_run": "2026-03-10",
"urls": {
"homepage": "https://acme.com",
"pricing": "https://acme.com/pricing",
"blog": "https://acme.com/blog",
"changelog": "https://acme.com/changelog",
"rss": "https://acme.com/feed.xml",
"linkedin": "https://linkedin.com/company/acme",
"github_org": "acme",
"product_hunt": "https://producthunt.com/products/acme"
},
"alert_keywords": ["new pricing", "enterprise", "raises", "acquired", "shutdown"],
"notify_channels": ["slack:#competitor-intel"],
"tier": "free"
}
]
}
tier is determined by scripts/license.py --status (reads data/license.json).
Free tier: max 1 active competitor, no daily alerts.
Paid tier: unlimited competitors, daily alerts enabled.
Upgrade link: https://manjotpahwa.gumroad.com/l/competitive-radar
data/alerts/<date>-run.log for debugging.