AI Launch Pipeline
Pass. Audited by ClawScan on May 11, 2026.
Overview
The skill appears to do what it claims, but it will make external web requests, can drive a browser for screenshots, and saves monitoring outputs locally.
Before installing:
- review the RSS feed configuration;
- install only the Python dependencies you need;
- start with `--skip-screenshot` if you want less browser activity;
- enable cron scheduling only if you want the pipeline to keep running automatically.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Running the pipeline can contact public RSS feeds, DuckDuckGo, and product/news pages, and may open pages in a headless browser for screenshots.
The documented workflow intentionally performs external feed fetching, web search enrichment, and optional screenshot capture; these are purpose-aligned but involve third-party web access.
RSS monitoring → product search → screenshot capture → trend analysis
Run it only when you are comfortable with those external requests, and use `--skip-screenshot` if you do not want browser-based page visits.
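The feed-monitoring step described in this finding can be sketched with the standard library alone. The feed XML, item fields, and `parse_feed` helper below are illustrative, not taken from the skill; a real run would fetch the configured feed URLs over HTTP rather than parse an inline sample.

```python
import xml.etree.ElementTree as ET

# Inline sample standing in for a fetched RSS 2.0 document.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Launches</title>
  <item><title>Widget 2.0</title><guid>w2</guid><link>https://example.com/w2</link></item>
  <item><title>Gadget 1.1</title><guid>g1</guid><link>https://example.com/g1</link></item>
</channel></rss>"""

def parse_feed(xml_text):
    """Extract id/title/link for each <item> in an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    return [
        {
            "id": item.findtext("guid"),
            "title": item.findtext("title"),
            "link": item.findtext("link"),
        }
        for item in root.iter("item")
    ]

print([entry["id"] for entry in parse_feed(SAMPLE_FEED)])  # ['w2', 'g1']
```

Parsing is shown on a local string deliberately, so the structure can be reviewed without triggering any of the external requests the finding warns about.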
You may need to install Python packages yourself, which means package source and version selection are your responsibility.
The skill relies on manually installed packages rather than an install spec or lockfile; this is disclosed and normal for a Python utility, but dependency provenance is left to the user.
- PyYAML: `pip install pyyaml`
- Playwright (optional, for screenshots): `pip install playwright && playwright install chromium`
Install dependencies from trusted package indexes, consider pinning versions, and only install Playwright if you need screenshots.
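One way to act on the pinning recommendation is a small requirements file. The version numbers below are illustrative only, not tested against this skill; check the package index for current releases before pinning.

```
pyyaml==6.0.2
playwright==1.47.0   # optional, install only if screenshots are needed
```

Installing with `pip install -r requirements.txt` then keeps package selection explicit and reviewable.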
Generated files may accumulate records of monitored launches and external page content on your machine.
The skill persists retrieved launch data, deduplication state, screenshots, and reports locally; this is expected for monitoring, but the stored content originates from external feeds and pages.
```
data/seen_ids.json                   # dedup state
enriched_launches.json
screenshots/*.png
analysis/launch_analysis_report.md
```
Keep output directories scoped to the skill, review generated reports before relying on them, and delete local outputs if you no longer need the monitoring history.
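The `data/seen_ids.json` dedup state implies a load/filter/save cycle around each run. A minimal sketch, assuming the file holds a flat JSON list of IDs; the helper names are hypothetical and only the path comes from the report.

```python
import json
from pathlib import Path

STATE_FILE = Path("data/seen_ids.json")  # dedup state path listed in the report

def load_seen(path=STATE_FILE):
    """Return the set of previously seen launch IDs (empty on first run)."""
    if path.exists():
        return set(json.loads(path.read_text()))
    return set()

def filter_new(launches, seen):
    """Keep only launches whose id has not been recorded yet."""
    return [l for l in launches if l["id"] not in seen]

def save_seen(seen, path=STATE_FILE):
    """Persist the updated ID set so the next run skips known launches."""
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(sorted(seen)))
```

Because the stored IDs accumulate across runs, deleting `data/seen_ids.json` is what resets the monitoring history the finding describes.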
If you enable the cron example, the pipeline may run daily and continue making web requests and writing outputs.
The skill documents an optional recurring automation path; it is explicit and purpose-aligned, but users should remember that a cron schedule continues until removed.
Pair with OpenClaw cron for automated daily runs:

```
schedule: { kind: "cron", expr: "0 8 * * *" }
```

Only enable scheduling intentionally, and keep track of how to disable the cron job if you no longer want recurring monitoring.
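The `0 8 * * *` expression in the example schedules a run at 08:00 every day. A small sketch decoding the five standard cron fields; the `describe` helper is illustrative and not part of the skill.

```python
# Standard five-field cron order: minute, hour, day of month, month, day of week.
FIELDS = ["minute", "hour", "day_of_month", "month", "day_of_week"]

def describe(expr):
    """Map a five-field cron expression to named fields."""
    parts = expr.split()
    assert len(parts) == 5, "standard cron expressions have five fields"
    return dict(zip(FIELDS, parts))

print(describe("0 8 * * *"))
# {'minute': '0', 'hour': '8', 'day_of_month': '*', 'month': '*', 'day_of_week': '*'}
```

Reading the expression this way makes it easy to confirm what cadence you are committing to before enabling the schedule.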
