Designer Intelligence Station
v2.1.8 · Designer intelligence collection tool. Monitors 46 public sources (AI/hardware/mobile/design), dynamic quality-based filtering v2.1.8, generates structured d...
⭐ 3 · 296 · 0 current · 0 all-time
by LIGO@15217172098
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign · high confidence
Purpose & Capability
The name and description (design intelligence collector) match the files and runtime instructions: multiple fetchers (web/rss/api), a source list, caching, filtering, and report templates. No unrelated credentials, binaries, or cloud access are requested.
Instruction Scope
SKILL.md and INSTALL.md limit behavior to fetching public pages, local caching, filtering, and generating Markdown reports. They explicitly instruct reviewing data/default_sources.json and performing manual test runs before enabling automation. Note: the docs mention an optional 'browser' tool for JS rendering and an 'auto-send'/cron workflow; enabling automation or optional browser rendering expands what the agent will do and should be reviewed before use.
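Before enabling automation, it helps to see exactly which hosts the source list would touch. A minimal sketch, assuming a hypothetical schema for data/default_sources.json in which each entry carries a "url" field (the real file's schema may differ):

```python
import json
from urllib.parse import urlparse

# Hypothetical excerpt of data/default_sources.json; the shipped file's
# schema may differ, so adapt the field names after inspecting it.
sample = json.loads("""
[
  {"name": "Example Blog", "url": "https://blog.example.com/feed.xml", "type": "rss"},
  {"name": "Example API",  "url": "https://api.example.org/v1/news",   "type": "api"}
]
""")

# Collect the distinct hosts the skill would contact, for manual review.
domains = sorted({urlparse(entry["url"]).netloc for entry in sample})
print(domains)  # ['api.example.org', 'blog.example.com']
```

A quick pass over this list makes it obvious if a domain you never intended to poll has crept into the defaults.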
Install Mechanism
The install spec bundles no untrusted downloads; a standard pip install -r requirements.txt is recommended. The repo contains many scripts and cached data but no external arbitrary downloader in the manifest. pip installs from PyPI are normal, but packages can execute install-time code; inspect and lock requirements.txt if you want stricter controls.
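One concrete check before installing is whether every requirement is pinned to an exact version, since unpinned entries are the usual opening for supply-chain drift. A minimal sketch, assuming hypothetical requirements.txt contents (substitute the file shipped in the repo):

```python
# Hypothetical requirements.txt contents; replace with the repo's real file.
requirements = """\
requests==2.31.0
beautifulsoup4==4.12.3
feedparser
"""

# Flag any non-comment requirement that is not pinned to an exact version.
unpinned = [
    line.strip() for line in requirements.splitlines()
    if line.strip() and not line.lstrip().startswith("#") and "==" not in line
]
print(unpinned)  # ['feedparser']
```

Anything this flags can then be pinned (and, for stricter setups, hash-locked) before running pip.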
Credentials
The skill declares no required environment variables, no secrets, and no config paths outside its workspace. That aligns with its stated purpose of scraping public sources. A historical note: the changelog references optional integrations with other skills, but none of those are required environment variables here.
Persistence & Privilege
always:false and user-invocable:true (default) — the skill does not force inclusion or request elevated system privileges. It stores data under its own directories (data/ and temp/). The main remaining persistence action is optional cron configuration, which is user-controlled.
Assessment
This skill appears coherent for local intelligence collection, but follow these safety steps before enabling automation:
1) Inspect data/default_sources.json and tools/web_fetcher.py (and web_fetcher_standalone.py) to confirm only desired domains are listed and that the fetch logic uses requests/bs4 rather than hidden remote endpoints.
2) Open and read execute_daily.sh and tools/check_dependencies.py to see any auto-install behavior (pip install -r requirements.txt will run package installers).
3) Run a manual test (./execute_daily.sh) in an isolated environment (container/VM) and monitor outgoing network connections (netstat/tcpdump) to ensure only expected domains are contacted.
4) Keep cron disabled until you verify that the outputs and caches (data/cache/, temp/) are acceptable.
5) Optionally pin and audit requirements.txt packages before installing to reduce supply-chain risk.
If you need, I can summarize the files of highest interest (execute_daily.sh, web_fetcher*.py, check_dependencies.py) and point to exact lines to review.
Like a lobster shell, security has layers: review code before you run it.
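The monitored test run in step 3 produces a list of contacted hosts; comparing that list against the domains you approved in step 1 is mechanical. A minimal sketch with hypothetical data (the allowlist would come from data/default_sources.json, and the observed hosts from parsing netstat/tcpdump output):

```python
# Hypothetical data: allowlisted domains approved during source review, and
# hosts observed during a monitored test run (e.g. parsed from tcpdump output).
allowed = {"blog.example.com", "api.example.org", "pypi.org", "files.pythonhosted.org"}
observed = ["blog.example.com", "api.example.org", "telemetry.example.net"]

# Any host outside the allowlist deserves a closer look before enabling cron.
unexpected = sorted(set(observed) - allowed)
print(unexpected)  # ['telemetry.example.net']
```

An empty result here, over several manual runs, is a reasonable bar to clear before scheduling the skill with cron.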
latest: vk9714wzv69aamxpd8p3xr3z0gd8451v9
