Smart Auto Updater Litiao
v1.0.0
Smart auto-updater with AI-powered impact assessment. Checks updates, analyzes changes, evaluates system impact, and decides whether to auto-update or just r...
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
Scanner: OpenClaw
Verdict: Benign (high confidence)

Purpose & Capability
The name and description (auto-updater) align with the instructions: the skill calls OpenClaw/ClawHub commands, fetches changelogs/diffs, runs LLM analysis, and makes update decisions. Optional notification webhooks and scheduling (cron) are reasonable for this purpose.
Instruction Scope
SKILL.md and references are instruction-only and stay within updater scope (check updates, analyze changelogs, decide, report). Minor concerns: (1) references include webhook env vars and log file paths — these are expected but mean reports/changelogs may be sent/stored externally; (2) the risk-scoring examples contain inconsistent numeric/label mappings (some example scores and thresholds don't match the stated HIGH/MEDIUM/LOW boundaries), which could cause unexpected decisions unless clarified.
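The threshold inconsistency noted above matters because a score-to-label mapping silently drives the auto-update decision. A minimal sketch of a consistent mapping, using hypothetical threshold values (the skill's own docs disagree on the exact boundaries, so these numbers are illustrative, not the skill's):

```python
# Hypothetical thresholds; the skill's example tables use inconsistent
# boundaries, so these values are placeholders, not the real ones.
HIGH_THRESHOLD = 7    # assumption: scores >= 7 are HIGH
MEDIUM_THRESHOLD = 4  # assumption: scores >= 4 are MEDIUM

def risk_label(score: int) -> str:
    """Map a numeric risk score to a HIGH/MEDIUM/LOW label."""
    if score >= HIGH_THRESHOLD:
        return "HIGH"
    if score >= MEDIUM_THRESHOLD:
        return "MEDIUM"
    return "LOW"
```

Whatever boundaries the skill actually uses, they should be stated once and applied consistently; a reader should be able to check every example score against the same table.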
Install Mechanism
No install spec and no code files (instruction-only). Low installer risk — nothing is downloaded or written by the skill itself per metadata.
Credentials
No required env vars are declared in registry metadata, which is fine for optional config. The docs do, however, list many optional env vars (MODEL, AUTO_UPDATE, RISK_TOLERANCE, multiple webhook URLs, a log file path). Webhook URLs and similar channels can embed secrets and should be provided deliberately. The skill does not request unrelated credentials, so the requested envs are proportionate, but the user should treat them as sensitive.
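A minimal sketch of how such optional configuration might be read. The names MODEL, AUTO_UPDATE, and RISK_TOLERANCE come from the skill's docs; WEBHOOK_URL and all defaults here are illustrative assumptions, not the skill's actual names or defaults:

```python
import os

def load_config() -> dict:
    """Read optional env vars with fallback defaults (defaults are assumptions)."""
    return {
        "model": os.environ.get("MODEL", "default-model"),
        "auto_update": os.environ.get("AUTO_UPDATE", "false").lower() == "true",
        "risk_tolerance": os.environ.get("RISK_TOLERANCE", "low"),
        # Webhook URLs can embed tokens: treat as secrets, never log them.
        "webhook_url": os.environ.get("WEBHOOK_URL"),
    }
```

The point of the defaults-off pattern (AUTO_UPDATE defaulting to false) is that a missing or misspelled variable fails safe rather than silently enabling automatic updates.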
Persistence & Privilege
always:false and model invocation enabled (normal). The skill recommends scheduling via cron (persistent execution) but does not request forced always-on presence or modification of other skills/configs.
Assessment
This skill appears to do what it says, but take these precautions before enabling it:
(1) Run in dry-run mode and review several reports to confirm the AI scoring matches your expectations (there are small inconsistencies in the example scoring tables).
(2) Supply webhook URLs only for trusted endpoints, and avoid posting full changelogs containing sensitive data to public channels.
(3) Ensure log files and the ~/.config smart env file have restrictive permissions so secrets aren't exposed.
(4) Schedule runs in isolated sessions or a staging environment first to confirm auto-update behavior.
(5) If you rely on the LLM analysis, review and, if needed, tune the provided prompts and thresholds to match your risk appetite.
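The advice to keep log and env files restrictive can be sketched as follows; the helper names are hypothetical, not part of the skill:

```python
import os
import stat

def restrict_permissions(path: str) -> None:
    """Set owner-only read/write (mode 0600) on a secrets or log file."""
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

def is_private(path: str) -> bool:
    """True if no group/other permission bits are set on the file."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & (stat.S_IRWXG | stat.S_IRWXO) == 0
```

Running a check like `is_private` over the skill's log file and env file before enabling webhooks is a cheap way to confirm secrets are not group- or world-readable.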
Latest version: vk977cv9mjyrvamzrsb9yjwfpz1833bd2
