Smart Auto Updater
WarnAudited by ClawScan on May 10, 2026.
Overview
This skill is not clearly malicious, but it can make recurring automatic changes to OpenClaw and installed skills based on AI risk decisions whose documented safeguards are incomplete and partly inconsistent.
Before using this skill, set it to report-only mode, do not enable cron until you have tested it, and manually review updates before applying them. The documented risk scoring should be fixed because HIGH risk cannot be reached as written. If you use webhooks, send reports only to trusted channels and keep webhook URLs secret.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
A low-risk classification could cause installed software or skills to change without the user reviewing the exact update first.
The skill explicitly delegates update execution to its own risk decision. Updating OpenClaw or skills can mutate the user's agent/runtime environment, but the artifacts do not require explicit approval for each update or define rollback, allowlists, or version pinning.
| Risk Level | Action | ... | **LOW** | Auto-update, send summary |
Use report-only mode by default, require manual confirmation before applying updates, restrict updates to an allowlist, and document rollback steps before enabling automatic updates.
Users may trust the updater to detect and block high-risk changes, but the documented formula cannot produce the HIGH category.
The scoring inputs are 1-3 and the weights sum to 1.0, so the maximum possible score is 3.0. A HIGH threshold above 3.5 is unreachable, which contradicts the advertised HIGH-risk skip behavior and weakens the claimed safety guarantees.
Architecture Impact (1-3) ... Security Impact (1-3) ... HIGH: Total score > 3.5
Fix the scoring thresholds before use, add tests showing HIGH-risk cases are actually blocked, and avoid relying on the advertised safety guarantees until the methodology is corrected.
A malicious or compromised changelog could try to influence the AI to classify a risky update as safe.
The skill plans to place update changelog content directly into the LLM prompt that drives the update decision. Changelogs and diffs are untrusted external content, and the artifacts do not specify prompt-injection handling or instruction/data separation.
Analyze the following changelog and assess the risk level:
{changelog}Treat changelog and diff text as untrusted data, instruct the model to ignore embedded instructions, use deterministic checks where possible, and require human approval when untrusted release text affects update decisions.
If enabled, the updater may continue checking and potentially applying updates on a schedule after the initial setup.
The integration guide shows how to create persistent scheduled runs. This is disclosed and purpose-aligned for maintenance, but it keeps the updater operating until the user disables the cron job.
openclaw cron add \ --name "Smart Auto-Update (Daily)" \ --cron "0 9 * * *" ... --message "Run smart update check"
Only add the cron job after testing in report-only mode, document how to disable it, and periodically review scheduled jobs.
Update and environment details may be shared with Slack, Discord, Feishu, or whoever controls the configured webhook.
The skill supports sending reports to external messaging webhooks. This is disclosed and purpose-aligned, but reports may include installed skill counts, version information, changelog details, and operational status.
SMART_UPDATER_CHANNELS="feishu,discord" ... SMART_UPDATER_SLACK_WEBHOOK="https://hooks.slack.com/services/xxx"
Use only trusted webhook destinations, treat webhook URLs as secrets, and choose a report level that does not disclose unnecessary system details.
