AI Intelligence Hub - Real-time Model Capability Tracking
Review
Audited by ClawScan on May 10, 2026.
Overview
The skill does not show malware, but it overstates its real-time benchmark capability and includes automation examples that can change OpenClaw model settings without a review step.
Use this skill cautiously as a recommendation aid, not as an authoritative real-time benchmark source. Do not enable the auto-configuration or budget-switching examples until you have verified the data source, validated the selected model, and know how to revert your OpenClaw model settings.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Users may trust recommendations as current leaderboard intelligence when they are actually based on hardcoded sample data, and may change model routing or spending based on inaccurate information.
The main data-fetching implementation says the real HuggingFace parsing is still TODO and returns mock data, while the skill is described as real-time benchmark tracking.
# TODO: The real implementation needs to parse data from the HuggingFace Space ... # For now, provide mock data ... mock_data = {
Label the skill as using sample/offline data until real fetching is implemented, include verifiable source timestamps, and avoid unsupported cost-saving claims.
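One way to satisfy that recommendation is to make the data source self-describing. The sketch below is hypothetical (the function name `fetch_benchmarks` and the payload shape are assumptions, not the skill's actual API): it tags every response with an explicit `source` flag and a UTC timestamp so callers can refuse to act on sample data.

```python
from datetime import datetime, timezone

def fetch_benchmarks():
    """Hypothetical fetcher: returns sample data until real HuggingFace
    parsing is implemented, and labels it as such so callers can tell."""
    mock_data = {"model-a": {"score": 88.7}, "model-b": {"score": 82.1}}  # placeholder values
    return {
        "source": "sample",  # loud offline/sample flag, not "live"
        "fetched_at": datetime.now(timezone.utc).isoformat(),
        "models": mock_data,
    }

result = fetch_benchmarks()
```

A caller can then gate any routing or spend decision on `result["source"] == "live"`, which fails closed while the fetcher still returns mock data.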
Future OpenClaw runs could use an unintended, lower-quality, or invalid model setting until the user notices and reverts it.
The example automatically changes the default OpenClaw model based on command output, without a confirmation, validation, or rollback step.
EFFICIENT_MODEL=$(python3 skills/model-benchmarks/scripts/run.py recommend --task general --sort efficiency | head -1) ... openclaw config set agents.defaults.model.primary "$EFFICIENT_MODEL"
Require explicit approval before changing global model configuration, parse structured JSON output, validate the chosen model, show old/new values, and document rollback commands.
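A minimal sketch of that approval flow, assuming a structured JSON recommendation and a hypothetical allow-list (`ALLOWED_MODELS`, `propose_model_change`, and the confirmation callback are all illustrative names, not part of the skill):

```python
import json

ALLOWED_MODELS = {"model-a", "model-b"}  # hypothetical allow-list of known-good model ids

def propose_model_change(recommendation_json, current, confirm):
    """Parse structured JSON output, validate the recommended model,
    show old/new values plus a rollback command, and require explicit
    approval before returning the value to apply."""
    rec = json.loads(recommendation_json)
    new_model = rec.get("model")
    if new_model not in ALLOWED_MODELS:
        raise ValueError(f"unrecognized model: {new_model!r}")
    prompt = (
        f"current: {current}  ->  proposed: {new_model}\n"
        f"rollback: openclaw config set agents.defaults.model.primary {current!r}\n"
        "Apply this change?"
    )
    # confirm() is an injected yes/no callback (e.g. an interactive prompt);
    # nothing is applied unless it returns True.
    return new_model if confirm(prompt) else None
```

Compared with piping `head -1` straight into `openclaw config set`, this fails loudly on unexpected output, and the printed rollback line gives the user the exact command to revert.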
If installed as a cron job, the skill will keep running on schedule and writing logs until the user removes it.
The skill suggests user-created scheduled execution for daily updates; it is disclosed, but it is persistent automation.
# Add this to your crontab to automatically optimize model selection ... python3 "$SKILL_DIR/scripts/run.py" fetch >> "$LOG_FILE" 2>&1
Only add the cron job if you want recurring execution, review the schedule, and set up log rotation or removal instructions.
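If recurring execution is wanted, the unbounded `>> "$LOG_FILE"` append can be replaced with rotation inside the script itself. A sketch using the standard library (the logger name and size limits are arbitrary choices, not the skill's configuration):

```python
import logging
from logging.handlers import RotatingFileHandler

def make_skill_logger(log_path):
    """Bounded log for a recurring job: at most 6 files of ~1 MiB each,
    so a forgotten cron entry cannot slowly fill the filesystem."""
    logger = logging.getLogger("model-benchmarks")
    logger.setLevel(logging.INFO)
    handler = RotatingFileHandler(log_path, maxBytes=1_048_576, backupCount=5)
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger
```

Removal then stays simple: delete the crontab line and the handful of rotated files, with no open-ended log to track down.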
It is harder to verify provenance or expected runtime requirements before running the included script.
The skill includes runnable code but lacks a declared source repository, homepage, install spec, or required Python binary declaration.
Source: unknown; Homepage: none; Install spec: none (instruction-only skill); Code file present: scripts/run.py
Publish a source repository/homepage, declare Python as a runtime requirement, and document exactly which optional tools such as jq, curl, or bash are needed for examples.
Model cost or routing information may be sent to Slack, and a leaked webhook URL could allow unauthorized posting to that Slack channel.
An optional example posts model cost-change alerts to a Slack webhook using a user-provided webhook URL.
curl -X POST -H 'Content-type: application/json' --data "{\"text\":\"🚨 AI Model Cost Alert: $COST_CHANGES\"}" "$SLACK_WEBHOOK_URL"
Store webhook URLs securely, limit what is included in alerts, and avoid posting sensitive usage or spend details unless the Slack workspace is appropriate for that data.
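One way to limit what an alert can leak is to whitelist fields before building the payload. This is an illustrative sketch (the `build_alert_payload` helper and the input record shape are assumptions): only model names and a cost direction survive into the Slack message, and the webhook URL is read from the environment rather than hardcoded.

```python
import json
import os

def build_alert_payload(cost_changes):
    """Keep Slack alerts coarse: model name and direction only;
    raw spend figures and account identifiers are dropped."""
    safe = [{"model": c["model"], "direction": c["direction"]} for c in cost_changes]
    return json.dumps({"text": f"AI model cost alert: {safe}"})

# Read the webhook from the environment or a secret store; never commit it.
webhook_url = os.environ.get("SLACK_WEBHOOK_URL")
```

A leaked webhook URL still allows unauthorized posting to the channel, so rotating the webhook if it is ever exposed remains necessary regardless of payload hygiene.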
