Skill

Monitor the Claude API for outages and latency spikes with rich Telegram alerts. Status monitoring, latency probes, and automatic recovery notifications.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
2 · 402 · 0 current installs · 0 all-time installs
by chapati (@chapati23)
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Requested binaries (python3, crontab, curl), Telegram tokens/IDs, and the OpenClaw gateway token/port are all used by the included scripts. MONITOR_MODEL/PROBE_MODEL/PROBE_AGENT_ID are used to target and tag probes and status filtering. No unrelated cloud credentials or services are requested.
Instruction Scope
Runtime instructions are scoped to (1) polling status.claude.com, (2) probing the local OpenClaw gateway, and (3) sending Telegram messages. The setup script installs cron jobs, writes a single env file under ~/.openclaw/skills/claude-watchdog/, and runs an initial check, all as described in SKILL.md. SKILL.md and the setup script also explain how to locate the OpenClaw gateway token (by reading ~/.openclaw/openclaw.json); the token is required for probes but is sensitive, so this step deserves the user's informed consent.
Install Mechanism
This skill is instruction-only: no external installs or downloads. setup.sh writes config and installs cron jobs; no third-party packages are fetched and no arbitrary downloads are performed. Cron-based persistence is the expected mechanism for periodic monitoring.
Credentials
Env vars requested map to the functionality (Telegram + gateway + probe/status tuning). Minor inconsistency: TELEGRAM_TOPIC_ID is declared as a required env in the registry metadata but treated as optional in SKILL.md and the scripts. PROBE_MODEL and PROBE_AGENT_ID also have sensible defaults in code despite being listed in the required envs. The OpenClaw gateway token is sensitive but justified by the probe design.
Persistence & Privilege
The skill does not request always:true and does not alter other skills' configuration. It installs user-level cron jobs and stores its own config/state under ~/.openclaw/skills/claude-watchdog/ with permissions set to 600 — this is a reasonable level of persistence for a monitoring tool.
Assessment
This skill appears to do what it claims, but review and confirm before installing:

  1. You will give the skill your OpenClaw gateway token (sensitive), which it stores in ~/.openclaw/skills/claude-watchdog/claude-watchdog.env. Make sure you are comfortable storing that token and that the file permissions remain restrictive (setup sets 600).
  2. The setup installs cron jobs that run every 15 minutes. Back up your existing crontab if you want to review changes first.
  3. The registry metadata marks TELEGRAM_TOPIC_ID (and some other vars) as required, but the scripts treat them as optional with defaults, so expect a minor metadata/documentation mismatch.
  4. The scripts only contact status.claude.com, your local OpenClaw gateway (localhost), and the Telegram Bot API. Verify you are comfortable with those endpoints receiving the minimal probe/status data.

If any of these points are concerning, inspect the three scripts directly and/or run setup.sh interactively and review the written env file before allowing cron installation.


Current version: v1.2.1
latest: vk973p98zkp4w3kt42bbkvbfrth826wrr

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🐕 Clawdis
Bins: python3, crontab, curl
Env: TELEGRAM_BOT_TOKEN, TELEGRAM_CHAT_ID, TELEGRAM_TOPIC_ID, OPENCLAW_GATEWAY_TOKEN, OPENCLAW_GATEWAY_PORT, MONITOR_MODEL, PROBE_MODEL, PROBE_AGENT_ID
Primary env: TELEGRAM_BOT_TOKEN

SKILL.md

Claude Watchdog 🐕

Monitor the Anthropic/Claude API for outages and latency spikes. Sends rich alerts to Telegram — no agent tokens consumed for status checks.

What It Does

Status Monitor (status-check.py)

  • Polls status.claude.com every 15 minutes via cron
  • Alerts with incident name, latest update text, per-component status
  • Tags incidents as "(not our model)" if e.g. Haiku is affected but you use Sonnet
  • Sends all-clear on recovery
  • Zero token cost
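
A minimal sketch of such a status check, assuming status.claude.com exposes the standard Statuspage /api/v2/summary.json endpoint (an assumption; the bundled status-check.py may parse a different feed, and `summarize_incidents` is an illustrative name):

```python
import json
import urllib.request

# Assumed Statuspage-style endpoint; not confirmed from the skill's code.
SUMMARY_URL = "https://status.claude.com/api/v2/summary.json"

def summarize_incidents(summary: dict, monitor_model: str = "sonnet") -> list[str]:
    """Return one alert line per incident, tagging ones that don't mention our model."""
    lines = []
    for inc in summary.get("incidents", []):
        name = inc.get("name", "")
        # Tag incidents whose name doesn't mention the model we monitor.
        tag = "" if monitor_model.lower() in name.lower() else " (not our model)"
        lines.append(f"{name}{tag}: {inc.get('status', 'unknown')}")
    return lines

def fetch_summary() -> dict:
    """Fetch the live summary (zero token cost: it's just a status page)."""
    with urllib.request.urlopen(SUMMARY_URL, timeout=10) as resp:
        return json.load(resp)
```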

Latency Probe (latency-probe.py)

  • Sends a minimal request through OpenClaw's local gateway every 15 minutes
  • Measures real end-to-end latency to Anthropic API
  • Maintains rolling baseline (median of last 20 samples)
  • Alerts with 🟡/🟠/🔴 severity based on spike magnitude
  • Sends all-clear when latency recovers
  • ~$0.000001 per probe
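
The probe pattern can be sketched as follows. The gateway route (/v1/chat/completions), payload shape, and Authorization header are assumptions modeled on OpenAI-style gateways, not taken from latency-probe.py; only the x-openclaw-agent-id header and default port come from this page:

```python
import json
import time
import urllib.request

def build_probe_request(port: int, token: str, model: str,
                        agent_id: str) -> urllib.request.Request:
    """Build a minimal one-token request aimed at the local gateway."""
    payload = {"model": model,
               "messages": [{"role": "user", "content": "ping"}],
               "max_tokens": 1}
    return urllib.request.Request(
        f"http://127.0.0.1:{port}/v1/chat/completions",  # assumed route
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",      # assumed auth scheme
                 "x-openclaw-agent-id": agent_id,         # header named in this doc
                 "Content-Type": "application/json"},
    )

def probe_latency(port: int = 18789, token: str = "", model: str = "openclaw",
                  agent_id: str = "main", timeout: float = 45.0) -> float:
    """Return wall-clock seconds for one end-to-end request."""
    start = time.monotonic()
    with urllib.request.urlopen(build_probe_request(port, token, model, agent_id),
                                timeout=timeout):
        pass
    return time.monotonic() - start
```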

Setup

Run the interactive setup script:

bash /path/to/skills/claude-watchdog/scripts/setup.sh

You'll need:

  1. Telegram Bot Token — from @BotFather
  2. Telegram Chat ID — send a message to your bot, then check https://api.telegram.org/bot<TOKEN>/getUpdates
  3. OpenClaw Gateway Token — run:
    python3 -c "from pathlib import Path; import json; print(json.load(open(Path.home() / '.openclaw/openclaw.json'))['gateway']['auth']['token'])"
    
  4. Gateway Port — default 18789

The setup script writes config, installs cron jobs, and runs an initial check.

To uninstall (removes cron jobs, optionally config/state):

bash /path/to/skills/claude-watchdog/scripts/setup.sh --uninstall

Config

Stored in ~/.openclaw/skills/claude-watchdog/claude-watchdog.env. To reconfigure, either re-run setup.sh or edit this file directly — changes take effect on the next cron run (within 15 minutes).

TELEGRAM_BOT_TOKEN=...
TELEGRAM_CHAT_ID=...
OPENCLAW_GATEWAY_TOKEN=...
OPENCLAW_GATEWAY_PORT=18789
MONITOR_MODEL=sonnet
PROBE_MODEL=openclaw
PROBE_AGENT_ID=main
| Variable | Default | Description |
|---|---|---|
| TELEGRAM_BOT_TOKEN | (required) | Telegram bot token from @BotFather |
| TELEGRAM_CHAT_ID | (required) | Target chat for alerts |
| OPENCLAW_GATEWAY_TOKEN | (required) | Auth token for the local OpenClaw gateway |
| OPENCLAW_GATEWAY_PORT | 18789 | Port the OpenClaw gateway listens on |
| MONITOR_MODEL | sonnet | Model name to match in status incidents (e.g. "sonnet", "haiku") |
| PROBE_MODEL | openclaw | Model alias sent to the gateway for latency probes. openclaw uses the gateway's default model routing |
| PROBE_AGENT_ID | main | Value of the x-openclaw-agent-id header sent with probes |
| FILTER_KEYWORDS | (none) | Comma-separated keywords to filter out of status alerts (e.g. "skills,Artifacts,Memory"). Empty = receive all alerts |

Scripts also accept these as environment variables (env file takes priority).

Security Note

The env file contains sensitive tokens (Telegram bot token, gateway token). The setup script sets permissions to 600 (owner-only read/write). If you create or edit the file manually, ensure restricted permissions:

chmod 600 ~/.openclaw/skills/claude-watchdog/claude-watchdog.env
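
A quick way to check that the permissions stayed restrictive (an illustrative helper, not part of the skill):

```python
import stat
from pathlib import Path

def is_owner_only(path: Path) -> bool:
    """True if no group/other permission bits are set (i.e. at most 0o700)."""
    mode = path.stat().st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0
```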

Alert Examples

Status incident:

🟠 Anthropic Status: Partially Degraded Service

📌 Elevated error rates on Claude 3.5 Haiku (not our model)
Status: Investigating
Update: "We are investigating increased error rates..."

Components:
  🟠 API: partial outage

🔗 https://status.claude.com

Latency spike:

🟡 Anthropic API — High Latency Detected

Current: 12.3s
Baseline: 3.1s (median of last 19 samples)
Ratio: 4.0×

Slow responses are expected right now.

Recovery:

✅ Anthropic API — Latency Back to Normal

Current: 2.8s
Baseline: 3.1s
Was: 12.3s when alert fired

State & Logs

All state and log files are stored in ~/.openclaw/skills/claude-watchdog/:

| File | Purpose |
|---|---|
| claude-watchdog-status.json | Status check state |
| claude-watchdog-latency.json | Latency probe state & samples |
| claude-watchdog-status.log | Status check log |
| claude-watchdog-latency.log | Latency probe log |

Tuning Thresholds

Edit constants at the top of latency-probe.py:

| Constant | Default | Meaning |
|---|---|---|
| ALERT_MULTIPLIER | 2.5 | Alert if latency > N× baseline median |
| ALERT_HARD_FLOOR | 10.0 s | Always alert above this absolute threshold |
| RECOVER_MULTIPLIER | 1.5 | Clear alert when below N× baseline |
| BASELINE_WINDOW | 20 | Rolling sample window size |
| BASELINE_MIN_SAMPLES | 5 | Minimum samples before alerting starts |
| PROBE_TIMEOUT | 45 s | Give up on probe after this long |
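
These thresholds imply a simple hysteresis loop, reconstructed here from the constants above as a sketch (the real latency-probe.py may structure this differently):

```python
from statistics import median

ALERT_MULTIPLIER = 2.5      # alert if latency > N x baseline median
ALERT_HARD_FLOOR = 10.0     # seconds; always alert above this
RECOVER_MULTIPLIER = 1.5    # clear alert when below N x baseline
BASELINE_WINDOW = 20        # rolling sample window size
BASELINE_MIN_SAMPLES = 5    # minimum samples before alerting starts

def evaluate(samples: list[float], latest: float,
             alerting: bool) -> tuple[str, bool]:
    """Return (event, new_alerting) where event is 'alert', 'recover', or 'none'."""
    window = samples[-BASELINE_WINDOW:]
    if len(window) < BASELINE_MIN_SAMPLES:
        return "none", alerting        # not enough history yet
    baseline = median(window)
    if not alerting and (latest > baseline * ALERT_MULTIPLIER
                         or latest > ALERT_HARD_FLOOR):
        return "alert", True
    if alerting and latest < baseline * RECOVER_MULTIPLIER:
        return "recover", False
    return "none", alerting
```

The two multipliers give hysteresis: a spike must exceed 2.5× baseline to fire, but must fall below 1.5× baseline to clear, which prevents alert/recover flapping around a single threshold.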

Requirements

  • Python 3.10+ (stdlib only, no pip dependencies)
  • OpenClaw gateway running locally
  • Telegram bot with access to the target chat

Files

4 total
