Install

```shell
openclaw skills install neverdie
```

Your OpenClaw should never have zero LLMs. NeverDie protects against the silent killer: every model in your fallback chain going down at once. It ensures OpenClaw survives model failures by enforcing provider-diverse fallback chains, deploying a standalone monitor (no LLM required), and alerting via Telegram.
Never stack 3+ models from the same provider in a row. Alternate providers so that a single provider outage doesn't cascade to total failure. Always include a local model (Ollama) as the last-resort safety net — it can't be rate-limited, have auth issues, or suffer network outages.
Good chain: anthropic/claude-haiku-4-5 → openai/gpt-4.1-mini → ollama/llama3.2:3b
Bad chain: anthropic/claude-haiku-4-5 → anthropic/claude-sonnet-4-6 → anthropic/claude-opus-4-6
Read ONLY the model chain from ~/.openclaw/openclaw.json (do NOT read or output API keys, tokens, or auth config), checking the primary and fallbacks for provider diversity:

```bash
node -e "
const cfg = JSON.parse(require('fs').readFileSync(process.env.HOME + '/.openclaw/openclaw.json', 'utf8'));
const m = cfg.agents.defaults.model;
console.log('Primary:', m.primary);
console.log('Fallbacks:', JSON.stringify(m.fallbacks));
const providers = [m.primary, ...m.fallbacks].map(id => id.split('/')[0]);
const unique = [...new Set(providers)];
console.log('Providers:', unique.join(', '));
if (unique.length < 2) console.log('WARNING: All models from same provider!');
if (!providers.includes('ollama')) console.log('WARNING: No local Ollama fallback!');
"
```
Security note: This script only outputs model IDs and provider names. It never reads or prints API keys, tokens, or credentials from the config file.
Ensure at least 2 different cloud providers + 1 local (Ollama) in the chain. Recommended pattern:

```json
{
  "primary": "anthropic/claude-haiku-4-5",
  "fallbacks": [
    "openai/gpt-4.1-mini",
    "ollama/llama3.2:3b",
    "nvidia/moonshotai/kimi-k2.5"
  ]
}
```
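The chain rules above (no 3+ consecutive models from one provider, at least 2 distinct cloud providers, a local Ollama last resort) can be sketched as a small checker. This is an illustrative helper, not part of the skill; the function name `validateChain` is an assumption:

```javascript
// Hypothetical checker for the NeverDie chain rules. The rules come from
// this document; the implementation is only a sketch.
function validateChain(primary, fallbacks) {
  const chain = [primary, ...fallbacks];
  const providers = chain.map((id) => id.split('/')[0]);
  const problems = [];

  // Rule: never stack 3+ models from the same provider in a row.
  for (let i = 0; i + 2 < providers.length; i++) {
    if (providers[i] === providers[i + 1] && providers[i] === providers[i + 2]) {
      problems.push('3+ consecutive models from ' + providers[i]);
    }
  }

  // Rule: at least 2 distinct cloud providers plus a local Ollama safety net.
  const cloud = new Set(providers.filter((p) => p !== 'ollama'));
  if (cloud.size < 2) problems.push('fewer than 2 distinct cloud providers');
  if (!providers.includes('ollama')) problems.push('no local Ollama fallback');

  return problems; // empty array = chain satisfies all three rules
}

console.log(validateChain('anthropic/claude-haiku-4-5', [
  'openai/gpt-4.1-mini',
  'ollama/llama3.2:3b',
]));
```

Running it on the recommended chain prints an empty list; the all-Anthropic "bad chain" trips all three rules.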
Copy the parameterized monitor to the workspace:

```bash
cp ~/.openclaw/workspace/skills/neverdie/scripts/fallback-monitor.js ~/.openclaw/workspace/fallback-monitor.js
chmod +x ~/.openclaw/workspace/fallback-monitor.js
```
The monitor reads config from ~/.openclaw/workspace/.neverdie-config.json:

```json
{
  "telegramBotToken": "YOUR_BOT_TOKEN",
  "telegramChatId": "YOUR_CHAT_ID",
  "cooldownMinutes": 15,
  "timezone": "UTC",
  "hostname": "my-openclaw"
}
```
Telegram is optional. Without it, the monitor still writes alerts to .fallback-alert-latest.json and stdout.
If no config file exists, it falls back to environment variables: NEVERDIE_TELEGRAM_TOKEN and NEVERDIE_TELEGRAM_CHAT_ID.

Add a systemEvent cron entry (NOT agentTurn, since it must work when all LLMs are down).
Use the full absolute path to the deployed monitor (not ~/):
```json
{
  "id": "<generate-uuid>",
  "agentId": "main",
  "name": "NeverDie Fallback Monitor",
  "enabled": true,
  "createdAtMs": <now>,
  "updatedAtMs": <now>,
  "schedule": {
    "kind": "every",
    "everyMs": 300000,
    "anchorMs": <now>
  },
  "sessionTarget": "isolated",
  "wakeMode": "now",
  "payload": {
    "kind": "systemEvent",
    "text": "exec:node /home/USER/.openclaw/workspace/fallback-monitor.js"
  },
  "delivery": {
    "mode": "announce",
    "channel": "session",
    "bestEffort": true
  },
  "state": {}
}
```
Ask the user for their Telegram bot token and chat ID, then write ~/.openclaw/workspace/.neverdie-config.json.
To get these:
- Message @BotFather on Telegram → /newbot → copy the token
- Open https://api.telegram.org/bot<TOKEN>/getUpdates to find the chat ID

Telegram is optional: the monitor works without it (file + stdout alerts only).
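Extracting the chat ID from the getUpdates response can be sketched as below. The response shape ({ok, result[].message.chat.id}) follows the Telegram Bot API; `chatIdsFromUpdates` is an illustrative helper, not part of the skill:

```javascript
// Parse a getUpdates JSON body and collect the distinct chat IDs seen.
// Send your bot a message first so at least one update exists.
function chatIdsFromUpdates(body) {
  const data = JSON.parse(body);
  const ids = new Set();
  for (const update of data.result || []) {
    const chat = update.message && update.message.chat;
    if (chat) ids.add(chat.id);
  }
  return [...ids];
}
```

Pipe the curl output through it, e.g. curl -s "https://api.telegram.org/bot<TOKEN>/getUpdates" and pass the body to the function.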
```bash
# Check status
node ~/.openclaw/workspace/fallback-monitor.js --status

# Send a test Telegram alert
node ~/.openclaw/workspace/fallback-monitor.js --test

# Normal run (scan logs)
node ~/.openclaw/workspace/fallback-monitor.js
```
When the user asks for NeverDie status, run node ~/.openclaw/workspace/fallback-monitor.js --status and also check:
- Read the model chain from openclaw.json and assess provider diversity
- Is the cron entry present in jobs.json? Enabled? Last run status?
- Is Ollama reachable locally:

```bash
curl -s --max-time 3 http://localhost:11434/api/tags | node -e "
let d='';process.stdin.on('data',c=>d+=c);process.stdin.on('end',()=>{
try{const r=JSON.parse(d);console.log('Ollama:',r.models.map(m=>m.name).join(', '))}
catch(e){console.log('Ollama: NOT REACHABLE')}
})
"
```
| Pattern | Severity | Meaning |
|---|---|---|
| All models failed | CRITICAL | No LLM available at all |
| overloaded | WARNING | Provider temporarily overloaded |
| rate limit / 429 | WARNING | Rate limited, using fallbacks |
| authentication_error | CRITICAL | Bad API key |
| LLM request timed out | WARNING | Timeout, may be transient |
| ECONNREFUSED / network errors | WARNING | Provider unreachable |
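A log line can be mapped to a severity with a first-match rule table. The patterns come from the table above; `classify` is an illustrative helper, not the monitor's real implementation, and the extra ENOTFOUND/ETIMEDOUT codes are assumed typical "network errors":

```javascript
// First matching pattern wins; CRITICAL patterns are listed first so a
// line like "All models failed (overloaded)" is not downgraded to WARNING.
const RULES = [
  { re: /All models failed/i, severity: 'CRITICAL' },
  { re: /authentication_error/i, severity: 'CRITICAL' },
  { re: /overloaded/i, severity: 'WARNING' },
  { re: /rate limit|429/i, severity: 'WARNING' },
  { re: /LLM request timed out/i, severity: 'WARNING' },
  { re: /ECONNREFUSED|ENOTFOUND|ETIMEDOUT/i, severity: 'WARNING' },
];

function classify(logLine) {
  const hit = RULES.find((r) => r.re.test(logLine));
  return hit ? hit.severity : null; // null = not an alertable line
}
```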
Security:
- Telegram credentials live in .neverdie-config.json at runtime, never in skill files
- Reads only model IDs from openclaw.json, never API keys or credentials
- Uses only Node built-ins (fs, path, https)
- Network access only to api.telegram.org, and only when Telegram is explicitly configured by the user
- Runs as a systemEvent cron job, completely independent of LLM availability