Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Model Rate Limit Recovery

v1.0.0

Diagnose and recover from model rate limit errors (ChatGPT usage limits, 429 errors). Use when cron jobs or agent sessions fail with "Try again in ~9500 min"...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for stefanferreira/model-rate-limit-recovery.

Prompt preview (Install & Setup):
Install the skill "Model Rate Limit Recovery" (stefanferreira/model-rate-limit-recovery) from ClawHub.
Skill page: https://clawhub.ai/stefanferreira/model-rate-limit-recovery
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install model-rate-limit-recovery

ClawHub CLI


npx clawhub@latest install model-rate-limit-recovery
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name/description (model rate limit recovery for OpenClaw) aligns with the CLI commands and cron/session operations in SKILL.md. Recommending API key rotation, model fallback, and cron updates is coherent for this purpose. However, the skill references provider-specific env vars (OPENAI_API_KEYS, OPENCLAW_LIVE_OPENAI_KEY, ANTHROPIC, DEEPSEEK) but the registry metadata lists no required environment variables — an inconsistency that should be clarified.
Instruction Scope
The instructions tell the agent to run commands that enumerate environment variables (env | grep -i OPENAI|ANTHROPIC|DEEPSEEK), grep system logs (/tmp/openclaw/openclaw-*.log), and create persistent scripts under /root/.openclaw/workspace/scripts. Reading env and arbitrary logs can expose unrelated secrets; writing to /root assumes elevated privileges and a particular filesystem layout. Those steps go beyond narrow diagnostics and can touch sensitive data or require admin rights.
Install Mechanism
This is an instruction-only skill with no install spec and no code files — low installation risk because nothing is automatically downloaded or written by the skill bundle itself.
Credentials
The SKILL.md instructs exporting and rotating multiple API keys and searching the environment for provider keys, but the skill did not declare any required env vars in its metadata. Asking operators to set or expose OPENAI/ANTHROPIC/DEEPSEEK keys is reasonable for key rotation functionality, but the instructions as-written encourage wide environment access and storage of secrets (including creating highest-priority keys) without guidance to limit key scope or use least-privilege keys.
Persistence & Privilege
The skill does not request always:true and does not itself persist code, but it instructs creating persistent recovery scripts at /root/.openclaw/workspace/scripts and patching cron jobs. That behavior is plausible for a recovery tool but requires write permissions to system/user paths (and uses an explicit root path), which may be unexpected or inappropriate in some environments.
What to consider before installing
This skill appears to implement reasonable recovery steps for model rate limits, but its instructions ask you to read environment variables and logs and to create persistent scripts under /root. Before installing or running these instructions:

1. Confirm you trust the OpenClaw CLI and any third-party model providers referenced (DeepSeek, Anthropic).
2. Do not export high-privilege or production API keys; prefer limited-scope or test keys for rotation and recovery testing.
3. Inspect and, if needed, modify the recovery script paths so they don't assume /root or write to system-wide locations.
4. Be aware that running env | grep or log greps can reveal unrelated secrets; run such commands in a controlled or sandboxed environment first.
5. Ask the author to declare required env vars in metadata and to provide explicit least-privilege guidance; the mismatch between metadata (no env vars) and instructions (multiple secret env vars) is the primary red flag.

If you want, test the procedures in a staging environment before applying them to production.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97f1jacmnhs4e58p4ysc78g1d84vg7a
63 downloads · 0 stars · 1 version
Updated 2w ago · v1.0.0 · MIT-0

Model Rate Limit Recovery Skill

When to Use

  • Cron jobs fail with "⚠️ You have hit your ChatGPT usage limit (free plan). Try again in ~9500 min"
  • Agent sessions fail with 429, rate_limit, quota, or resource exhausted errors
  • Need to recover scheduled tasks after model provider limits are hit
  • Setting up resilient agent workflows with automatic fallbacks

Diagnosis Steps

1. Check Error Type

# Check cron job runs
openclaw cron runs --jobId <job_id>

# Look for error messages containing:
# - "usage limit"
# - "Try again in ~"
# - "429"
# - "rate_limit"
# - "quota"
# - "resource exhausted"
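
The markers above can also be checked mechanically. A minimal, self-contained sketch (the run output below is fabricated for the demo; real `openclaw cron runs` output may be formatted differently):

```shell
#!/bin/sh
# Fabricated sample of cron-run output, written to a scratch file for the demo.
cat > /tmp/cron-runs-sample.txt <<'EOF'
run 41  status=ok
run 42  status=error  message="429 rate_limit: quota exceeded"
run 43  status=error  message="You have hit your ChatGPT usage limit. Try again in ~9500 min"
EOF

# Count lines matching any of the rate-limit markers listed above.
MATCHES=$(grep -ciE "usage limit|try again in ~|429|rate_limit|quota|resource exhausted" /tmp/cron-runs-sample.txt)
echo "rate-limit-related runs: $MATCHES"
```

In practice you would pipe the real `openclaw cron runs --jobId <job_id>` output into the same grep instead of a sample file.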

2. Verify Current Configuration

# Check current model configuration
openclaw status | grep -A5 "Model"

# Check environment for API keys
env | grep -i "OPENAI\|ANTHROPIC\|DEEPSEEK"

3. Identify Root Cause

Common causes:

  • Free plan limits: ChatGPT free tier has usage caps
  • No API key rotation: Single key exhausted
  • No fallback model: Default model fails, no alternative
  • Cron jobs using default model: Not specifying resilient model

Recovery Procedures

Immediate Recovery (Manual)

# 1. Run failed task manually with alternative model
openclaw sessions spawn \
  --task "Your task here" \
  --model "deepseek/deepseek-chat" \
  --label "Manual recovery"

# 2. Update cron job to specify model
openclaw cron update --jobId <job_id> --patch '{
  "payload": {
    "kind": "agentTurn",
    "message": "...",
    "model": "deepseek/deepseek-chat",
    "timeoutSeconds": 180
  }
}'
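
Malformed inline JSON is a common reason the `cron update` call itself fails, so it can help to validate the patch locally first. A small sketch, assuming `python3` is on the PATH (the patch content mirrors the example above):

```shell
#!/bin/sh
# Validate the --patch payload before handing it to `openclaw cron update`.
PATCH='{"payload":{"kind":"agentTurn","message":"...","model":"deepseek/deepseek-chat","timeoutSeconds":180}}'
echo "$PATCH" | python3 -c 'import json,sys; p=json.load(sys.stdin); print("valid patch, model:", p["payload"]["model"])'
```

If the JSON is malformed, `json.load` raises an error and nothing is sent to the CLI.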

API Key Rotation Setup

# Add multiple API keys for rotation
export OPENAI_API_KEYS="key1,key2,key3"
export OPENAI_API_KEY_1="sk-..."
export OPENAI_API_KEY_2="sk-..."
export OPENCLAW_LIVE_OPENAI_KEY="sk-..."  # Highest priority

# OpenClaw will automatically rotate through keys on rate limits
# 429, rate_limit, quota, resource exhausted → tries next key
# Other failures → fails immediately
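
The rotation order can be illustrated with a pure-bash sketch. This is an illustration only: the real rotation happens inside OpenClaw, and `call_model` below is a stub that pretends every key except one is rate-limited:

```shell
#!/bin/bash
# Key values are fake; the names mirror the env vars described above.
OPENAI_API_KEY_1="sk-aaa"
OPENAI_API_KEY_2="sk-bbb"
OPENCLAW_LIVE_OPENAI_KEY="sk-live"   # highest priority, tried first

# Stub: only sk-bbb succeeds; every other key returns a 429.
call_model() {
  if [ "$1" = "sk-bbb" ]; then echo "ok"; return 0; fi
  echo "429 rate_limit"; return 1
}

USED=""
for key in "$OPENCLAW_LIVE_OPENAI_KEY" "$OPENAI_API_KEY_1" "$OPENAI_API_KEY_2"; do
  if call_model "$key" >/dev/null; then
    USED="$key"
    break
  fi
  echo "key $key rate-limited, trying next"
done
echo "succeeded with: $USED"
```

The loop tries the highest-priority key first and falls through on rate-limit responses, which is the behavior the comments above describe.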

Model Fallback Configuration

{
  "agents": {
    "defaults": {
      "model": {
        "primary": "openai-codex/gpt-5.4",
        "fallback": "deepseek/deepseek-chat"
      },
      "models": {
        "openai-codex/gpt-5.4": {
          "params": {
            "maxRetries": 2,
            "retryOnRateLimit": true
          }
        }
      }
    }
  }
}
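
Before applying the config above, a quick local sanity check catches typos. This sketch writes it to a scratch file and parses it with `python3`; the real target path depends on your OpenClaw setup and is not shown here:

```shell
#!/bin/sh
# Write the fallback config shown above to a scratch file.
cat > /tmp/model-fallback-config.json <<'JSON'
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "openai-codex/gpt-5.4",
        "fallback": "deepseek/deepseek-chat"
      },
      "models": {
        "openai-codex/gpt-5.4": {
          "params": { "maxRetries": 2, "retryOnRateLimit": true }
        }
      }
    }
  }
}
JSON

# Parse it and confirm the primary/fallback pair is what we expect.
python3 - <<'PY'
import json
cfg = json.load(open("/tmp/model-fallback-config.json"))
model = cfg["agents"]["defaults"]["model"]
print("primary:", model["primary"], "fallback:", model["fallback"])
PY
```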

Cron Job Best Practices

{
  "name": "Resilient Scheduled Task",
  "schedule": {
    "kind": "cron",
    "expr": "0 * * * *",
    "tz": "Africa/Johannesburg"
  },
  "payload": {
    "kind": "agentTurn",
    "message": "Task instructions...",
    "model": "deepseek/deepseek-chat",
    "timeoutSeconds": 300
  },
  "sessionTarget": "isolated",
  "delivery": {
    "mode": "announce"
  }
}

Prevention Strategies

1. Model Selection

  • Primary: openai-codex/gpt-5.4 (when available)
  • Fallback: deepseek/deepseek-chat (no usage limits)
  • Backup: anthropic/claude-sonnet-4-6 (if available)

2. Cron Job Configuration

  • Always specify model in payload
  • Set reasonable timeoutSeconds
  • Use deleteAfterRun: true for one-shot tasks
  • Enable delivery.mode: "announce" for notifications

3. Monitoring

# Regular cron job health checks
openclaw cron list
openclaw cron runs --jobId <job_id> --limit 5

# Check for recent failures
grep -i "usage limit\|429\|rate_limit" /tmp/openclaw/openclaw-*.log

4. Skill Integration

# Create recovery script
cat > /root/.openclaw/workspace/scripts/recover-failed-cron.sh <<'EOF'
#!/bin/bash
JOB_ID="$1"
NEW_MODEL="${2:-deepseek/deepseek-chat}"

# Get failed runs
FAILED_RUNS=$(openclaw cron runs --jobId "$JOB_ID" | grep -c "status.*error")

if [ "$FAILED_RUNS" -gt 0 ]; then
  echo "Recovering $FAILED_RUNS failed runs for job $JOB_ID"
  openclaw cron update --jobId "$JOB_ID" --patch "{\"payload\":{\"model\":\"$NEW_MODEL\"}}"
  openclaw cron run --jobId "$JOB_ID"
fi
EOF
chmod +x /root/.openclaw/workspace/scripts/recover-failed-cron.sh
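
Because the script only shells out to `openclaw`, its logic can be exercised safely by putting a stub `openclaw` on the PATH. This sandbox demo (all output below is fabricated) inlines the same logic without assuming /root or a real install:

```shell
#!/bin/bash
# Build a throwaway stub `openclaw` that fakes one failed and one ok run.
WORKDIR=$(mktemp -d)
cat > "$WORKDIR/openclaw" <<'STUB'
#!/bin/bash
case "$1 $2" in
  "cron runs")   echo "run 7 status=error"; echo "run 8 status=ok" ;;
  "cron update") echo "patched job: $4" ;;
  "cron run")    echo "re-ran job: $4" ;;
esac
STUB
chmod +x "$WORKDIR/openclaw"
export PATH="$WORKDIR:$PATH"

# Same logic as recover-failed-cron.sh, inlined for the demo.
JOB_ID="demo-job"
NEW_MODEL="deepseek/deepseek-chat"
FAILED_RUNS=$(openclaw cron runs --jobId "$JOB_ID" | grep -c "status.*error")

if [ "$FAILED_RUNS" -gt 0 ]; then
  echo "Recovering $FAILED_RUNS failed runs for job $JOB_ID"
  openclaw cron update --jobId "$JOB_ID" --patch "{\"payload\":{\"model\":\"$NEW_MODEL\"}}"
  openclaw cron run --jobId "$JOB_ID"
fi
```

Running the script this way before pointing it at a real OpenClaw install is one way to address the sandboxing concerns raised in the security scan above.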

Templates

Resilient Cron Job Template

See scripts/resilient-cron-template.json

Model Fallback Config

See references/model-fallback-config.json

Recovery Script

See scripts/recover-failed-cron.sh

Notes

  • ChatGPT free plan has strict usage limits (~3 requests/hour)
  • DeepSeek has no usage limits but may have different capabilities
  • API key rotation only works for rate limits (429), not other errors
  • Always verify recovery by checking created files/outputs
  • Document failures and recoveries in memory/ for future reference
