Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

OpenRouter Cron Migration

v1.0.0

Collaboratively migrate specific OpenClaw cron jobs onto popular OpenRouter models. Audit cron usage, fetch the current OpenRouter rankings via curl, propose...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for mrmps/openrouter-crons.

Prompt Preview: Install & Setup
Install the skill "OpenRouter Cron Migration" (mrmps/openrouter-crons) from ClawHub.
Skill page: https://clawhub.ai/mrmps/openrouter-crons
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install openrouter-crons

ClawHub CLI

Package manager switcher

npx clawhub@latest install openrouter-crons
Security Scan
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The described goal (audit cron usage, query OpenRouter rankings, propose and apply cron → model changes) matches the commands and steps in SKILL.md. Requesting an OpenRouter API key and calling the OpenRouter models API is coherent with the stated purpose.
Instruction Scope
The instructions explicitly read user config files (e.g., ~/.openclaw/.env, ~/.openclaw/agents/main/agent/auth-profiles.json, ~/.openclaw/openclaw.json) and reference the OPENROUTER_API_KEY environment variable, but the skill metadata lists no required env vars or config paths. The skill also runs commands that modify cron jobs (openclaw onboard, openclaw cron edit/run). Although it says to get explicit approval, the agent will be instructed to read and change user-managed configs and run jobs — these are sensitive operations and should have been declared.
Install Mechanism
Instruction-only skill with no install spec and no code files. This minimizes disk-write and supply-chain risk; nothing is downloaded or installed by the skill itself.
Credentials
The SKILL.md depends on an OpenRouter API key and inspects local OpenClaw configuration files, but the registry metadata declares no required environment variables or config paths. Asking for the OpenRouter key is reasonable for the task, but the lack of declared credentials/configs is an inconsistency and means the agent may request or access secrets unexpectedly.
Persistence & Privilege
The skill does not request always:true and does not include an install that alters other skills. It runs OpenClaw CLI commands that may persist provider onboarding (storing auth) — which is normal for onboarding — but this is limited to the OpenClaw config scope rather than system-wide privileges.
What to consider before installing
This skill appears to do what it says (audit crons, fetch OpenRouter rankings, and migrate approved jobs), but the runtime instructions will read files in your home directory (~/.openclaw/*.env, auth-profiles.json) and expect an OpenRouter API key. Before installing or invoking it: (1) confirm you trust the skill source (owner and homepage are unknown); (2) back up your OpenClaw cron configuration and auth files; (3) prefer to provide any API key via your normal secure mechanism (environment variable or OpenClaw onboarding) rather than pasting it into chat; (4) review which exact commands the agent will run and require explicit approval before any cron edit/run is executed; and (5) consider running the steps manually or in a test environment first. The main inconsistency is that the skill metadata declares no required env vars/config paths even though the instructions access them — treat that as a red flag and ask the publisher to correct the manifest or clarify what the skill will access.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97bejf8rjjx7zv71rmndy4jw183qx7d
95 downloads
0 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

OpenRouter Cron Migration & Verification Skill

You are the OpenClaw/OpenRouter tuning partner. Work with the user to decide which cron jobs should move to cheaper OpenRouter models, based on actual cron usage and the current OpenRouter popularity rankings. You do not auto-migrate everything—only the crons the user approves. Every change must be verified (config + live run + cost check).

Key references


Collaboration principles

  1. Check first: confirm the gateway is up and the user has (or can supply) an OpenRouter key.
  2. Usage-driven decisions: gather run history so the user can prioritize expensive/high-frequency jobs.
  3. Live popularity data: always pull the latest ranking data before recommending models. Assume yesterday’s advice is stale.
  4. Offer options: provide at least two (ideally 3–4) popular, inexpensive models from the ranking output and explain trade-offs.
  5. Explicit approval: document which cron → model mappings the user approved before editing.
  6. Verification: after each change, show the updated cron payload, re-list crons, run the cron, and surface any errors.
  7. Cost awareness: optionally check OpenRouter credits/activity so the user sees the impact.

Phase 0: Make sure OpenClaw is reachable

  1. openclaw status → ensure the gateway isn’t “unreachable”. If it is, guide the user to run openclaw gateway install && openclaw gateway run (or launchctl bootstrap …).
  2. Quick health ping: openclaw cron status should return without connection errors before proceeding.

If the gateway stays down, stop and help fix it before touching cron jobs.
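A minimal shell guard for this phase might look like the following sketch; `gateway_ok` is a hypothetical helper, and matching on the word "unreachable" in the `openclaw status` output is an assumption about its exact wording:

```shell
# Hypothetical guard: succeed only when the status text on stdin
# contains no "unreachable" marker (wording is an assumption).
gateway_ok() {
  ! grep -qi 'unreachable'
}

if ! command -v openclaw >/dev/null; then
  echo "openclaw CLI not found; install it first" >&2
elif openclaw status | gateway_ok; then
  echo "gateway reachable; proceeding to cron checks"
else
  echo "gateway down; run: openclaw gateway install && openclaw gateway run" >&2
fi
```

The same `gateway_ok` check can be reused before `openclaw cron status` in step 2.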


Phase 1: Confirm OpenRouter provider access

Step 1.1: Check provider + credentials

openclaw providers list 2>/dev/null | rg -i openrouter || echo "OpenRouter provider missing"
grep -i OPENROUTER ~/.openclaw/.env 2>/dev/null || echo "No OPENROUTER_API_KEY in .env"
cat ~/.openclaw/agents/main/agent/auth-profiles.json 2>/dev/null | rg -i openrouter || echo "No OpenRouter auth profile"

Summarize what you found. If no key is set, ask the user for their OpenRouter API key.

Step 1.2: Onboard (if needed)

openclaw onboard --auth-choice apiKey --token-provider openrouter --token "$OPENROUTER_API_KEY"

Fallback: edit ~/.openclaw/openclaw.json or set the env var manually, per the OpenRouter integration doc.

Step 1.3: Verification

openclaw providers list | rg -i openrouter
curl -s https://openrouter.ai/api/v1/models \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  | python3 -m json.tool | head -20 || echo "OpenRouter API call failed"

If the API call fails, stop and resolve authentication before migrating any cron.


Phase 2: Usage-based cron triage

Goal: help the user pick which jobs to move by showing frequency, success rate, and current model.

Step 2.1: Inventory crons

openclaw cron list --json 2>/dev/null | python3 - <<'PY'
import json, sys
raw = sys.stdin.read()
start = raw.find('[') if '[' in raw else raw.find('{')
data = json.loads(raw[start:])
jobs = data if isinstance(data, list) else [data]
print(f"{'Job ID':<12}{'Name':<28}{'Schedule':<16}{'Session':<12}{'Model':<45}")
print('-'*115)
for job in jobs:
    schedule = job.get('schedule', {})
    freq = schedule.get('cron') or schedule.get('expr') or schedule.get('everyMs') or schedule.get('at') or 'unknown'
    model = job.get('payload', {}).get('model', 'agent default')
    session = job.get('session', {}).get('kind', '?')
    print(f"{job.get('id','?'):<12}{job.get('name','?'):<28}{freq:<16}{session:<12}{model:<45}")
PY

Ask the user which of these look expensive or redundant.

Step 2.2: Pull run history for candidates

For each interesting job:

openclaw cron runs <JOB_ID> --limit 25 --json 2>/dev/null | python3 - <<'PY'
import json, sys
from datetime import datetime
runs = [json.loads(line) for line in sys.stdin if line.strip()]
if not runs:
    print('No runs logged.'); exit()
success = sum(1 for r in runs if r.get('status') == 'success')
print(f"Runs analyzed: {len(runs)} · Success: {success}/{len(runs)}")
latencies = [r.get('durationMs', 0) for r in runs if r.get('durationMs')]
if latencies:
    avg = sum(latencies)/len(latencies)
    print(f"Avg duration: {avg/1000:.1f}s · Max: {max(latencies)/1000:.1f}s")
print('Most recent prompts/models:')
for r in runs[:3]:
    # strip a trailing 'Z' so fromisoformat also works on Python < 3.11
    ts = datetime.fromisoformat(r['createdAt'].replace('Z', '+00:00'))
    print(f"- {ts.isoformat()} · model={r.get('model','default')} · status={r.get('status')}")
PY

Discuss with the user which jobs run often enough (or cost enough) to justify moving to a cheaper model.

Record the agreed list: job_id -> desired outcome (e.g., “job foo: migrate to cheaper general model”).


Phase 3: Pick popular, inexpensive OpenRouter models

Step 3.1: Fetch live rankings (curl only)

curl -s 'https://openrouter.ai/api/v1/models?orderby=rank' \
  | python3 - <<'PY'
import json, sys
rows = json.load(sys.stdin).get('data', [])
print(f"{'Rank':<5}{'Model ID':<42}{'Provider':<14}{'Context':>8}{'In $/M':>10}{'Out $/M':>10}")
print('-'*100)
for idx, row in enumerate(rows[:20], start=1):
    pricing = row.get('pricing', {})
    prompt = float(pricing.get('prompt','0') or 0)*1_000_000
    completion = float(pricing.get('completion','0') or 0)*1_000_000
    provider = row['id'].split('/',1)[0]
    print(f"{idx:<5}{row['id']:<42}{provider:<14}{row.get('context_length',0):>8}{prompt:>10.2f}{completion:>10.2f}")
PY

This gives you the current popularity order plus price info. Note which of the top ~10 are cheap and suitable (e.g., DeepSeek V3.x, Gemini Flash, GPT-4o mini, Xiaomi MiMo).
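The price columns come from a simple unit conversion: OpenRouter reports pricing as strings denominated in dollars per token, so multiplying by 1,000,000 yields the $/M figures shown above. A quick sanity check of that arithmetic, using a made-up per-token price rather than live API data:

```shell
# Convert a hypothetical $/token string (as returned in the API's
# pricing fields) into the $/M-tokens figure used in the table.
python3 - <<'PY'
prompt_per_token = "0.00000026"  # hypothetical value, not live data
per_million = float(prompt_per_token) * 1_000_000
print(f"${per_million:.2f}/M input tokens")
PY
```

which prints `$0.26/M input tokens`.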

Step 3.2: Offer at least three concrete options

For each cron the user wants to migrate:

  • Pair its workload (prompt complexity, tool use, latency requirements) with 3–4 models from the ranking table, prioritizing lower cost.
  • Example script snippet to highlight the top four cheapest popular models:
curl -s 'https://openrouter.ai/api/v1/models?orderby=rank' \
  | python3 - <<'PY'
import json, sys
rows = json.load(sys.stdin).get('data', [])
choices = []
for row in rows:
    pricing = row.get('pricing', {})
    prompt = float(pricing.get('prompt','0') or 0)
    completion = float(pricing.get('completion','0') or 0)
    if prompt == 0 or completion == 0:
        continue
    if prompt*1_000_000 > 1.00:  # skip expensive (> $1/M input) options
        continue
    choices.append((prompt, {
        'id': row['id'],
        'name': row.get('name', row['id']),
        'ctx': row.get('context_length', 0),
        'out': completion
    }))
choices.sort(key=lambda item: item[0])  # sort by prompt price; the dict payloads aren't comparable as tie-breakers
print('Top cheap popular models:')
for prompt, info in choices[:4]:
    print(f"- {info['id']} · {info['name']} · ctx {info['ctx']} · ${prompt*1_000_000:.2f}/M in · ${info['out']*1_000_000:.2f}/M out")
PY

Explain why each candidate fits (e.g., “DeepSeek V3.2 ranks #8, great for summaries, ~$0.26/M in”). Ask the user to choose which model each cron should use.

Step 3.3: Finalize migration plan

Write down the explicit approvals, e.g.:

  • daily-news-digest → openrouter/deepseek/deepseek-v3.2
  • rss-monitor → openrouter/google/gemini-2.5-flash-lite

You’ll use this plan in the next phase.
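One lightweight way to capture those approvals is a small plan file that the next phase can iterate over; the job names, models, and file path below are placeholders, not values from a real plan:

```shell
# Record approved cron -> model mappings as tab-separated lines (placeholders).
PLAN=/tmp/cron-migration-plan.tsv
printf '%s\t%s\n' \
  daily-news-digest openrouter/deepseek/deepseek-v3.2 \
  rss-monitor openrouter/google/gemini-2.5-flash-lite > "$PLAN"

# Echo the plan back for a final confirmation before Phase 4.
while IFS=$'\t' read -r job model; do
  echo "approved: $job -> $model"
done < "$PLAN"
```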


Phase 4: Apply edits + verify immediately

For each approved cron:

  1. Edit the model
    openclaw cron edit <JOB_ID> --model "openrouter/<provider>/<model>"
    
  2. Show the updated payload
    openclaw cron show <JOB_ID> --json | rg -i model
    
  3. Re-list crons (optional summary table reuse from Phase 2) to confirm the new model appears.
  4. Run the cron manually
    openclaw cron run <JOB_ID> --expect-final --timeout 180000
    
    Review output carefully. If the cheaper model fails or quality drops, tell the user and offer to revert (openclaw cron edit <JOB_ID> --model "<previous>").
  5. Log the result: job name, old → new model, run status, observations.

Repeat for every cron in the plan.
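The per-job sequence above can be wrapped in a small helper so each migration follows the same edit, show, run, revert-on-failure path. This is a sketch only; the revert assumes the previous model string was saved beforehand, and `migrate_job` is a hypothetical name:

```shell
# Sketch of the edit -> show -> run -> revert-on-failure sequence for one job.
migrate_job() {  # usage: migrate_job <job_id> <new_model> <previous_model>
  local job="$1" new="$2" old="$3"
  openclaw cron edit "$job" --model "$new"
  openclaw cron show "$job" --json | rg -i model
  if ! openclaw cron run "$job" --expect-final --timeout 180000; then
    echo "run failed; reverting $job to $old" >&2
    openclaw cron edit "$job" --model "$old"
  fi
}

# Example invocation (placeholder job ID and models):
# migrate_job job-123 "openrouter/deepseek/deepseek-v3.2" "previous/model"
```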


Phase 5: Optional — monitor OpenRouter spend

If the user wants visibility into cost impact:

  1. Credits/balance
    curl -s https://openrouter.ai/api/v1/credits \
      -H "Authorization: Bearer $OPENROUTER_API_KEY" | python3 -m json.tool
    
  2. Daily activity
    DATE=$(date +%Y-%m-%d)
    curl -s "https://openrouter.ai/api/v1/activity?date=$DATE" \
      -H "Authorization: Bearer $OPENROUTER_API_KEY" \
      | python3 - <<'PY'
import json, sys
rows = json.load(sys.stdin).get('data', [])
if not rows:
    print('No activity for this date.'); exit()
print(f"{'Model':<45}{'Cost($)':<10}{'Requests':<10}{'Tokens':<14}")
print('-'*80)
for row in rows:
    tokens = (row.get('prompt_tokens',0) or 0) + (row.get('completion_tokens',0) or 0)
    print(f"{row.get('model','?'):<45}{row.get('usage',0):<10.4f}{row.get('requests',0):<10}{tokens:<14}")
PY

  3. Share the results and note any anomalies (spikes, zero usage, etc.).

---

## Quick reference
- `openclaw status` — confirm gateway reachability
- `openclaw providers list` — ensure OpenRouter provider loaded
- `curl -s https://openrouter.ai/api/v1/models?orderby=rank` — live popularity + price data
- `openclaw cron list --json` — cron inventory
- `openclaw cron runs <JOB_ID> --limit 25 --json` — usage history
- `openclaw cron edit <JOB_ID> --model "openrouter/..."` — set per-cron models
- `openclaw cron run <JOB_ID> --expect-final` — verification run
- `curl -s https://openrouter.ai/api/v1/credits` — balance check
- `curl -s https://openrouter.ai/api/v1/activity?date=YYYY-MM-DD` — per-day usage

Stay collaborative, data-driven, and explicit about every change.
