Model Migration

v1.0.5

Migrate OpenClaw from Claude subscription OAuth to a free or cheap model provider (OpenRouter, Gemini, Ollama). Use when the user says Claude stopped working...

by BlueBirdBack ✨ @bluebirdback · duplicate of @bluebirdback/model-migration
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Pending
OpenClaw: Benign (high confidence)
Purpose & Capability
The name/description (migrate from Claude to OpenRouter/Gemini/Ollama) matches the instructions: the skill reads the OpenClaw config, identifies an Anthropic OAuth profile, and provides CLI and manual steps to add alternate providers and set models. All requested actions are directly relevant to migrating providers.
Instruction Scope
The SKILL.md instructs the agent/operator to read ~/.openclaw/openclaw.json (reasonable for diagnosing OpenClaw auth), run openclaw CLI commands to onboard/set models, and to restart the gateway. It also recommends installing Ollama and obtaining external API keys. There is no instruction to read unrelated files or collect unrelated secrets. However, it explicitly tells users to run a remote install script (curl ... | sh) for Ollama — this is a potentially risky operation and should be treated carefully by the user.
Install Mechanism
The skill itself has no install spec (instruction-only), which is low-risk. But the guidance includes a command that pipes a remote script from https://ollama.com/install.sh into sh; that pattern (curl | sh) is high-risk if executed without inspection. Other installs are standard CLI onboarding commands and links to provider signup pages (openrouter.ai, aistudio.google.com) which are expected for this task.
Credentials
The skill requests no environment variables or credentials in its metadata. The instructions advise the user to obtain and supply API keys for the chosen provider — this is expected and proportional for switching model providers. There are no unsolicited requests for unrelated credentials.
Persistence & Privilege
The skill does not request always:true or any elevated persistence. It does not modify other skills' configs or request system-wide privileges in its metadata. The recommended commands modify the user's OpenClaw config only, which is appropriate for the stated purpose.
Assessment
This skill appears to do what it says: diagnose your OpenClaw config and guide you to add alternate model providers. Before following commands:

  1. Backup ~/.openclaw/openclaw.json so you can restore settings if needed.
  2. Prefer the OpenClaw 'onboard' CLI options shown (they are the intended way to add auth profiles).
  3. Never run curl ... | sh without inspecting the script first — the Ollama installer is convenient but executes remote code; if you want Ollama, visit ollama.com and review the installation steps or download from a trusted release.
  4. Be aware that using OpenRouter/Gemini sends prompts to third-party services (privacy/terms differ by provider).
  5. After configuring, verify the new auth profile exists (openclaw models status) and start a new session to confirm the live runtime model.
  6. If you rely on avoiding Anthropic charges, verify the gateway truly uses the new provider (the guide warns it may silently fall back to Anthropic if an auth profile is missing).
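The recommended backup of ~/.openclaw/openclaw.json can be sketched in a few lines of Python; the path is the same one the guide reads, and the timestamp suffix is just a convention:

```python
import shutil
import time
from pathlib import Path

def backup_config(path="~/.openclaw/openclaw.json"):
    """Copy the OpenClaw config next to itself with a timestamp suffix,
    e.g. openclaw.json.bak.20260404-120000, so it can be restored later."""
    src = Path(path).expanduser()
    dst = src.with_name(f"{src.name}.bak.{time.strftime('%Y%m%d-%H%M%S')}")
    shutil.copy2(src, dst)  # copy2 preserves permissions and mtime
    return dst
```

Restoring is just the reverse copy: shutil.copy2(backup_path, original_path).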

Like a lobster shell, security has layers — review code before you run it.

Tags: claude · free · latest · migration · openclaw · openrouter


SKILL.md

Model Migration Skill

Help the user migrate OpenClaw from Claude subscription OAuth to a free or cheap provider.

When to use this skill

  • User says Claude is blocked, not working, or requires "extra usage"
  • User mentions the April 2026 Anthropic harness policy change
  • User wants to switch to Gemini, OpenRouter, Ollama, or any non-Claude provider
  • User asks "how do I use OpenClaw without Claude?"

Step 1 — Diagnose

Read the current config and identify what's broken:

cat ~/.openclaw/openclaw.json | python3 -c "
import json,sys
d=json.load(sys.stdin)
auth = d.get('auth',{})
profiles = auth.get('profiles',{})
model = d.get('agents',{}).get('defaults',{}).get('model',{}).get('primary','not set')
print('Current model:', model)
for name, p in profiles.items():
    print(f'Auth profile: {name} provider={p.get(\"provider\")} mode={p.get(\"mode\")}')
"

If you see provider=anthropic mode=oauth — that's the blocked profile. Proceed.
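The pattern to look for can be captured in a tiny helper that operates on the same profiles dictionary the diagnose script prints; illustrative only:

```python
def blocked_profiles(profiles):
    """Return the names of auth profiles matching the blocked pattern:
    provider=anthropic with mode=oauth (the Claude subscription OAuth)."""
    return [
        name
        for name, p in profiles.items()
        if p.get("provider") == "anthropic" and p.get("mode") == "oauth"
    ]
```

If this returns a non-empty list, proceed with the migration; an empty list means the blocked OAuth profile is not the problem.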


Step 2 — Recommend a path

Ask the user one question: "Do you want free (no cost) or are you okay with a small per-token charge for better quality?"

Free path (recommended for most users)

→ OpenRouter free tier — no credit card, no cost

Best free models right now:

  • openrouter/free — auto-picks the best free model (zero config)
  • openrouter/meta-llama/llama-3.3-70b-instruct:free — strong general model
  • openrouter/qwen/qwen3-coder:free — best free coding model (262k context)
  • openrouter/qwen/qwen3.6-plus:free — 1M context, free

Cheap paid path ($0.10/MTok — roughly $1–5/month for typical use)

  • openrouter/google/gemini-2.5-flash-lite — Google's fast model, 1M context
  • openrouter/google/gemini-2.0-flash-001 — excellent quality/price
  • openrouter/meta-llama/llama-3.3-70b-instruct — same as free but reliable
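The "$1–5/month" figure above is plain arithmetic; a sketch assuming typical usage of 10–50 million tokens per month at the quoted $0.10/MTok rate:

```python
def monthly_cost(tokens_per_month, usd_per_mtok=0.10):
    """Estimated monthly spend: tokens converted to millions of tokens,
    multiplied by the per-MTok price."""
    return tokens_per_month / 1_000_000 * usd_per_mtok

low = monthly_cost(10_000_000)   # 10M tokens/month → $1.00
high = monthly_cost(50_000_000)  # 50M tokens/month → $5.00
```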

Free + local path (privacy-first)

→ Ollama — runs on the user's machine, zero cost, fully private

Free direct (Gemini API)

→ Google AI Studio free tier — check current limits at aistudio.google.com (varies by account/region)


Step 3 — Migrate using OpenClaw CLI (preferred)

Use native OpenClaw commands — openclaw onboard creates the auth profile and config in one shot:

# 1. Add your provider (example: OpenRouter free)
openclaw onboard --non-interactive --accept-risk --auth-choice openrouter-api-key --openrouter-api-key YOUR_OPENROUTER_KEY --skip-channels --skip-skills --skip-search --skip-daemon --skip-health --skip-ui

# 2. Set a free model
openclaw models set openrouter/qwen/qwen3.6-plus:free

# 3. Restart
openclaw gateway restart

Or use the interactive auth helper:

openclaw models auth login --provider openrouter
# (follow prompts to paste your key)
openclaw models set openrouter/qwen/qwen3.6-plus:free
openclaw gateway restart

Get a free OpenRouter key at https://openrouter.ai/keys (no credit card required).
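Before onboarding, it can be worth sanity-checking the key against the OpenRouter API; a minimal standard-library sketch (the /api/v1/models endpoint is an assumption based on OpenRouter's public API, so verify it against their docs):

```python
import urllib.request

def key_check_request(api_key):
    """Build (but do not send) an authenticated request to the OpenRouter API.
    Send it with urllib.request.urlopen(...); HTTP 200 suggests the key is
    accepted, while a 401 means it was rejected."""
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/models",  # assumed public endpoint
        headers={"Authorization": f"Bearer {api_key}"},
    )
```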


Step 4 — Manual config (if wizard doesn't work)

Option A: OpenRouter free (zero cost)

  1. Get a free API key at https://openrouter.ai/keys (no credit card)
  2. Run: openclaw onboard --non-interactive --accept-risk --auth-choice openrouter-api-key --openrouter-api-key YOUR_KEY --skip-channels --skip-skills --skip-search --skip-daemon --skip-health --skip-ui
  3. Set model: openclaw models set openrouter/meta-llama/llama-3.3-70b-instruct:free
  4. Restart: openclaw gateway restart

Option B: Gemini direct (free tier)

  1. Get key at https://aistudio.google.com → Get API key
  2. Run: openclaw onboard --non-interactive --accept-risk --auth-choice gemini-api-key --gemini-api-key YOUR_KEY --skip-channels --skip-skills --skip-search --skip-daemon --skip-health --skip-ui
  3. Set model: openclaw models set google/gemini-2.5-flash-lite
  4. Restart: openclaw gateway restart

Option C: Ollama (local or cloud free tier)

  1. Install: curl -fsSL https://ollama.com/install.sh | sh — inspect the script before running it (or use cloud: ollama.com)
  2. Pull a model: ollama pull llama3.2
  3. Run: openclaw onboard --non-interactive --auth-choice ollama --accept-risk
  4. Set model: openclaw models set ollama/llama3.2
  5. Restart: openclaw gateway restart
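Before step 3, it helps to confirm the local Ollama server is actually listening; a quick TCP probe of Ollama's default port (11434) is enough. A minimal sketch:

```python
import socket

def is_ollama_up(host="127.0.0.1", port=11434, timeout=1.0):
    """Return True if something accepts TCP connections on Ollama's
    default port; a refusal here corresponds to the 'connection refused'
    error, fixed by running `ollama serve`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```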

Step 5 — Verify

First, confirm the auth profile exists for the new provider:

openclaw models status

Check that the output lists the expected auth profile (e.g. openrouter, google, ollama). If it's missing, the gateway will silently fall back to the first working provider (usually Anthropic) — the model set in config doesn't matter without a matching auth profile.

If the auth profile is missing → go back to Step 3/4 and run openclaw onboard for that provider first. Then re-run openclaw models set and restart.
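The silent-fallback behaviour warned about above can be modelled in a few lines (illustrative only, not OpenClaw's actual code):

```python
def effective_provider(configured_model, auth_profiles):
    """Model of the warning: the configured model only takes effect when a
    matching auth profile exists; otherwise the gateway falls back to the
    first working profile (often Anthropic)."""
    wanted = configured_model.split("/")[0]  # "openrouter/free" -> "openrouter"
    providers = [p.get("provider") for p in auth_profiles.values()]
    if wanted in providers:
        return wanted
    return providers[0] if providers else None  # silent fallback
```

With only the old profile present, effective_provider("openrouter/free", {"claude": {"provider": "anthropic"}}) still yields "anthropic", which is exactly the billing surprise this step guards against.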

Only once the auth profile is confirmed, verify the active model:

openclaw models status --plain | grep -i "primary\|model"

openclaw models status verifies the configured model/auth state, not necessarily the live runtime model already attached to an existing Telegram chat session.

To verify the live runtime model, start a new chat/session (or reset the current one) and then check /status.

If the config still looks stale after confirming the auth profile exists, restart the gateway with the portable CLI command:

openclaw gateway restart

Common errors

| Error              | Cause                     | Fix                                      |
|--------------------|---------------------------|------------------------------------------|
| 401 Unauthorized   | Invalid API key           | Re-enter key, check for typos            |
| model not found    | Wrong model ID            | Check exact ID in guide                  |
| connection refused | Ollama not running        | Run ollama serve                         |
| RESOURCE_EXHAUSTED | Free tier rate limit      | Wait or switch to paid tier              |
| Config not applied | Gateway cached old config | Full restart: openclaw gateway restart   |

Changelog

  • v1.0.2 (2026-04-04): Add required --accept-risk to non-interactive onboarding examples. Add --skip-health alongside --skip-daemon so the examples work on broken/stopped local gateways. Clarify that openclaw models status verifies configured state, not necessarily the live runtime model in an existing Telegram session. Switch fallback restart guidance back to portable openclaw gateway restart.
  • v1.0.1 (2026-04-04): Fix broken CLI commands — --auth-choice apiKey --token-provider openrouter was never valid. Updated to --auth-choice openrouter-api-key --openrouter-api-key KEY with --skip-* flags for non-interactive flow. Fixed Gemini auth from google-api-key to gemini-api-key. Added openclaw models auth login as alternative flow. Updated verification to use --plain flag.
