Install

```shell
openclaw skills install aisa-provider-slot3
```

Release note: This package is published for this runtime. References to OpenClaw below describe the original source workflow, a companion runtime, or compatibility guidance unless the skill is explicitly about OpenClaw itself.
This skill is a setup guide for using AISA_API_KEY with the AIsa gateway at https://api.aisa.one/v1. It documents provider configuration, example model IDs, and verification steps for OpenClaw-compatible runtimes.
This package ships guidance and reference material only. It does not include local onboarding scripts or direct model-runtime code inside the skill bundle.
⚠️ All pricing listed below is for reference. Real-time pricing is subject to change — always check https://marketplace.aisa.one/pricing for the latest rates.
Export AISA_API_KEY to your local runtime, which then sends model requests to https://api.aisa.one/v1:

```shell
export AISA_API_KEY="your-key-here"
```

If your runtime supports provider auto-discovery, AISA_API_KEY may be enough. Otherwise use the explicit config examples below.

To run the onboarding wizard instead:

```shell
openclaw onboard --auth-choice aisa-api-key
```
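As a rough sketch of what a runtime does with that key, the helper below (hypothetical names, not part of this skill bundle) assembles an OpenAI-style chat completion request against the AIsa base URL without performing any network I/O:

```python
import json
import os

AISA_BASE_URL = "https://api.aisa.one/v1"

def build_chat_request(model, prompt, api_key=None):
    """Assemble URL, headers, and JSON body for an OpenAI-compatible
    chat completion call. No network I/O is performed here."""
    key = api_key or os.environ.get("AISA_API_KEY", "")
    if not key:
        raise RuntimeError("AISA_API_KEY is not set")
    url = f"{AISA_BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

url, headers, body = build_chat_request("qwen3-max", "ping", api_key="demo")
```

Any OpenAI-compatible client can then POST that body to the returned URL; the only AIsa-specific parts are the base URL and the Bearer key.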
Add the provider to ~/.openclaw/openclaw.json:

```json
{
  "models": {
    "providers": {
      "aisa": {
        "baseUrl": "https://api.aisa.one/v1",
        "apiKey": "${AISA_API_KEY}",
        "api": "openai-completions",
        "models": [
          {
            "id": "aisa/qwen3-max",
            "name": "Qwen3 Max",
            "reasoning": true,
            "input": ["text", "image"],
            "contextWindow": 256000,
            "maxTokens": 16384,
            "supportsDeveloperRole": false,
            "cost": {
              "input": 1.20,
              "output": 4.80,
              "cacheRead": 0,
              "cacheWrite": 0
            }
          },
          {
            "id": "aisa/qwen-plus-2025-12-01",
            "name": "Qwen Plus",
            "reasoning": true,
            "input": ["text", "image"],
            "contextWindow": 256000,
            "maxTokens": 16384,
            "supportsDeveloperRole": false,
            "cost": {
              "input": 0.30,
              "output": 0.90,
              "cacheRead": 0,
              "cacheWrite": 0
            }
          },
          {
            "id": "aisa/qwen-mt-flash",
            "name": "Qwen MT Flash",
            "reasoning": true,
            "input": ["text"],
            "contextWindow": 256000,
            "maxTokens": 8192,
            "supportsDeveloperRole": false,
            "cost": {
              "input": 0.05,
              "output": 0.30,
              "cacheRead": 0,
              "cacheWrite": 0
            }
          },
          {
            "id": "aisa/deepseek-v3.1",
            "name": "DeepSeek V3.1",
            "reasoning": true,
            "input": ["text"],
            "contextWindow": 131072,
            "maxTokens": 8192,
            "supportsDeveloperRole": false,
            "cost": {
              "input": 0.27,
              "output": 1.10,
              "cacheRead": 0.07,
              "cacheWrite": 0
            }
          },
          {
            "id": "aisa/kimi-k2.5",
            "name": "Kimi K2.5",
            "reasoning": true,
            "input": ["text"],
            "contextWindow": 131072,
            "maxTokens": 8192,
            "supportsDeveloperRole": false,
            "cost": {
              "input": 0.60,
              "output": 2.40,
              "cacheRead": 0,
              "cacheWrite": 0
            }
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "aisa/qwen3-max"
      }
    }
  }
}
```
| Model | Model ID | Best For | Context | Reasoning | Verified |
|---|---|---|---|---|---|
| Qwen3 Max | aisa/qwen3-max | Complex reasoning, flagship tasks | 256K | ✅ | ✅ |
| Qwen Plus | aisa/qwen-plus-2025-12-01 | Main production model | 256K | ✅ | ✅ |
| Qwen MT Flash | aisa/qwen-mt-flash | High-frequency, lightweight tasks | 256K | ✅ | ✅ |
| DeepSeek V3.1 | aisa/deepseek-v3.1 | Cost-effective reasoning | 128K | ✅ | ✅ |
| Kimi K2.5 | aisa/kimi-k2.5 | Routed reasoning model, availability varies by catalog | 128K | ✅ | ✅ |
If AIsa currently exposes aisa/kimi-k2.5, treat it like any other routed model: confirm it appears in GET /v1/models before relying on it.

One practical caveat observed in prior testing: Kimi K2.5 only accepts temperature=1.0. If a request fails because of temperature handling, retry with the model default instead of assuming the model ID is unavailable.
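That retry advice can be sketched as a small helper. The function and the error-matching heuristic below are assumptions for illustration, not OpenClaw internals: if the gateway rejects a request over temperature, drop the override and let the model default apply.

```python
def strip_temperature_on_error(payload, error_message):
    """If the error looks temperature-related, return a copy of the
    request payload without the temperature override; otherwise return
    None, meaning this heuristic does not apply."""
    if "temperature" in error_message.lower() and "temperature" in payload:
        retry = dict(payload)          # shallow copy; original is untouched
        del retry["temperature"]       # fall back to the model default
        return retry
    return None

payload = {"model": "kimi-k2.5", "temperature": 0.2, "messages": []}
retry = strip_temperature_on_error(payload, "invalid temperature: only 1.0 supported")
# retry == {"model": "kimi-k2.5", "messages": []}
```

Returning None for unrelated errors keeps the caller from masking genuine failures (quota, bad model ID) behind a blind retry.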
Users can add any model supported by AIsa to their config. The full catalog includes 49+ models:

Qwen family (8 models): qwen3-max, qwen3-max-2026-01-23, qwen-plus-2025-12-01, qwen-mt-flash, qwen-mt-lite, qwen-vl-max, qwen3-vl-flash, qwen3-vl-plus (the last three are vision models)

DeepSeek (4 models): deepseek-v3.1, deepseek-v3, deepseek-v3-0324, deepseek-r1

Kimi / Moonshot (2 models): kimi-k2.5, kimi-k2-thinking

Also available: Claude series (10), GPT series (9), Gemini series (5), Grok series (2), and more.
List all available models:

```shell
curl https://api.aisa.one/v1/models -H "Authorization: Bearer $AISA_API_KEY"
```
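To filter that response for the IDs you care about, a short sketch, assuming the standard OpenAI-style `{"data": [{"id": ...}]}` response shape:

```python
def extract_model_ids(models_response, prefix=""):
    """Pull model IDs out of an OpenAI-style /v1/models payload,
    optionally keeping only IDs that start with a given prefix."""
    ids = [m["id"] for m in models_response.get("data", [])]
    return sorted(i for i in ids if i.startswith(prefix))

# Sample payload for illustration; a real call returns the full catalog.
sample = {"data": [{"id": "qwen3-max"}, {"id": "deepseek-v3.1"}, {"id": "qwen-mt-flash"}]}
print(extract_model_ids(sample, prefix="qwen"))
```

Feed it the parsed JSON from the curl call above to get a sorted, deduplicated view of what the gateway currently routes.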
AIsa uses versioned model IDs for some models. If you encounter a `503 - No available channels` error, the model ID may need updating.
Known model ID mappings:
| Common Name | Correct AIsa Model ID | ❌ Does NOT work |
|---|---|---|
| Qwen Plus | qwen-plus-2025-12-01 | qwen3-plus, qwen-plus, qwen-plus-latest |
| Qwen Flash | qwen-mt-flash | qwen3-flash, qwen-turbo, qwen-turbo-latest |
| Qwen Max | qwen3-max | (works as-is) |
| DeepSeek V3.1 | deepseek-v3.1 | (works as-is) |
| Kimi K2.5 | kimi-k2.5 | (works as-is) |
To check the latest available model IDs:

```shell
curl https://api.aisa.one/v1/models -H "Authorization: Bearer $AISA_API_KEY"
```
In chat (TUI):

```
/model aisa/qwen3-max
/model aisa/deepseek-v3.1
/model aisa/kimi-k2.5
```

Via CLI:

```shell
openclaw models set aisa/qwen3-max
```
The model ID may be incorrect or outdated. Check the Model ID Versioning section above for correct IDs. Common fixes:

qwen3-plus → use qwen-plus-2025-12-01
qwen3-flash → use qwen-mt-flash

Ensure the model ID uses the aisa/ prefix in OpenClaw config:

✅ aisa/qwen3-max
❌ qwen3-max
Kimi K2.5 only accepts temperature=1.0. If your config sets a different temperature, add a model-specific override or let OpenClaw use the default.
In rare cases Kimi K2.5 may return empty content while consuming output tokens. Retry the request — this is typically transient.
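Before retrying, it helps to distinguish this transient case from a genuinely failed call. A minimal check, assuming the OpenAI-style response shape (hypothetical helper, not OpenClaw code):

```python
def looks_like_empty_completion(response):
    """True when the completion returned no content even though output
    tokens were billed (the transient Kimi K2.5 case described above)."""
    choices = response.get("choices", [])
    content = choices[0].get("message", {}).get("content") if choices else None
    used = response.get("usage", {}).get("completion_tokens", 0)
    return not content and used > 0

resp = {"choices": [{"message": {"content": ""}}], "usage": {"completion_tokens": 42}}
print(looks_like_empty_completion(resp))  # prints True
```

When this returns True, a single retry is usually enough; cap retries so a persistent upstream problem still surfaces as an error.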
Verify your key is set and your auth profile is configured:

```shell
echo $AISA_API_KEY
openclaw config get auth.profiles
openclaw onboard --auth-choice aisa-api-key
```

AIsa uses the OpenAI-compatible API (openai-completions). Ensure your config has:

```json
"api": "openai-completions"
```
Rate limits, quotas, and daily caps can change by provider agreement or account tier. Check the current AIsa documentation or account console instead of assuming a fixed policy.
Key points:

- Set AISA_API_KEY or use the onboarding wizard (openclaw onboard --auth-choice aisa-api-key)
- Point the provider at the AIsa gateway (https://api.aisa.one/v1)
- supportsDeveloperRole is set to false for Qwen models
- Kimi K2.5 requires temperature=1.0; other values cause API errors