One API key for 70+ AI models. Route to GPT, Claude, Gemini, Qwen, DeepSeek, Grok, and more.
Pass. Audited by ClawScan on May 1, 2026.
Overview
This appears to be a straightforward LLM gateway skill, but using it routes prompts and images through AIsa under an API key that may incur model charges.
Before installing, confirm that you trust AIsa/openclaw.ai for the prompts and images you may send, keep the API key protected, and monitor usage or billing when using comparison or fallback features.
Findings (3)
This is an artifact-based, informational review of SKILL.md, metadata, install specs, static-scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Anyone using the skill with this key may spend API credits or access models allowed by that key.
The client reads the AIsa API key from the environment and sends it as an authorization bearer token. This is expected for the gateway, but it is still account-level API authority.
self.api_key = api_key or os.environ.get("AISA_API_KEY") ... "Authorization": f"Bearer {self.api_key}"
Use a revocable or scoped key if available, avoid hardcoding it, and monitor billing or usage limits.
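Based on the fragment above, a minimal sketch of how such a client might resolve the key and assemble its auth header. The `AISA_API_KEY` variable name and bearer pattern come from the finding; the helper function itself is hypothetical, not the skill's actual code:

```python
import os

def build_auth_headers(api_key=None):
    """Resolve the AIsa API key and build the bearer-token header.

    Falls back to the AISA_API_KEY environment variable, mirroring the
    pattern flagged in the finding. Raising on a missing key makes the
    client fail fast instead of sending an empty bearer token.
    """
    key = api_key or os.environ.get("AISA_API_KEY")
    if not key:
        raise RuntimeError("AISA_API_KEY is not set; refusing to send an empty bearer token")
    return {"Authorization": f"Bearer {key}"}
```

Because this header carries account-level authority, anything that can read the process environment can spend against the key, which is why the finding recommends a revocable or scoped key.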
Sensitive prompts, documents pasted into prompts, image URLs, or base64 images could be sent to AIsa and routed model providers.
Chat payloads are posted to an external gateway. This is central to the skill's purpose, but prompts, system messages, and vision inputs may be processed outside the user's environment.
BASE_URL = "https://api.aisa.one/v1" ... payload = {"model": model, "messages": messages, "stream": stream}
Do not send secrets or private data unless you trust the gateway and have reviewed its privacy, retention, and provider-routing policies.
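A sketch of the payload construction the finding describes. `BASE_URL` and the payload field names come from the snippet; the function and the `/chat/completions` path are illustrative assumptions, not the skill's verified code:

```python
BASE_URL = "https://api.aisa.one/v1"

def build_chat_request(model, messages, stream=False):
    """Assemble the URL and JSON body for a chat call to the gateway.

    Everything in `messages` (system prompts, user text, image URLs,
    base64 image data) ends up in this body and leaves the user's
    environment, which is the exposure surface the finding describes.
    """
    url = f"{BASE_URL}/chat/completions"  # assumed endpoint path
    payload = {"model": model, "messages": messages, "stream": stream}
    return url, payload
```

The point of the sketch is that there is no filtering step between the caller's messages and the outbound body: whatever is pasted into a prompt is forwarded verbatim to AIsa and whichever provider it routes to.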
Model comparison or fallback use can increase token costs and send the same content to multiple model routes.
The comparison feature sends the same user message to each selected model. This is purpose-aligned, but it can multiply API calls and distribute the same content more broadly.
for model in models: ... self.chat(model=model, messages=[{"role": "user", "content": message}], **kwargs)
Use explicit model lists, set practical cost limits, and avoid multi-model comparison for sensitive prompts.
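The loop above can be sketched as follows, with a cap on fan-out to illustrate the recommended cost limit. The `chat` callable and the `max_models` guard are hypothetical additions, not part of the skill:

```python
def compare(models, message, chat, max_models=3):
    """Send the same user message to every model in an explicit list.

    `chat` is the single-model call, injected so the loop is testable.
    `max_models` is a hypothetical guard showing the finding's advice:
    cap how many separate billable routes one prompt fans out to.
    """
    if len(models) > max_models:
        raise ValueError(f"refusing to fan out to {len(models)} models (limit {max_models})")
    results = {}
    for model in models:
        # Each iteration is a separate billable API call carrying the same content.
        results[model] = chat(model=model, messages=[{"role": "user", "content": message}])
    return results
```

Each iteration both multiplies token cost and hands an identical copy of the prompt to another provider route, which is why the finding advises against comparison mode for sensitive content.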
