AI Chat
Access 50+ LLM models through a unified OpenAI-compatible API via AceDataCloud. Use when you need chat completions from GPT, Claude, Gemini, DeepSeek, Grok,...
MIT-0 · Free to use, modify, and redistribute. No attribution required.
⭐ 0 · 57 · 0 current installs · 0 all-time installs
by @germey
Security Scan
OpenClaw
Suspicious (medium confidence)
Purpose & Capability
The name/description and the runtime instructions align: this is an OpenAI-compatible gateway to many third-party models (AceDataCloud API). However, the package metadata does not declare the ACEDATACLOUD_API_TOKEN credential that the SKILL.md explicitly states is required. Also there is no homepage or source URL listed, which reduces provenance and auditability.
Instruction Scope
SKILL.md only shows examples of calling https://api.acedata.cloud/v1 and using an API token. It does not instruct the agent to read unrelated files, environment variables, or system paths, nor to transmit data to unexpected endpoints beyond the documented AceDataCloud API.
Install Mechanism
Instruction-only skill with no install spec and no code files — minimal surface area and nothing is written to disk by an installer.
Credentials
The runtime instructions require a single bearer credential (ACEDATACLOUD_API_TOKEN), which is proportionate for an API gateway. The concern is that the registry metadata omitted this required env var (Required env vars: none), causing an inconsistency: the platform may not surface that a secret is needed and users might not realize they must provide a token. Also the token's scope, where to obtain it, and billing implications are not documented in the skill metadata.
Persistence & Privilege
The skill is not force-included (always: false) and uses normal autonomous invocation defaults. It does not request persistent system-level privileges or modify other skills/config. Nothing unusual here.
What to consider before installing
This skill looks functionally consistent with being an OpenAI-compatible proxy to AceDataCloud, but the SKILL.md requires ACEDATACLOUD_API_TOKEN while the registry metadata does not declare it — ask the publisher to correct the metadata. Before installing: verify the vendor (AceDataCloud) and its domain (api.acedata.cloud), confirm how to obtain and scope the API token, understand billing and data retention policies for requests sent through their gateway, and only provide a token with minimum necessary scope. Because the source and homepage are missing, treat the skill as lower provenance: test in a sandboxed environment and avoid supplying high-privilege credentials until you confirm the vendor and metadata are correct.
Current version: v1.0.0
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
SKILL.md
AI Chat — Unified LLM Gateway
Access 50+ language models through a single OpenAI-compatible endpoint via AceDataCloud.
Authentication
export ACEDATACLOUD_API_TOKEN="your-token-here"
Quick Start
curl -X POST https://api.acedata.cloud/v1/chat/completions \
-H "Authorization: Bearer $ACEDATACLOUD_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{"model": "claude-sonnet-4-20250514", "messages": [{"role": "user", "content": "Hello!"}]}'
OpenAI SDK Drop-in
import os
from openai import OpenAI

# Read the token from the environment, as set in the Authentication section
client = OpenAI(
    api_key=os.environ["ACEDATACLOUD_API_TOKEN"],
    base_url="https://api.acedata.cloud/v1",
)
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Explain quantum computing"}],
)
print(response.choices[0].message.content)
Available Models
OpenAI GPT
| Model | Type | Best For |
|---|---|---|
| gpt-4.1 | Latest | General-purpose, high quality |
| gpt-4.1-mini | Small | Fast, cost-effective |
| gpt-4.1-nano | Tiny | Ultra-fast, lowest cost |
| gpt-4o | Multimodal | Vision + text |
| gpt-4o-mini | Small multimodal | Fast vision tasks |
| o1 | Reasoning | Complex reasoning tasks |
| o1-mini | Small reasoning | Quick reasoning |
| o1-pro | Pro reasoning | Advanced reasoning |
| gpt-5 | Latest gen | Next-gen intelligence |
| gpt-5-mini | Mini gen 5 | Fast next-gen |
Anthropic Claude
| Model | Type | Best For |
|---|---|---|
| claude-opus-4-6 | Latest Opus | Highest capability |
| claude-sonnet-4-6 | Latest Sonnet | Balanced quality/speed |
| claude-opus-4-5-20251101 | Opus 4.5 | Premium tasks |
| claude-sonnet-4-5-20250929 | Sonnet 4.5 | High-quality balance |
| claude-sonnet-4-20250514 | Sonnet 4 | Reliable general-purpose |
| claude-haiku-4-5-20251001 | Haiku 4.5 | Fast, efficient |
| claude-3-5-sonnet-20241022 | Legacy 3.5 | Proven track record |
| claude-3-opus-20240229 | Legacy Opus | Maximum quality (legacy) |
Google Gemini
| Model | Best For |
|---|---|
| gemini-1.5-pro | Long context, complex tasks |
| gemini-1.5-flash | Fast, efficient |
DeepSeek
| Model | Best For |
|---|---|
| deepseek-r1 | Deep reasoning |
| deepseek-r1-0528 | Latest reasoning |
| deepseek-v3 | General-purpose |
| deepseek-v3-250324 | Latest general |
xAI Grok
| Model | Best For |
|---|---|
| grok-4 | Latest, highest capability |
| grok-3 | General-purpose |
| grok-3-fast | Speed-optimized |
| grok-3-mini | Compact, efficient |
Features
Streaming
POST /v1/chat/completions
{
"model": "claude-sonnet-4-20250514",
"messages": [{"role": "user", "content": "Write a story"}],
"stream": true
}
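With `"stream": true`, the gateway returns Server-Sent Events: each `data:` line carries a `chat.completion.chunk` whose `delta` holds an incremental piece of the message, terminated by a `data: [DONE]` sentinel. A minimal sketch of reassembling the text client-side, assuming the standard OpenAI-compatible chunk shape (the helper name and sample lines are illustrative):

```python
import json

def collect_stream_text(sse_lines):
    """Reassemble assistant text from raw SSE 'data:' lines."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip comments / keep-alives
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)

# Sample lines in the shape a streaming response produces
sample = [
    'data: {"choices":[{"delta":{"role":"assistant"}}]}',
    'data: {"choices":[{"delta":{"content":"Once"}}]}',
    'data: {"choices":[{"delta":{"content":" upon"}}]}',
    "data: [DONE]",
]
print(collect_stream_text(sample))  # Once upon
```

In practice the OpenAI SDK handles this parsing for you when you pass `stream=True` to `client.chat.completions.create` and iterate over the returned object.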
Function Calling
POST /v1/chat/completions
{
"model": "gpt-4.1",
"messages": [{"role": "user", "content": "What's the weather in Tokyo?"}],
"tools": [
{
"type": "function",
"function": {
"name": "get_weather",
"parameters": {"type": "object", "properties": {"location": {"type": "string"}}}
}
}
]
}
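When the model decides to use a tool, the response comes back with `finish_reason` set to `"tool_calls"` and the arguments encoded as a JSON string that your code must parse and dispatch. A sketch of routing one such call to a local function, assuming the standard OpenAI tool-call shape (`get_weather` and its return value are hypothetical):

```python
import json

# Hypothetical local implementation of the get_weather tool declared above
def get_weather(location):
    return {"location": location, "forecast": "sunny"}

TOOLS = {"get_weather": get_weather}

def dispatch_tool_call(tool_call):
    """Route one tool_calls entry to the matching local function."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])  # arguments arrive as a JSON string
    return TOOLS[name](**args)

# Shape mirrors a tool_calls entry from an OpenAI-compatible response
call = {
    "id": "call_1",
    "type": "function",
    "function": {"name": "get_weather", "arguments": '{"location": "Tokyo"}'},
}
print(dispatch_tool_call(call))
```

The tool result is then normally sent back in a follow-up request as a `{"role": "tool", ...}` message so the model can produce its final answer.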
Vision
POST /v1/chat/completions
{
"model": "gpt-4o",
"messages": [
{
"role": "user",
"content": [
{"type": "text", "text": "What's in this image?"},
{"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}}
]
}
]
}
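Besides public URLs, OpenAI-compatible endpoints generally also accept images inline as base64 data URLs, which is useful for local files; this assumes AceDataCloud follows that convention. A sketch of building such a multimodal message (the helper name is illustrative):

```python
import base64

def image_message(prompt, image_bytes, mime="image/jpeg"):
    """Build a multimodal user message with an inline base64 data URL."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:{mime};base64,{b64}"}},
        ],
    }

# Placeholder bytes stand in for a real image file's contents
msg = image_message("What's in this image?", b"\xff\xd8fake-jpeg")
```

Pass the resulting dict as one element of `messages` exactly like the URL-based example above.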
Parameters
| Parameter | Type | Description |
|---|---|---|
| model | string | Model name (see tables above) |
| messages | array | Array of {role, content} objects |
| temperature | number (0–2) | Randomness (default: 1) |
| top_p | number (0–1) | Nucleus sampling |
| max_tokens | integer | Maximum output tokens |
| stream | boolean | Enable SSE streaming |
| tools | array | Function calling definitions |
| tool_choice | string/object | Tool selection strategy |
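All of these parameters are top-level fields of the request body. A small sketch of assembling such a payload (the helper name is hypothetical; the keyword options map one-to-one onto the table):

```python
def build_request(model, user_text, **options):
    """Assemble a chat-completions request body from the parameters above."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
    }
    payload.update(options)  # e.g. temperature, top_p, max_tokens, stream
    return payload

req = build_request("gpt-4.1", "Hello!", temperature=0.2, max_tokens=256)
```

The resulting dict can be sent as the JSON body of `POST /v1/chat/completions`, or passed field-by-field to the OpenAI SDK.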
Response
{
"id": "chatcmpl-xxx",
"object": "chat.completion",
"model": "claude-sonnet-4-20250514",
"choices": [
{
"index": 0,
"message": {"role": "assistant", "content": "Hello!"},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 10,
"completion_tokens": 5,
"total_tokens": 15
}
}
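The fields most callers need are the assistant text and the token usage. A sketch of pulling both from a response shaped like the one above (the helper name is illustrative):

```python
def extract_reply(response):
    """Return (assistant_text, total_tokens) from a chat.completion response dict."""
    choice = response["choices"][0]
    return choice["message"]["content"], response["usage"]["total_tokens"]

# Same structure as the example response above
sample = {
    "choices": [{
        "index": 0,
        "message": {"role": "assistant", "content": "Hello!"},
        "finish_reason": "stop",
    }],
    "usage": {"prompt_tokens": 10, "completion_tokens": 5, "total_tokens": 15},
}
print(extract_reply(sample))  # ('Hello!', 15)
```

Checking `finish_reason` alongside this is worthwhile: `"length"` means the reply was truncated by `max_tokens` (see Gotchas below for the full list of values).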
Gotchas
- 100% OpenAI-compatible: use the standard OpenAI SDK with `base_url="https://api.acedata.cloud/v1"`
- Billing is token-based with per-model pricing (more expensive models cost more per token)
- Vision is supported on multimodal models (`gpt-4o`, `gpt-4o-mini`, `grok-2-vision-*`)
- Function calling works on most modern models (GPT-4+, Claude 3+)
- Streaming returns `chat.completion.chunk` objects via SSE
- `finish_reason` values: `"stop"` (complete), `"length"` (max tokens), `"tool_calls"` (function call), `"content_filter"` (filtered)
