LiteLLM
Call 100+ LLM providers through LiteLLM's unified API. Use when you need to call a different model than your primary (e.g., use GPT-4 for code review while running on Claude), compare outputs from multiple models, route to cheaper models for simple tasks, or access models your runtime doesn't natively support.
MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's code and SKILL.md implement a generic LiteLLM client for calling many LLM providers, which is coherent with the name/description. However, the registry metadata declares no required environment variables or primary credential while the documentation and code clearly expect API keys (e.g., LITELLM_API_KEY, OPENAI_API_KEY, ANTHROPIC_API_KEY or a proxy key). This is an inconsistency (likely sloppy metadata) but not itself evidence of malice.
Instruction Scope
Runtime instructions and the included script stay within the defined purpose: build messages and call litellm.completion (optionally via a proxy). The instructions do not ask the agent to read unrelated files or system secrets. They do instruct users to set provider API keys and an optional proxy endpoint, which will cause prompts to be transmitted to external LLM providers (expected for this skill).
Install Mechanism
There is no formal install spec in the registry (instruction-only), and SKILL.md suggests 'pip install litellm' with no version or source verification. Installing an unpinned package from PyPI is common but increases risk; you should verify the package origin and integrity (official project, expected maintainer) before installing.
Credentials
Although provider API keys are appropriate for a multi-provider LLM caller, the registry metadata lists no required env vars while SKILL.md and the script reference several sensitive variables (LITELLM_API_BASE, LITELLM_API_KEY, OPENAI_API_KEY, ANTHROPIC_API_KEY). This mismatch means the skill may access or require sensitive credentials that weren't declared up front. Also note that any prompts and related data will be sent to external services tied to those keys — consider data-sensitivity implications.
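Because the registry does not declare which variables are required, a quick preflight check avoids confusing mid-run authentication errors. A minimal stdlib sketch (the helper name and variable list are illustrative, not part of the skill):

```python
import os

# Illustrative helper (not part of the skill): report which expected
# provider keys are missing before any litellm call is attempted.
REQUIRED_VARS = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY"]  # adjust per provider

def missing_credentials(required=REQUIRED_VARS):
    """Return the names of required env vars that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]
```

Calling `missing_credentials()` at startup lets you fail fast with a clear message instead of an opaque provider error.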
Persistence & Privilege
The skill is not always-enabled, doesn't request elevated or persistent system privileges, and does not modify other skills' configs. Autonomous invocation is allowed by default but is not combined here with other high-risk properties.
What to consider before installing
This skill is coherent with its stated purpose (calling many LLMs), but take these precautions before installing:
- Verify the litellm package source/version on PyPI or the project's official docs before running 'pip install'. Prefer pinned versions or vetted packages.
- Expect to provide API keys (OpenAI, Anthropic, or a LiteLLM proxy key). The registry omitted these declarations — confirm which keys you will supply and where they are stored.
- Be aware that any prompt you send may be transmitted to third-party providers. Do not send sensitive secrets, credentials, or private data to this skill unless you trust the destination.
- Consider deploying and pointing the skill to a trusted LiteLLM proxy (as suggested) to centralize auditing, caching, and to avoid sprinkling provider keys across environments.
- If you need higher assurance, ask the publisher for an authoritative homepage or repo, package signing or checksums, and explicitly declared required env vars in the registry metadata.
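As a lightweight complement to the pinning advice above, you can confirm which package version actually resolved after install. A stdlib-only sketch (the helper is hypothetical):

```python
from importlib import metadata

# Hypothetical audit helper: report the installed version of a package,
# or None if it is not present in the current environment.
def installed_version(package="litellm"):
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None
```

Compare the result against the release you vetted on PyPI before trusting the environment.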
Current version: v1.0.0
SKILL.md
LiteLLM - Multi-Model LLM Calls
Use LiteLLM when you need to call LLMs beyond your primary model.
When to Use
- Model comparison: Get outputs from multiple models and compare
- Specialized routing: Use code-optimized models for code, writing models for prose
- Cost optimization: Route simple queries to cheaper models
- Fallback access: Access models your runtime doesn't support
Quick Start
import litellm
# Call any model with unified API
response = litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain this code"}]
)
print(response.choices[0].message.content)
Common Patterns
Compare Multiple Models
import litellm
prompt = [{"role": "user", "content": "What's the best approach to X?"}]
models = ["gpt-4o", "claude-sonnet-4-20250514", "gemini/gemini-1.5-pro"]
for model in models:
    resp = litellm.completion(model=model, messages=prompt)
    print(f"{model}: {resp.choices[0].message.content[:200]}...")
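If you compare models often, it helps to separate the fan-out logic from the network call. A sketch with an injectable caller (the injection is an assumption made for testability, not how the skill is structured):

```python
# Sketch: map each model name to the text produced by call(model, messages).
# `call` would wrap litellm.completion in real use; injecting it keeps the
# comparison logic testable without network access or provider keys.
def compare_models(models, messages, call):
    return {model: call(model, messages) for model in models}
```

In real use, pass something like `lambda m, msgs: litellm.completion(model=m, messages=msgs).choices[0].message.content`.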
Route by Task Type
import litellm
def smart_call(task_type: str, prompt: str) -> str:
    model_map = {
        "code": "gpt-4o",                       # Strong at code
        "writing": "claude-sonnet-4-20250514",  # Strong at prose
        "simple": "gpt-4o-mini",                # Cheap for simple tasks
        "reasoning": "o1-preview",              # Deep reasoning
    }
    model = model_map.get(task_type, "gpt-4o")
    resp = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content
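Routing pairs naturally with fallbacks: if the preferred model errors, try the next one. A minimal sketch; the injectable `call` parameter is an assumption so the control flow can be exercised without provider keys:

```python
def call_with_fallbacks(models, messages, call):
    """Try each model in order and return the first successful result.

    `call` is any callable(model, messages) -> str that may raise
    (litellm.completion raises provider-specific exceptions on failure).
    """
    last_err = None
    for model in models:
        try:
            return call(model, messages)
        except Exception as err:
            last_err = err
    raise RuntimeError(f"all models failed; last error: {last_err}")
```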
Use LiteLLM Proxy (Recommended)
If a LiteLLM proxy is available, point to it for caching, rate limiting, and observability:
import litellm
litellm.api_base = "https://your-litellm-proxy.com"
litellm.api_key = "sk-your-key"
response = litellm.completion(
    model="gpt-4o",  # Proxy routes to configured provider
    messages=[{"role": "user", "content": "Hello"}]
)
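Rather than hard-coding the proxy URL and key as above, you can pull them from the LITELLM_API_BASE / LITELLM_API_KEY variables the skill's script already references, since litellm.completion accepts per-call api_base and api_key overrides. A stdlib sketch (the helper name is illustrative):

```python
import os

# Illustrative: build per-call keyword arguments from the environment,
# falling back to an empty dict when no proxy is configured.
def proxy_kwargs():
    base = os.environ.get("LITELLM_API_BASE")
    key = os.environ.get("LITELLM_API_KEY")
    return {"api_base": base, "api_key": key} if base and key else {}
```

Then `litellm.completion(model=..., messages=..., **proxy_kwargs())` works the same with or without a proxy.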
Environment Setup
Ensure litellm is installed and API keys are set:
pip install litellm
# Set provider keys (or configure in proxy)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-..."
Model Reference
Common model identifiers:
- OpenAI: gpt-4o, gpt-4o-mini, o1-preview, o1-mini
- Anthropic: claude-sonnet-4-20250514, claude-opus-4-20250514
- Google: gemini/gemini-1.5-pro, gemini/gemini-1.5-flash
- Mistral: mistral/mistral-large-latest
Full list: https://docs.litellm.ai/docs/providers
Files
2 total
