LiteLLM
v1.0.0
Call 100+ LLM providers through LiteLLM's unified API. Use when you need to call a different model than your primary (e.g., use GPT-4 for code review while running on Claude), compare outputs from multiple models, route to cheaper models for simple tasks, or access models your runtime doesn't natively support.
⭐ 1 · 1.8k · 11 current · 12 all-time
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious (medium confidence)

Purpose & Capability
The skill's code and SKILL.md implement a generic LiteLLM client for calling many LLM providers, which is coherent with the name/description. However, the registry metadata declares no required environment variables or primary credential while the documentation and code clearly expect API keys (e.g., LITELLM_API_KEY, OPENAI_API_KEY, ANTHROPIC_API_KEY or a proxy key). This is an inconsistency (likely sloppy metadata) but not itself evidence of malice.
Instruction Scope
Runtime instructions and the included script stay within the defined purpose: build messages and call litellm.completion (optionally via a proxy). The instructions do not ask the agent to read unrelated files or system secrets. They do instruct users to set provider API keys and an optional proxy endpoint, which will cause prompts to be transmitted to external LLM providers (expected for this skill).
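As a sketch of that call pattern (the model name, prompts, and helper functions are illustrative, not taken from the skill's actual script; assumes `litellm` is installed):

```python
import os


def build_messages(system_prompt: str, user_prompt: str) -> list[dict]:
    """Assemble a chat message list in the OpenAI-style format LiteLLM expects."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return messages


def call_model(model: str, system_prompt: str, user_prompt: str) -> str:
    """Call any provider through litellm.completion; LiteLLM routes based on
    the model name prefix (e.g. "anthropic/...")."""
    from litellm import completion  # imported lazily; build_messages works without litellm

    response = completion(
        model=model,  # e.g. "gpt-4" or "anthropic/claude-3-5-sonnet-20240620"
        messages=build_messages(system_prompt, user_prompt),
        api_base=os.environ.get("LITELLM_API_BASE"),  # optional proxy endpoint
    )
    return response.choices[0].message.content
```

Note that every prompt passed to `call_model` leaves your environment for whichever provider backs the chosen model.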
Install Mechanism
There is no formal install spec in the registry (instruction-only), and SKILL.md suggests 'pip install litellm' with no version or source verification. Installing an unpinned package from PyPI is common but increases risk; you should verify the package origin and integrity (official project, expected maintainer) before installing.
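A safer install than the bare `pip install litellm` looks like the following (the version number is a placeholder; check PyPI or the official repo for the release you actually reviewed, and note that `pip-compile` comes from the separate `pip-tools` package):

```shell
# Pin an exact, vetted version instead of installing whatever is latest.
pip install "litellm==1.48.0"

# Or, for stronger integrity guarantees, resolve hashes first and require them:
# (requirements.in contains a single line: litellm==1.48.0)
pip-compile --generate-hashes requirements.in
pip install --require-hashes -r requirements.txt
```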
Credentials
Although provider API keys are appropriate for a multi-provider LLM caller, the registry metadata lists no required env vars while SKILL.md and the script reference several sensitive variables (LITELLM_API_BASE, LITELLM_API_KEY, OPENAI_API_KEY, ANTHROPIC_API_KEY). This mismatch means the skill may access or require sensitive credentials that weren't declared up front. Also note that any prompts and related data will be sent to external services tied to those keys — consider data-sensitivity implications.
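One way to surface this mismatch at runtime is to check which of the referenced variables are actually set before any call is made. The variable names below come from the scan above; the helper function itself is hypothetical:

```python
import os

# Env vars referenced by SKILL.md and the script, per the scan findings.
EXPECTED_VARS = (
    "LITELLM_API_BASE",
    "LITELLM_API_KEY",
    "OPENAI_API_KEY",
    "ANTHROPIC_API_KEY",
)


def missing_credentials(env=os.environ) -> list[str]:
    """Return the expected variables that are unset or empty, making the gap
    between the registry metadata and the code visible before any API call."""
    return [name for name in EXPECTED_VARS if not env.get(name)]
```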
Persistence & Privilege
The skill is not always-enabled, doesn't request elevated or persistent system privileges, and does not modify other skills' configs. Autonomous invocation is allowed by default but is not combined here with other high-risk properties.
What to consider before installing
This skill is coherent with its stated purpose (calling many LLMs), but take these precautions before installing:
- Verify the litellm package source/version on PyPI or the project's official docs before running 'pip install'. Prefer pinned versions or vetted packages.
- Expect to provide API keys (OpenAI, Anthropic, or a LiteLLM proxy key). The registry omitted these declarations — confirm which keys you will supply and where they are stored.
- Be aware that any prompt you send may be transmitted to third-party providers. Do not send sensitive secrets, credentials, or private data to this skill unless you trust the destination.
- Consider deploying and pointing the skill to a trusted LiteLLM proxy (as suggested) to centralize auditing, caching, and to avoid sprinkling provider keys across environments.
- If you need higher assurance, ask the publisher for: an authoritative homepage or repo, package signing or checksum, and explicit declared required env vars in the registry metadata.

Like a lobster shell, security has layers: review code before you run it.
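The proxy setup recommended in the precautions above can be as simple as two environment variables (the URL and key are placeholders for your own deployment):

```shell
# Point the skill at a single trusted LiteLLM proxy instead of scattering
# provider keys across environments.
export LITELLM_API_BASE="https://litellm-proxy.internal.example.com"
export LITELLM_API_KEY="sk-your-proxy-key"
# Provider keys (OPENAI_API_KEY, ANTHROPIC_API_KEY) then live only on the
# proxy host, which also centralizes auditing and caching.
```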
latest: vk973b9aptd2eg1fbhsrf422f9d80qx11
