api-quality-check
v1.0.0

Check coding-model API quality, capability fit, and drift with LT-lite and B3IT-lite. Use when Codex needs to verify whether an OpenAI/OpenAI-compatible/Anth...
⭐ 0 · 88 · 0 current · 0 all-time
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign · high confidence

Purpose & Capability
The skill name/description (API quality checks for coding-model endpoints) matches the delivered files: a main Python script and two shell wrappers that run smoke tests, baseline creation, and drift detection against vendor endpoints. Nothing in the code or docs requests unrelated cloud credentials or system-wide privileges. Minor inconsistency: the runtime docs and examples rely on an $API_KEY environment variable or provider.json entries, but the registry metadata lists no required env vars—so the skill expects user-supplied API keys (in configs or env) even though none are declared in metadata.
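One way to reconcile the undeclared env var with the documented $API_KEY usage is to resolve the key at load time. A minimal sketch, assuming a hypothetical provider.json schema with an api_key field (field names and the "$API_KEY" placeholder convention mirror the docs described above, not the skill's actual code):

```python
import json
import os

def resolve_api_key(provider: dict) -> str:
    """Resolve a provider's API key, falling back to the API_KEY env var.

    The "api_key" field name and "$API_KEY" placeholder convention are
    hypothetical, mirroring the usage described in the skill's docs.
    """
    key = provider.get("api_key", "")
    if key in ("", "$API_KEY"):  # missing, or an explicit env-var placeholder
        key = os.environ.get("API_KEY", "")
    if not key:
        raise RuntimeError("no API key: set api_key in provider.json or export API_KEY")
    return key

# Demo with a placeholder value only; never hard-code real keys.
provider = json.loads('{"base_url": "https://api.example.com/v1", "api_key": "$API_KEY"}')
os.environ.setdefault("API_KEY", "sk-placeholder")
print(resolve_api_key(provider))
```

A loader like this would make the env-var dependency explicit even though the registry metadata omits it.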
Instruction Scope
SKILL.md explicitly instructs the agent to run the bundled scripts, keep outputs file-based, and run smoke → baseline → detect flows. The scripts only reference provider configs, output paths, and optional CODEX_HOME. They do not instruct reading unrelated system files or secrets beyond the provider config / API key. The agent is instructed to use file-based artifacts (JSON/HTML) and not to collect or transmit other local data.
Install Mechanism
No installer or remote download is present; this is an instruction+script bundle (no extract-from-URL installs). It requires a Python runtime and the 'requests' package at runtime, which is reasonable for a network-testing script and is not disproportionate.
Credentials
The skill expects API keys (per-provider api_key fields or example use of $API_KEY) to talk to external model endpoints. That is proportionate to the purpose. However, the registry metadata lists no required env vars while the docs repeatedly show using $API_KEY—this mismatch should be noted. Also the config allows arbitrary custom headers and extra body fields, which is necessary for some vendors but means user-supplied secrets/headers will be sent to the configured endpoints.
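The last point matters because custom headers and extra body fields are, by design, forwarded verbatim to whatever base_url the config names. A simplified sketch of how such a request might be assembled (field names are hypothetical, based on this report's description of the config rather than the skill's actual code):

```python
def build_request(provider: dict, prompt: str) -> tuple[dict, dict]:
    """Assemble headers and a chat-style body for a configured provider.

    Everything in provider["headers"] and provider["extra_body"] is merged
    in as-is, so any secret placed there is sent to provider["base_url"].
    """
    headers = {"Authorization": f"Bearer {provider['api_key']}"}
    headers.update(provider.get("headers", {}))      # custom headers, sent verbatim
    body = {
        "model": provider.get("model", "example-model"),
        "messages": [{"role": "user", "content": prompt}],
    }
    body.update(provider.get("extra_body", {}))      # vendor extras, sent verbatim
    return headers, body
```

Auditing a config therefore reduces to checking what ends up in these two dicts for each configured base_url.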
Persistence & Privilege
always:false and no special privileges requested. The scripts write outputs and baselines to user-specified directories only (no modifications to other skills or global agent settings). This level of presence is appropriate for a monitoring/tooling skill.
Assessment
This skill appears to do what it says: it runs headless quality/drift checks against model endpoints you configure. Before installing or running it:

1) Recognize that it will send your supplied API keys and prompts to whatever base_url you provide; do not point it at untrusted endpoints.
2) The SKILL.md examples use $API_KEY, but the skill metadata does not declare required env vars; supply keys in provider.json or export $API_KEY as shown.
3) Do not commit real API keys or private provider configs to source control; use placeholder values in committed files.
4) Review provider.json/providers.json entries (base_url, headers, extra_body) to ensure headers or extra bodies do not leak sensitive tokens to unexpected domains.
5) Run the scripts in an isolated environment (sandbox or container) if you need to limit network exposure, and ensure Python and the 'requests' package are available.

For stronger assurance, ask the author to declare required env vars (e.g., API_KEY) in the registry metadata and to document any dependencies explicitly.
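The advice about not committing real keys can be partially automated before a commit. A rough heuristic sketch (the regex and the "api_key" field name are illustrative assumptions, not part of the skill):

```python
import re

# Long bare tokens in an api_key field are probably real credentials;
# placeholders such as "$API_KEY" (rejected by the lookahead) are fine.
SECRET_RE = re.compile(r'"api_key"\s*:\s*"(?!\$)[A-Za-z0-9_\-]{24,}"')

def safe_to_commit(config_text: str) -> bool:
    """Return False if the config text appears to embed a literal API key."""
    return SECRET_RE.search(config_text) is None
```

A check like this catches the common accident but is no substitute for a real secret scanner; it only inspects one field name and one token shape.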
latest: vk978jq4m8nkhm4rym3x2hg1wjd83gcam
