Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Provider Probe
v1.0.0 · Probe and verify whether an OpenAI-compatible baseURL is a real single-model endpoint or a multi-model aggregation pool. Use when auditing model providers, c...
⭐ 0 · 14 · 0 current · 0 all-time
by Andy Ren (@andyrenxu7255)
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious (high confidence)

Purpose & Capability
The name/description (probing OpenAI-compatible baseURLs for aggregation vs single-model routes) aligns with the included probe script and checklist. However, SKILL.md explicitly tells the agent to "Read provider config or ask for baseURL + apiKey", yet the registry metadata declares no required config paths or environment credentials — a mismatch between claimed needs and declared requirements.
Instruction Scope
SKILL.md and the bundled script instruct the agent to read provider configuration (examples show /root/.openclaw/openclaw.json) or accept baseURL+apiKey input, then make HTTP calls to /models, /responses and /chat/completions. Those instructions permit reading local JSON config files and transmitting API keys to arbitrary endpoints supplied to the tool. The skill does not declare or restrict which config paths may be accessed, increasing the chance the agent could read and transmit unrelated sensitive configuration if used carelessly.
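To make the risk concrete, here is a minimal sketch of the kind of authenticated request the probe script would issue against a supplied baseURL. The endpoint paths (/models, /chat/completions) come from the scan text; the helper name and structure are illustrative assumptions, not the actual bundled code.

```python
import urllib.request

def build_probe_request(base_url: str, api_key: str, path: str = "/models"):
    """Construct (but do not send) the authenticated request the probe would issue."""
    url = base_url.rstrip("/") + path
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {api_key}"},
    )

# The key travels to whatever base_url is supplied: this is the
# exfiltration vector the scan warns about.
req = build_probe_request("https://api.example.com/v1", "sk-demo")
print(req.full_url)  # https://api.example.com/v1/models
```

Because the Authorization header is sent to an arbitrary, user-supplied host, any key handed to the tool is only as safe as the endpoint it targets.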
Install Mechanism
Instruction-only skill with a bundled Python script; no install spec, no network download/install step. Low risk from installation mechanism itself.
Credentials
The code expects API keys either via a CLI --api-key argument or inside a JSON config (cfg['models']['providers'][name]['apiKey']). Yet the skill declares no required env vars or config paths and lists no primary credential. That under-declaration is inconsistent and important: in practice this skill needs sensitive API keys to operate, and if the agent follows the instruction to "read provider config" it may access and transmit those keys to external baseURLs.
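The config lookup path quoted above can be illustrated with a made-up config fragment mirroring the /root/.openclaw/openclaw.json example in the scan; the provider name and key value here are fabricated for demonstration.

```python
import json

# Fabricated sample config; only the nesting matches the path the scan quotes.
raw = """
{
  "models": {
    "providers": {
      "my-provider": {
        "baseUrl": "https://api.example.com/v1",
        "apiKey": "sk-demo-key"
      }
    }
  }
}
"""
cfg = json.loads(raw)
name = "my-provider"
# The exact lookup path the scan cites: cfg['models']['providers'][name]['apiKey']
api_key = cfg["models"]["providers"][name]["apiKey"]
print(api_key)  # sk-demo-key
```

Nothing in the lookup restricts which file is parsed, which is why the undeclared config path matters: any JSON the agent is allowed to read could be mined for keys this way.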
Persistence & Privilege
The skill's "always" flag is false, and it does not request persistent presence or system-level modifications. The normal default of allowing autonomous invocation applies. That alone is not a red flag, but combined with the instruction to read configs and handle API keys, it increases the potential blast radius if the agent is allowed to run the skill autonomously.
What to consider before installing
This skill is plausibly what it says (a probe for OpenAI-compatible endpoints), but it instructs the agent to read provider configuration files and to use API keys while declaring no required config paths or credentials. Before installing or running:

1. Inspect the bundled script locally (it is included) and run it yourself in a controlled environment rather than giving the agent broad permission to run it autonomously.
2. Do not let the agent read system-wide config files you care about; pass only a minimal, sanitized config or an explicit baseURL + apiKey for the provider you want tested.
3. Be aware the script will send any API key you supply to whatever base_url you target; that is the intended behavior, but it is also how keys could be leaked.
4. Prefer manual invocation, run inside an isolated container or VM, and avoid giving the agent access to your main OpenClaw or cloud provider configs.

If the publisher clarified which config path(s) are needed and declared them (or required explicit user confirmation before reading any files), the inconsistency would be addressed.
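One way to follow the advice above about passing only a minimal, sanitized config is to extract the single provider entry the probe needs before handing anything to the agent. This is a sketch under assumed key names ("models", "providers") taken from the config path the scan quotes; the provider names below are made up.

```python
import json

def sanitize_config(full_cfg: dict, provider: str) -> dict:
    """Keep only the one provider entry the probe needs; drop everything else."""
    entry = full_cfg["models"]["providers"][provider]
    return {"models": {"providers": {provider: entry}}}

# Fabricated full config with an unrelated provider that should NOT be exposed.
full_cfg = {
    "models": {
        "providers": {
            "target": {"baseUrl": "https://api.example.com/v1", "apiKey": "sk-a"},
            "unrelated": {"baseUrl": "https://other.example", "apiKey": "sk-b"},
        }
    }
}
minimal = sanitize_config(full_cfg, "target")
print(json.dumps(minimal, indent=2))
```

Writing the sanitized result to a throwaway file and pointing the skill at that file keeps unrelated credentials out of reach even if the agent reads the whole config it is given.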
latest: vk97fdyg547etaj0warh9e7h3tx84k97j
