Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Which LLM? Deterministic model selection for agents

v1.0.18

Deterministic decision-ranking API with HTTP 402 payments and outcome credits.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for zapkid/which-llm.

Prompt preview (Install & Setup):
Install the skill "Which LLM? Deterministic model selection for agents" (zapkid/which-llm) from ClawHub.
Skill page: https://clawhub.ai/zapkid/which-llm
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install zapkid/which-llm

ClawHub CLI


npx clawhub@latest install which-llm
Security Scan

VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill purpose (deterministic model selection via a Which‑LLM API) matches the SKILL.md and skill.json content: endpoints, pricing, and a payment flow. However, the registry metadata lists no required credentials, while both SKILL.md and skill.json declare credentials_required: true and primary_credential: WALLET_CREDENTIALS. That mismatch should be clarified before use.
Instruction Scope
The runtime instructions are narrowly scoped to outbound HTTPS calls to api.which-llm.com and handling HTTP 402 payment flows. The skill does not instruct reading arbitrary host files or other system resources. However, it repeatedly states that the AI bot 'needs access to a crypto wallet' for paid calls without specifying how wallet access or signing is performed; this ambiguity increases risk because it could lead to insecure handling of private keys or unclear operational behavior.
Install Mechanism
This is instruction-only with no install spec, no downloads, and no code written to disk — the lowest-risk install mechanism. There are no suspicious install URLs or extracted archives.
Credentials
Requiring WALLET_CREDENTIALS is proportionate to a paid HTTP 402 model, but the skill declares 'sensitivity: high' and 'credentials_required: true' while the registry metadata you were shown earlier claimed no required env vars — an unexplained discrepancy. 'WALLET_CREDENTIALS' implies access to private keys or signing capability (high privilege). The skill does not explain whether the wallet must be fully exposed to the skill, if signing should happen via a host-managed signing service, or whether ephemeral signing tokens can be used. That ambiguity can lead to over-sharing highly sensitive secrets.
Persistence & Privilege
The skill is not always-on, does not request to modify other skills or system settings, and does not write to disk on install. Invocation and approval policies in skill.json indicate per-request approval is expected, which reduces risk if enforced by the host.
What to consider before installing
Before installing or enabling this skill:

  1. Clarify the WALLET_CREDENTIALS requirement — ask the publisher how the wallet is used and whether signing can be performed by a host-managed signer, so private keys are never exposed to the skill.
  2. Do not place private keys or raw seed phrases in environment variables accessible to third-party skills; prefer an external signing service or hardware-backed wallet.
  3. Verify the payment addresses and the which-llm.com endpoints across independent sources (DNS, HTTPS, ENS TXT records), as the skill itself recommends — do not send funds until you have independently confirmed recipients.
  4. Ensure your host enforces per-request user approval for outbound network calls and payments.
  5. Ask the maintainer to resolve the metadata mismatch between the registry listing and SKILL.md/skill.json — that inconsistency should be fixed before trusting the skill.

Like a lobster shell, security has layers — review code before you run it.

latest: vk9775qvvc47ef2yk483w4b6nts82n94h
1.8k downloads · 0 stars · 19 versions
Updated 21h ago
v1.0.18
MIT-0

Which‑LLM: Outcome‑Driven Decision Optimizer

Overview

Use this skill when you need a deterministic recommendation for which LLM to use under explicit constraints such as cost and quality.

This skill is for calling the Which‑LLM API. It does not run an LLM itself. For paid requests, the AI bot needs access to a crypto wallet.

When to use it

  • Pick the cheapest model that still meets a quality target
  • Choose a fallback model if the preferred one fails
  • Keep model selection deterministic and auditable
  • Report execution results and earn credits for later requests

Quick Reference

  • API base URL: https://api.which-llm.com
  • Primary paid endpoint: POST /decision/optimize
  • Outcome endpoint: POST /decision/outcome
  • Free discovery endpoints: GET /capabilities, GET /pricing, GET /status
  • Payment asset: USDC
  • Supported chains: Base, Ethereum, Arbitrum, Optimism, Avalanche

Registry Summary Expectations

  • Primary credential: WALLET_CREDENTIALS
  • Install mechanism: instruction-only
  • Writes to disk on install: false
  • Downloads code on install: false

Prerequisites

Before using this skill, have the following ready:

  • Ability to make HTTPS requests to https://api.which-llm.com
  • Ability to send and receive JSON
  • AI bot access to a crypto wallet for paid requests

What this skill does

  • Sends requests to the Which‑LLM API
  • Uses POST /decision/optimize to get a recommended model and fallback plan
  • Uses POST /decision/outcome to report real execution results
  • Explains the request and response shapes for the public API
  • Describes how to retry requests after payment has been satisfied externally

What this skill does not do

  • It does not call an LLM directly
  • It does not execute arbitrary code from your prompt
  • It does not require unrelated files or system access beyond API usage and payment handling

Runtime Scope

  • always_on: false (the skill is not force-installed and does not run continuously)
  • disable_model_invocation: false (an agent may invoke it on demand)
  • The skill is limited to outbound HTTP API usage and response handling
  • For paid requests, the AI bot needs access to a crypto wallet

Payment Model

Paid calls may use HTTP 402 Payment Required.

High-level flow:

  1. Call POST /decision/optimize
  2. If the API returns 402, inspect fields such as required_amount, accepts, and payment_reference
  3. Satisfy the payment requirement outside this skill using host- or client-managed infrastructure
  4. Retry the same request with payment proof headers if available

This skill documents the API behavior and payment-related response handling. The wallet requirement is limited to paid requests.
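The flow above can be sketched in Python. This is an illustration only: the helper name and the sample 402 field values are my own assumptions based on the field names this page lists, and actual settlement (sending the on-chain transaction) must happen outside this code via host-managed wallet infrastructure.

```python
def build_payment_headers(resp_402, tx_hash, payer_address):
    """Build the payment-proof retry headers from a 402 response body.

    Assumes the body carries required_amount, currency, and an
    accepts[] list; tx_hash is the hash of the already-settled
    transfer, produced outside this skill.
    """
    accept = resp_402["accepts"][0]  # pick the first supported chain
    return {
        "X-Payment-Chain": accept["chain"],
        "X-Payment-Tx": tx_hash,
        "X-Payer": payer_address,
        "X-Payment-Amount": str(resp_402["required_amount"]),
        "X-Payment-Asset": resp_402["currency"],
    }

# Illustrative 402 body (values made up)
resp = {
    "required_amount": 0.01,
    "currency": "USDC",
    "accepts": [{"chain": "base", "pay_to": "0xRECIPIENT"}],
    "payment_reference": "ref-123",
}
headers = build_payment_headers(resp, tx_hash="0xTX", payer_address="0xPAYER")
```

The retry then repeats the original POST /decision/optimize request with these headers attached.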

Endpoints

GET /capabilities

Use this to discover supported constraints, decision types, and payment behavior.

GET /pricing

Use this to check current pricing and supported chains before making a paid request.

GET /status

Use this for service-health checks.

POST /decision/optimize

This is the main endpoint. Send the goal and constraints, then receive:

  • recommended_model
  • fallback_plan
  • decision metadata and explainability fields

Typical request shape:

{
  "goal": "Summarize customer feedback emails into a 5-bullet executive summary",
  "constraints": {
    "min_quality_score": 0.8,
    "max_cost_usd": 0.01
  },
  "workload": {
    "input_tokens": 1200,
    "output_tokens": 300,
    "requests": 1
  },
  "task_type": "summarize"
}
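A minimal Python sketch of assembling this request body. The helper name and parameter list are my own; sending the request and handling a possible 402 response are left to your HTTP client of choice.

```python
import json

API_BASE = "https://api.which-llm.com"

def build_optimize_request(goal, min_quality, max_cost_usd,
                           input_tokens, output_tokens, task_type):
    """Assemble the POST /decision/optimize URL and JSON body
    in the shape documented above."""
    body = {
        "goal": goal,
        "constraints": {
            "min_quality_score": min_quality,
            "max_cost_usd": max_cost_usd,
        },
        "workload": {
            "input_tokens": input_tokens,
            "output_tokens": output_tokens,
            "requests": 1,
        },
        "task_type": task_type,
    }
    return API_BASE + "/decision/optimize", json.dumps(body)

url, body = build_optimize_request(
    "Summarize customer feedback emails into a 5-bullet executive summary",
    min_quality=0.8, max_cost_usd=0.01,
    input_tokens=1200, output_tokens=300, task_type="summarize",
)
```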

If payment is required, the API may first return 402 with fields such as:

  • required_amount
  • currency
  • accepts[].chain
  • accepts[].pay_to
  • payment_reference

Retry the request after external payment handling with:

  • X-Payment-Chain
  • X-Payment-Tx
  • X-Payer
  • X-Payment-Amount
  • X-Payment-Asset

If you have a valid credit token, also send:

  • X-Credit-Token

POST /decision/outcome

Use this after running the recommended model. Report what actually happened so the system can issue a credit token for future use.

Typical request shape:

{
  "decision_id": "d4e5f6a7-b8c9-0d1e-2f3a-4b5c6d7e8f90",
  "option_used": "openai/gpt-4o-mini",
  "actual_cost": 0.008,
  "actual_latency": 650,
  "quality_score": 0.86,
  "success": true
}

Typical response includes:

  • status
  • decision_id
  • outcome_hash
  • refund_credit.credit_token

Request Strategy For Agents

  • Call GET /capabilities or GET /pricing first if you need to discover current payment behavior
  • Use POST /decision/optimize only when you actually need model selection help
  • Reuse the returned decision data rather than repeatedly asking the same question
  • After running the chosen model, call POST /decision/outcome to earn credits
  • Use the host or client payment flow when a paid request requires wallet-backed settlement
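The "reuse the returned decision data" point above can be sketched as a simple in-memory cache keyed by the canonicalized request body; the fetch callback stands in for the real paid call, and all names are my own.

```python
import json

_decision_cache = {}

def cached_decision(request_body, fetch):
    """Return a prior recommendation for an identical request instead
    of paying for POST /decision/optimize again.

    Keys on the sorted JSON serialization so that dicts with the same
    content but different key order hit the same cache entry.
    """
    key = json.dumps(request_body, sort_keys=True)
    if key not in _decision_cache:
        _decision_cache[key] = fetch(request_body)
    return _decision_cache[key]
```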

Troubleshooting

PAYMENT_REQUIRED

The endpoint needs payment first. Read the 402 response, satisfy the payment requirement externally, then retry with payment proof headers if available.

PAYMENT_INVALID

Check:

  • exact amount sent
  • correct chain
  • correct recipient
  • confirmed transaction
  • headers match the actual transaction
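That checklist can be turned into a local cross-check before retrying. This sketch assumes your wallet infrastructure exposes a transaction record with hash, chain, amount, and confirmation fields; all field names here are illustrative.

```python
def payment_proof_matches(headers, tx):
    """Check the retry headers against the settled transaction record
    before resending the request."""
    return (
        headers["X-Payment-Tx"] == tx["hash"]
        and headers["X-Payment-Chain"] == tx["chain"]
        and headers["X-Payment-Amount"] == str(tx["amount"])
        and tx["confirmed"]
    )
```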

NO_FEASIBLE_OPTIONS

Your cost and quality constraints are too strict for the available models. Relax the budget or quality threshold and retry.

RATE_LIMIT_EXCEEDED

Back off and retry later. Use an idempotency key for safe retries.
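A sketch of backoff with an idempotency key. The X-Idempotency-Key header name is an assumption, not confirmed by this page; check GET /capabilities for the header the API actually expects. The send callback stands in for the real HTTP call.

```python
import time
import uuid

class RateLimitError(Exception):
    """Stand-in for a RATE_LIMIT_EXCEEDED (429-style) response."""

def call_with_backoff(send, body, max_attempts=4, base_delay=1.0):
    """Retry a rate-limited call with exponential backoff.

    One idempotency key is generated up front and reused across
    attempts, so repeated sends of the same logical request are safe.
    """
    headers = {"X-Idempotency-Key": str(uuid.uuid4())}
    for attempt in range(max_attempts):
        try:
            return send(body, headers)
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```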
