ModelReady

Audited by ClawScan on May 10, 2026.

Overview

ModelReady mostly does what it says, but it starts a long-running model server that binds to all network interfaces by default, which may expose the endpoint beyond your machine.

Use this only if you intend to run a vLLM server from chat. Prefer binding to localhost, for example host=127.0.0.1 or set_ip ip=127.0.0.1, unless you intentionally want LAN access. Stop the server when finished, confirm your local python3/vLLM installation is trusted, and avoid extra= flags unless you understand their effect.
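
Before relying on the endpoint, a quick reachability probe can confirm the server is up and answering on localhost. This is a hedged sketch: the port default of 8000 and the /v1/models path are assumptions based on vLLM's usual OpenAI-compatible server, not values taken from the skill itself.

```shell
# Probe the local endpoint. PORT defaults to 8000 here, which is vLLM's
# usual default but may differ in your configuration.
PORT="${PORT:-8000}"
if curl -fsS --max-time 2 "http://127.0.0.1:${PORT}/v1/models" >/dev/null 2>&1; then
    echo "server reachable on 127.0.0.1:${PORT}"
else
    echo "no server responding on 127.0.0.1:${PORT}"
fi
```

The same probe run from another machine against your IP is a cheap way to check whether the server is exposed beyond localhost.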

Findings (4)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Server binds to all network interfaces by default

What this means

Other machines on the same reachable network may be able to access the OpenAI-compatible model endpoint and consume local compute unless firewalling or host settings prevent it.

Why it was flagged

The vLLM server binds to 0.0.0.0 by default, meaning it listens on all network interfaces rather than localhost only; the SKILL.md frames the server as local and does not warn about this default exposure.

Skill content
DEFAULT_HOST="${DEFAULT_HOST:-0.0.0.0}" ... ARGS=(--model "$REPO" --host "$HOST" --port "$PORT" ...)
Recommendation

Default to 127.0.0.1, clearly document the exposure tradeoff, and require an explicit user choice before binding to 0.0.0.0.
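
A minimal sketch of that recommendation, reusing the DEFAULT_HOST/HOST names seen in the skill content; the ALLOW_ALL_INTERFACES opt-in variable is hypothetical, not part of the skill.

```shell
# Default to loopback; require an explicit opt-in before exposing all interfaces.
DEFAULT_HOST="${DEFAULT_HOST:-127.0.0.1}"
HOST="${HOST:-$DEFAULT_HOST}"

if [ "$HOST" = "0.0.0.0" ] && [ "${ALLOW_ALL_INTERFACES:-no}" != "yes" ]; then
    echo "Refusing to bind to all interfaces; set ALLOW_ALL_INTERFACES=yes to confirm." >&2
    exit 1
fi
echo "binding to $HOST"
```

Gating on a second, deliberately named variable keeps a stray host=0.0.0.0 from silently widening exposure.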

Finding 2: Long-running background server process

What this means

The model server may continue consuming CPU/GPU/memory and remain reachable until the user stops it.

Why it was flagged

The skill deliberately starts vLLM as a background process and records a PID file, so it can keep running after the initial command finishes.

Skill content
nohup python3 -m vllm.entrypoints.openai.api_server \
    "${ARGS[@]}" \
    >"$LOG_FILE" 2>&1 &

echo $! > "$PID_FILE"
Recommendation

Use the stop command when finished, check status/logs if unsure, and avoid starting it on shared machines without understanding the resource and network impact.
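
A stop helper along these lines would cover the first point, built around the same PID-file pattern the skill uses; the file path and function name here are illustrative.

```shell
# Stop the background server recorded in the PID file (path is an assumption).
PID_FILE="${PID_FILE:-/tmp/modelready.pid}"

stop_server() {
    if [ ! -f "$PID_FILE" ]; then
        echo "No PID file; server does not appear to be running."
        return 0
    fi
    pid="$(cat "$PID_FILE")"
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"    # polite SIGTERM; escalate to SIGKILL manually if it ignores this
        echo "Sent SIGTERM to PID $pid."
    else
        echo "Stale PID file: process $pid is not running."
    fi
    rm -f "$PID_FILE"
}
```

Removing the PID file even when the process is already gone keeps a stale file from masking the server's real state on the next status check.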

Finding 3: Undeclared python3/vLLM dependency

What this means

The skill may fail at runtime, or it may silently use whatever vLLM installation already exists on the machine; that installation's version and behavior fall outside the reviewed artifacts.

Why it was flagged

The script depends on python3 and the vLLM package, while the provided registry requirements list only bash and curl and there is no install spec or pinned dependency version.

Skill content
nohup python3 -m vllm.entrypoints.openai.api_server
Recommendation

Declare python3 and vLLM explicitly, provide version guidance or a pinned install path, and document that users should trust the local vLLM installation before use.
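
One way to surface the gap before launch is a preflight check that fails fast instead of starting a doomed server; the function name and messages are illustrative.

```shell
# Verify python3 and vLLM are present before attempting to launch anything.
check_deps() {
    command -v python3 >/dev/null 2>&1 || { echo "python3 not found on PATH" >&2; return 1; }
    python3 -c "import vllm" >/dev/null 2>&1 || { echo "vLLM is not importable" >&2; return 1; }
    echo "python3 and vLLM found"
}
```

A stricter version could also pin or floor the vLLM version via python3 -c "import vllm; print(vllm.__version__)" and compare against a documented minimum.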

Finding 4: Undocumented extra= flag passthrough

What this means

Advanced vLLM options could change networking, authentication, model loading, or runtime behavior in ways not covered by the SKILL.md instructions.

Why it was flagged

The script accepts an undocumented extra= argument and passes it through as additional vLLM server flags.

Skill content
EXTRA="${KV[extra]:-}"
if [[ -n "$EXTRA" ]]; then
    EXTRA_ARR=($EXTRA)
    ARGS+=("${EXTRA_ARR[@]}")
fi
Recommendation

Document the passthrough clearly, validate or allowlist safer options, and require explicit user intent for high-impact flags.
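
An allowlist filter along these lines would implement the middle recommendation. The permitted flag names below are examples rather than a vetted list, and the sketch assumes flags arrive as --flag=value tokens without embedded spaces.

```shell
# Keep only extra= flags that appear on an allowlist; drop anything else with a warning.
ALLOWED_EXTRA="--max-model-len --dtype --gpu-memory-utilization"

filter_extra() {
    kept=""
    for tok in $1; do              # assumes --flag=value tokens, no embedded spaces
        flag="${tok%%=*}"          # strip =value to compare the flag name alone
        case " $ALLOWED_EXTRA " in
            *" $flag "*) kept="$kept $tok" ;;
            *) echo "Dropping disallowed flag: $tok" >&2 ;;
        esac
    done
    printf '%s\n' "${kept# }"
}
```

High-impact options such as --host or --api-key would then require an explicit, documented override path rather than riding along in extra=.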