Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

OnDeckLLM

v1.4.3

Localhost dashboard for managing LLM providers, model routing, and batting-order fallback chains. Auto-discovers providers from OpenClaw config or works stan...

Security Scan

VirusTotal: Pending
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (local dashboard, provider discovery, model routing) match the instructions and the included helper (scripts/status.js). The SKILL.md explicitly tells the agent to install the external npm package (ondeckllm), which is where the dashboard functionality would live. The requested artifacts (no env vars, references to ~/.openclaw and ~/.ondeckllm) are consistent with a local LLM provider manager.
Instruction Scope
Runtime instructions are narrowly scoped: install the ondeckllm npm package, run the daemon, check status with node scripts/status.js, and open http://localhost:3900. The SKILL.md does reference reading and syncing ~/.openclaw/openclaw.json and writing data to ~/.ondeckllm/, which is expected for a config/dashboard tool. There are no vague instructions to exfiltrate data or read arbitrary system files.
Install Mechanism
The skill is instruction-only but directs users to run `npm install -g ondeckllm`. Installing a global npm package is a normal way to provide a local dashboard, but npm packages run arbitrary code at install/run time — review the package before installing (audit, verify publisher, inspect package contents). The registry metadata does not embed an install artifact; the SKILL.md's install command is the sole install guidance.
Credentials
No environment variables or external credentials are requested by the skill bundle. The SKILL.md mentions using PORT and configuring a remote Ollama URL via the UI; both are reasonable for this tool. Note: the config files (~/.openclaw/openclaw.json and ~/.ondeckllm/) may contain provider API keys or other sensitive entries. The dashboard's ability to read and write those files is consistent with its purpose, but it is a sensitive operation the user should consent to.
Persistence & Privilege
The skill is not set to always:true and is user-invocable. It does not request to modify other skills or system-wide agent settings. It stores data under ~/.ondeckllm/ per SKILL.md which is appropriate for a local dashboard.
Assessment
This skill appears coherent for a local LLM dashboard. Before installing or using it:

  • Review the ondeckllm npm package (publisher, package contents, recent activity), because global npm packages can execute code.
  • Inspect ~/.openclaw/openclaw.json and be aware the dashboard will read, and can write, this file (it may contain API keys).
  • Check ~/.ondeckllm/ after first run to see what is logged (usage.jsonl may contain session metadata).
  • Consider running the tool in an isolated environment or container if you are unsure about trusting the npm package.
Patterns worth reviewing

These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.

scripts/status.js:15
Shell command execution detected (child_process).


latest · vk974yf3qz8nn525yb76cazh73s83fpvf

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

OnDeckLLM — AI Model Lineup Manager

Prerequisites

npm install -g ondeckllm

Verify: ondeckllm --help or check install with npm list -g ondeckllm.

What It Does

OnDeckLLM is a localhost web dashboard that:

  • Auto-discovers LLM providers from OpenClaw config (~/.openclaw/openclaw.json)
  • Manages a batting-order priority list for model routing (primary + fallbacks)
  • Tests provider health and latency
  • Syncs model lineup back to OpenClaw config with one click
  • Tracks session costs (JSONL usage log + Chart.js)
  • Supports Anthropic, OpenAI, Google AI, Groq, xAI/Grok, Ollama (local + remote), Mistral, DeepSeek, Together, OpenRouter
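The "batting order" idea above reduces to trying providers in priority order and falling back when one fails. A minimal sketch, assuming provider objects with a name and an async call() method (illustrative interfaces, not OnDeckLLM's actual internals):

```javascript
// Try each provider in lineup order; the first one that succeeds wins.
// Failures are collected so the final error explains the whole chain.
async function routeWithFallback(lineup, prompt) {
  const errors = [];
  for (const provider of lineup) {
    try {
      return { provider: provider.name, text: await provider.call(prompt) };
    } catch (err) {
      errors.push(`${provider.name}: ${err.message}`); // note failure, try next
    }
  }
  throw new Error(`all providers failed: ${errors.join("; ")}`);
}
```

The design choice worth noting: errors are accumulated rather than swallowed, so when the whole lineup is down the user sees why each provider failed.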

Starting the Dashboard

# Default port 3900
ondeckllm

# Custom port
PORT=3901 ondeckllm

The dashboard runs at http://localhost:3900 (or custom port).

As a Background Service

Use the helper script to check status or start OnDeckLLM:

node scripts/status.js

Output: JSON with running (bool), port, url, and pid if active.

Agent Workflow

Check if OnDeckLLM is running

node scripts/status.js

Open the dashboard for the user

Direct them to http://localhost:3900 (or the configured port/URL).

Provider management

OnDeckLLM reads provider config from ~/.openclaw/openclaw.json automatically. Changes made in the dashboard sync back to OpenClaw config. No separate API or CLI commands needed — it's a web UI tool.

Configuration

OnDeckLLM stores its data in ~/.ondeckllm/:

  • config.json — provider settings, port, Ollama URL
  • usage.jsonl — cost tracking log
  • profiles/ — saved batting-order profiles

Remote Ollama

To connect to a remote Ollama instance, configure in the dashboard UI: Settings → Ollama → Remote URL (e.g., http://192.168.55.80:11434)
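If the dashboard persists that setting to ~/.ondeckllm/config.json, the entry might look like the fragment below. The key names are an assumption; only the UI path (Settings → Ollama → Remote URL) is documented.

```json
{
  "port": 3900,
  "ollama": { "remoteUrl": "http://192.168.55.80:11434" }
}
```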

Files

2 total
