Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Local Model Optimizer

v1.0.0

Auto-detect hardware (GPU VRAM, system RAM, CPU), recommend optimal local models from the Ollama registry, configure Ollama with tuned parameters, and set up hybrid cloud/local routing.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for stevojarvisai-star/local-model-optimizer.

Prompt Preview: Install & Setup
Install the skill "Local Model Optimizer" (stevojarvisai-star/local-model-optimizer) from ClawHub.
Skill page: https://clawhub.ai/stevojarvisai-star/local-model-optimizer
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install local-model-optimizer

ClawHub CLI


npx clawhub@latest install local-model-optimizer
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name/description match the code and SKILL.md: the script detects GPU/RAM/CPU, recommends Ollama-compatible models, can pull models and configure OpenClaw routing. The requested capabilities are consistent with the stated purpose.
Instruction Scope
Runtime instructions ask the agent to run the included Python script which executes system utilities (nvidia-smi/rocm-smi/sysctl), may read OpenClaw logs/config, install Ollama, pull models, and write ~/.openclaw/local-model-config.json and update ~/.openclaw/openclaw.json. Reading OpenClaw logs for cost analysis and writing the OpenClaw config are within the claimed scope, but these are sensitive operations (global config/log access) that should be expected and reviewed by the user before running.
Install Mechanism
The skill itself has no install spec, but the script will install Ollama on Linux by executing a remote shell script via 'curl -fsSL https://ollama.com/install.sh | sh' (and uses brew on macOS). Executing a remote installer via a pipe to sh is higher-risk even when the URL is an official domain; users should inspect the installer before execution.
Credentials
The skill declares no required env vars or credentials, but it reads and writes user-local OpenClaw files (~/.openclaw/openclaw.json and logs) and may examine system state and driver details. Those accesses can expose sensitive configuration or credentials stored in the agent config. No explicit credential handling is declared, so this implicit access is disproportionate unless the user expects the tool to modify their OpenClaw global config.
Persistence & Privilege
The skill modifies/writes a global OpenClaw config file (~/.openclaw/openclaw.json and local-model-config.json). This is expected for configuring routing/providers, but it does change global agent settings rather than only creating a per-skill artifact. It does not set always:true and does not autonomously enable itself beyond normal skill invocation rules.
What to consider before installing
What to check before installing/using this skill:

  • Back up ~/.openclaw/openclaw.json (and any OpenClaw logs) before running — the script will write global config files.
  • Inspect the included script (scripts/local-model-optimizer.py) yourself; it will call nvidia-smi/rocm-smi/sysctl, run 'ollama pull', and may modify OpenClaw settings.
  • Do not run the automatic 'auto' flow on a production machine without review. Start with 'detect' and 'recommend' to see what the tool finds and suggests.
  • The script may install Ollama via 'curl https://ollama.com/install.sh | sh' on Linux — review that installer script on ollama.com before allowing execution, or install Ollama manually.
  • Model pulls will download potentially large files and use network/disk; check model licenses and disk space.
  • If you store cloud provider credentials or other secrets in OpenClaw config or logs, verify the skill will not overwrite or transmit them (the script does not declare external exfiltration endpoints, but it reads/writes the OpenClaw config). Consider running in a sandbox or VM first.
  • If uncertain, ask the skill author for an explicit list of file edits and a dry-run mode that only reports changes without applying them.
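The backup step above is easy to script. A minimal sketch — the ~/.openclaw path comes from this page; the `backup` helper and its timestamped `.bak` naming are my own convention, not part of the skill:

```python
import shutil
import time
from pathlib import Path

def backup(path):
    """Copy a config file to a timestamped .bak sibling before the skill runs."""
    path = Path(path)
    if not path.exists():
        return None
    bak = path.with_name(f"{path.name}.{time.strftime('%Y%m%d-%H%M%S')}.bak")
    shutil.copy2(path, bak)  # copy2 preserves mtimes and permissions
    return bak

# e.g. backup(Path.home() / ".openclaw" / "openclaw.json")
```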

Like a lobster shell, security has layers — review code before you run it.

latest: vk9737k6zhfb89vax5a99pf1zb9849ddc
85 downloads · 0 stars · 1 version
Updated 3w ago
v1.0.0
MIT-0

Local Model Optimizer

Auto-detect hardware → recommend models → configure Ollama → set up hybrid cloud/local routing.

Quick Start

# Full auto-setup: detect hardware, install Ollama, recommend + pull model, configure routing
python3 scripts/local-model-optimizer.py auto

# Hardware detection only
python3 scripts/local-model-optimizer.py detect

# Recommend models for your hardware (no install)
python3 scripts/local-model-optimizer.py recommend

# Set up hybrid routing (cloud for complex tasks, local for simple ones)
python3 scripts/local-model-optimizer.py routing

# Cost comparison: local vs cloud
python3 scripts/local-model-optimizer.py cost

Commands

auto — Full Automated Setup

  1. Detects GPU (NVIDIA/AMD/Apple Silicon), VRAM, RAM, CPU cores
  2. Queries Ollama model registry for compatible models
  3. Recommends top 3 models ranked by benchmark/size ratio
  4. Installs Ollama if not present
  5. Pulls recommended model
  6. Configures OpenClaw provider entry
  7. Sets up hybrid routing rules
  8. Runs verification test

detect — Hardware Detection

Reports:

  • GPU model, VRAM, driver version (NVIDIA/AMD/Apple)
  • System RAM (total/available)
  • CPU model, core count, architecture
  • Estimated model size capacity
  • Compatibility tier: Tiny (≤4GB) / Small (4-8GB) / Medium (8-16GB) / Large (16-32GB) / XL (32GB+)
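The tier thresholds above map directly to code. A sketch of how detection might work — not the skill's actual implementation — querying NVIDIA VRAM via nvidia-smi and classifying it:

```python
import shutil
import subprocess

def detect_nvidia_vram_gb():
    """Return total VRAM in GB via nvidia-smi, or None if no NVIDIA GPU/driver."""
    if shutil.which("nvidia-smi") is None:
        return None
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.total", "--format=csv,noheader,nounits"],
        capture_output=True, text=True,
    )
    if out.returncode != 0 or not out.stdout.strip():
        return None
    return int(out.stdout.splitlines()[0].strip()) / 1024  # nvidia-smi reports MiB

def tier(vram_gb):
    """Map VRAM to the compatibility tiers listed above."""
    if vram_gb <= 4:
        return "Tiny"
    if vram_gb <= 8:
        return "Small"
    if vram_gb <= 16:
        return "Medium"
    if vram_gb <= 32:
        return "Large"
    return "XL"
```

AMD (rocm-smi) and Apple Silicon (sysctl) would need separate branches; this shows only the NVIDIA path.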

recommend — Model Recommendations

Based on hardware tier, recommends from:

| Tier   | VRAM    | Models                                        |
|--------|---------|-----------------------------------------------|
| Tiny   | ≤4GB    | Gemma 4 E2B, Phi-3.5 Mini, Qwen2.5-3B         |
| Small  | 4-8GB   | Gemma 4 E4B, Llama 3.1 8B, Mistral 7B         |
| Medium | 8-16GB  | Gemma 4 12B, Llama 3.1 8B Q8, CodeGemma       |
| Large  | 16-32GB | Gemma 4 27B, Llama 3.1 70B Q4, Mixtral 8x7B   |
| XL     | 32GB+   | Gemma 4 27B Q8, Llama 3.1 70B Q8, DeepSeek V2 |

See references/model-matrix.md for full benchmark comparisons.
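The "ranked by benchmark/size ratio" rule from the auto flow can be sketched as a filter-then-sort. The model names, scores, and sizes below are illustrative placeholders, not registry data:

```python
# Hypothetical registry entries: name, aggregate benchmark score, download size.
CANDIDATES = [
    {"name": "llama3.1:8b", "benchmark": 68.0, "size_gb": 4.7},
    {"name": "mistral:7b",  "benchmark": 62.0, "size_gb": 4.1},
    {"name": "tiny-model",  "benchmark": 55.0, "size_gb": 2.5},
    {"name": "big-model",   "benchmark": 80.0, "size_gb": 40.0},
]

def recommend(vram_gb, candidates=CANDIDATES, top_n=3):
    """Keep models that fit in VRAM, rank by benchmark-per-GB, return the top N."""
    fitting = [m for m in candidates if m["size_gb"] <= vram_gb]
    ranked = sorted(fitting, key=lambda m: m["benchmark"] / m["size_gb"], reverse=True)
    return ranked[:top_n]
```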

routing — Hybrid Cloud/Local Routing

Configures OpenClaw to route requests intelligently:

  • Local: Simple Q&A, summarization, code completion, memory operations
  • Cloud: Complex reasoning, multi-step planning, code generation, creative writing

Options:

  • --strategy cost — minimize API spend (prefer local)
  • --strategy quality — maximize output quality (prefer cloud)
  • --strategy balanced — default, smart routing based on task complexity
  • --cloud-provider <name> — which cloud provider for fallback (default: anthropic)
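A minimal sketch of the strategy logic: the task categories come from this page, but the `route` function and its task labels are assumptions. The real tool presumably scores task complexity; here `balanced` simply follows the category tables:

```python
# Task categories listed on this page.
LOCAL_TASKS = {"qa", "summarization", "code_completion", "memory"}
CLOUD_TASKS = {"reasoning", "planning", "code_generation", "creative_writing"}

def route(task, strategy="balanced"):
    """Decide whether a task goes to the local model or the cloud fallback."""
    if strategy == "cost":
        # Prefer local: only explicitly cloud-tier tasks leave the machine.
        return "cloud" if task in CLOUD_TASKS else "local"
    # "quality" and "balanced" both prefer cloud for anything not known-local;
    # a real implementation would score complexity instead of using fixed sets.
    return "local" if task in LOCAL_TASKS else "cloud"
```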

cost — Cost Analysis

Calculates monthly savings based on:

  • Current API usage pattern (reads from OpenClaw logs if available)
  • Estimated electricity cost for local inference
  • Token throughput comparison
  • Break-even analysis for hardware investment
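The break-even analysis above is simple arithmetic; all figures in the example are made-up assumptions, not measurements:

```python
def break_even_months(hardware_cost, monthly_api_spend, monthly_electricity):
    """Months until hardware pays for itself; None if local never saves money."""
    monthly_savings = monthly_api_spend - monthly_electricity
    if monthly_savings <= 0:
        return None
    return hardware_cost / monthly_savings

# Example: a $1,200 GPU vs $150/mo API spend and $20/mo electricity:
# 1200 / (150 - 20) ≈ 9.2 months to break even.
```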

Configuration

The optimizer writes to ~/.openclaw/local-model-config.json:

{
  "hardware": { "gpu": "...", "vram_gb": 16, "ram_gb": 32, "tier": "Large" },
  "model": { "name": "gemma4:27b", "quantization": "Q4_K_M", "size_gb": 15.2 },
  "routing": { "strategy": "balanced", "local_tasks": [...], "cloud_tasks": [...] },
  "performance": { "tokens_per_sec": 42, "first_token_ms": 180 }
}
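A consumer of that file should read it defensively, since the skill may not have run yet. A sketch assuming only the keys shown above (the `load_local_model_config` helper is mine, not part of the skill):

```python
import json
from pathlib import Path

def load_local_model_config(path=None):
    """Load local-model-config.json, returning {} if the file is absent or invalid."""
    if path is None:
        path = Path.home() / ".openclaw" / "local-model-config.json"
    try:
        return json.loads(Path(path).read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return {}

cfg = load_local_model_config()
model_name = cfg.get("model", {}).get("name", "no local model configured")
```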
