Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Local Model Optimizer

v1.0.0

Auto-detect hardware (GPU VRAM, system RAM, CPU), recommend optimal local models from Ollama registry, configure Ollama with tuned parameters, and set up hyb...

Security Scan
VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name/description match the code and SKILL.md: the script detects GPU/RAM/CPU, recommends Ollama-compatible models, can pull models and configure OpenClaw routing. The requested capabilities are consistent with the stated purpose.
Instruction Scope
Runtime instructions ask the agent to run the included Python script, which executes system utilities (nvidia-smi/rocm-smi/sysctl), may read OpenClaw logs and config, install Ollama, pull models, write ~/.openclaw/local-model-config.json, and update ~/.openclaw/openclaw.json. Reading OpenClaw logs for cost analysis and writing the OpenClaw config are within the claimed scope, but they are sensitive operations (global config and log access) that the user should expect and review before running.
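As a rough illustration of what those probes involve, the sketch below shells out to whichever vendor tool is present. This is not the skill's actual code; the command names (nvidia-smi, rocm-smi) come from the report, while the function name and fallback message are hypothetical.

```shell
# Illustrative sketch of the GPU probes the report describes.
# Read-only: it only queries system state, never modifies it.
detect_gpu_vram() {
  if command -v nvidia-smi >/dev/null 2>&1; then
    # NVIDIA: report total VRAM per GPU
    nvidia-smi --query-gpu=memory.total --format=csv,noheader
  elif command -v rocm-smi >/dev/null 2>&1; then
    # AMD: report VRAM info
    rocm-smi --showmeminfo vram
  else
    echo "no GPU tooling detected"
  fi
}

detect_gpu_vram
```

On machines without either tool installed, the function simply reports that no GPU tooling was found rather than failing.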
Install Mechanism
The skill itself has no install spec, but on Linux the script installs Ollama by piping a remote shell script into the shell ('curl -fsSL https://ollama.com/install.sh | sh'); on macOS it uses brew. Executing a remote installer via a pipe to sh is higher-risk even when the URL is an official domain; users should inspect the installer before execution.
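A lower-risk pattern is to download the installer to disk, record its checksum, and only execute it after reading it. The sketch below is a generic illustration of that pattern, not part of the skill; fetch_for_review is a hypothetical helper.

```shell
# Download an installer for review instead of piping it straight into sh.
# fetch_for_review is a hypothetical helper, not part of the skill.
fetch_for_review() {
  url="$1"; dest="$2"
  curl -fsSL -o "$dest" "$url" || return 1
  sha256sum "$dest"                 # record the digest for later comparison
  printf 'Review %s, then run: sh %s\n' "$dest" "$dest"
}

# Example (requires network; run manually after reviewing the file):
# fetch_for_review https://ollama.com/install.sh /tmp/ollama-install.sh
```

Keeping the downloaded file also means later runs can be compared against the recorded digest to spot a changed installer.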
Credentials
The skill declares no required env vars or credentials, but it reads and writes user-local OpenClaw files (~/.openclaw/openclaw.json and logs) and may examine system state and driver details. Those accesses can expose sensitive configuration or credentials stored in the agent config. No explicit credential handling is declared, so this implicit access is disproportionate unless the user expects the tool to modify their OpenClaw global config.
Persistence & Privilege
The skill modifies/writes a global OpenClaw config file (~/.openclaw/openclaw.json and local-model-config.json). This is expected for configuring routing/providers, but it does change global agent settings rather than only creating a per-skill artifact. It does not set always:true and does not autonomously enable itself beyond normal skill invocation rules.
What to consider before installing
- Back up ~/.openclaw/openclaw.json (and any OpenClaw logs) before running; the script will write global config files.
- Inspect the included script (scripts/local-model-optimizer.py) yourself; it calls nvidia-smi/rocm-smi/sysctl, runs 'ollama pull', and may modify OpenClaw settings.
- Do not run the automatic 'auto' flow on a production machine without review. Start with 'detect' and 'recommend' to see what the tool finds and suggests.
- On Linux the script may install Ollama via 'curl https://ollama.com/install.sh | sh'; review that installer script before allowing execution, or install Ollama manually.
- Model pulls download potentially large files and use network and disk; check model licenses and free disk space.
- If you store cloud provider credentials or other secrets in OpenClaw config or logs, verify the skill will not overwrite or transmit them (the script declares no external exfiltration endpoints, but it reads and writes the OpenClaw config). Consider running in a sandbox or VM first.
- If uncertain, ask the skill author for an explicit list of file edits and a dry-run mode that reports changes without applying them.
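The backup step above can be as simple as a timestamped copy. In this sketch, backup_file is a hypothetical helper; the config path is the one named in the report.

```shell
# Timestamped backup before letting the skill touch a config file.
# backup_file is a hypothetical helper; the path comes from the report above.
backup_file() {
  src="$1"
  [ -f "$src" ] || { echo "nothing to back up at $src"; return 1; }
  cp -p "$src" "$src.bak-$(date +%Y%m%d-%H%M%S)"
}

# Example:
# backup_file ~/.openclaw/openclaw.json
```

Restoring after an unwanted change is then a single copy back from the most recent .bak-* file.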


latest: vk9737k6zhfb89vax5a99pf1zb9849ddc

License

MIT-0
Free to use, modify, and redistribute. No attribution required.
