Skill v2.0.0
ClawScan security
Hardware LLM Optimizer v2 (llmfit) · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Suspicious · Apr 17, 2026, 9:57 AM
- Verdict
- suspicious
- Confidence
- medium
- Model
- gpt-5-mini
- Summary
- The skill's functionality (hardware detection + model recommendations) matches its description, but the documentation encourages installing 'llmfit' via curl | sh from an untrusted domain and asserts an installed binary path without providing an install spec; this is disproportionate and risky.
- Guidance
- This skill's detection code (detect.py) appears benign and aligned with the description: it inspects local hardware via psutil and nvidia-smi and prints recommendations. However, the SKILL.md asks you to install 'llmfit' by piping a script from https://llmfit.axjns.dev/install.sh directly into sh; that pattern is high-risk because it runs arbitrary code from an unvetted host. Before installing or running anything:
  1. Do NOT run curl ... | sh without inspecting the script; fetch the URL and review its contents first.
  2. Prefer installing llmfit from a known official source (GitHub releases, vendor homepage), or verify the domain and script integrity.
  3. Run detect.py locally in a restricted environment if you only want hardware info (it has no network calls).
  4. Make sure you understand any model downloads (GGUF/ollama) and avoid running unfamiliar binaries as root.
  If the maintainer can provide a verified upstream URL (official project repo/releases) or an explicit install spec using a reputable package host, that would reduce risk and could change this assessment.
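The fetch-then-verify workflow recommended above can be sketched in Python. This is a minimal illustration, not part of the skill; the checksum in the usage comment is a placeholder, since no published hash exists for this installer (a maintainer would have to supply one):

```python
import hashlib
import urllib.request


def verify_script(script_bytes: bytes, expected_sha256: str) -> bool:
    """Return True only if the installer's SHA-256 matches a published value."""
    return hashlib.sha256(script_bytes).hexdigest() == expected_sha256


def fetch_for_review(url: str, dest: str) -> None:
    """Download the installer to a local file for manual inspection,
    instead of piping it straight into sh."""
    urllib.request.urlretrieve(url, dest)


# Example flow (the checksum below is hypothetical, not a real published value):
# fetch_for_review("https://llmfit.axjns.dev/install.sh", "install.sh")
# with open("install.sh", "rb") as f:
#     ok = verify_script(f.read(), "<maintainer-published sha256>")
# Only run the reviewed file after inspection and a checksum match.
```

The key point is that download and execution are separated: nothing runs until the script has been read and its hash checked against an out-of-band source.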
Review Dimensions
- Purpose & Capability
- note: The skill claims to detect hardware and recommend LLMs/quantization, and the included detect.py implements that behavior (psutil, nvidia-smi checks, /proc/version). However, the SKILL.md centers runtime usage on an external tool 'llmfit' (commands like 'llmfit recommend') while the package provides no install mechanism for llmfit and even asserts llmfit is already at /usr/local/bin; relying on an external binary that isn't provided weakens coherence.
- Instruction Scope
- ok: Runtime instructions and detect.py stay within the stated purpose: they inspect local system state (CPU, RAM, nvidia-smi, /proc/version), produce recommendations, and reference running local model runtimes (ollama, llama.cpp). The SKILL.md does suggest running network-facing installs and model downloads, but it does not instruct reading unrelated secrets or sending detected data to external endpoints.
- Install Mechanism
- concern: There is no formal install spec, but SKILL.md recommends installing llmfit with: curl -fsSL https://llmfit.axjns.dev/install.sh | sh. That is a direct download-and-execute from an unrecognized domain (axjns.dev): a high-risk practice. The skill itself does not include code to fetch that URL, but recommending it without provenance is disproportionate and potentially dangerous.
- Credentials
- ok: The skill requests no environment variables, no credentials, and detect.py only queries local system info. There is no inappropriate credential access requested.
- Persistence & Privilege
- ok: The skill does not request always:true and is not marked to be force-included. It does not attempt to modify other skills or system-wide configs. Normal autonomous invocation is allowed (platform default).
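The detection behavior the review describes (CPU/RAM inspection plus an nvidia-smi probe) can be sketched with the standard library alone. This is an illustrative approximation of what such a script does, not the skill's actual detect.py, which uses psutil:

```python
import os
import shutil
import subprocess


def detect_hardware():
    """Collect basic local hardware facts: CPU count, total RAM (Linux),
    and NVIDIA GPU details if nvidia-smi is available. No network calls."""
    info = {"cpu_count": os.cpu_count(), "ram_kb": None, "gpu": None}

    # RAM: parse /proc/meminfo on Linux; leave None on other platforms.
    try:
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    info["ram_kb"] = int(line.split()[1])
                    break
    except OSError:
        pass

    # GPU: only query NVIDIA details if nvidia-smi is on PATH.
    smi = shutil.which("nvidia-smi")
    if smi:
        result = subprocess.run(
            [smi, "--query-gpu=name,memory.total", "--format=csv,noheader"],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            info["gpu"] = result.stdout.strip()

    return info


if __name__ == "__main__":
    for key, value in detect_hardware().items():
        print(f"{key}: {value}")
```

A script of this shape only reads local state, which is why the review can treat the detection step as benign while flagging the separate install instructions.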
