Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Hardware LLM Optimizer v2 (llmfit)

v2.0.0

An AI hardware LLM recommendation tool built on the llmfit core. It automatically detects CPU/GPU/RAM/VRAM, then recommends the best-fitting large models, quantization schemes, and estimated speeds. Supports a library of 100+ models, with a built-in TUI and hardware simulation.

by SMS (@smseow001)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for smseow001/hardware-llm-optimizer-v2.

Prompt preview (Install & Setup):
Install the skill "Hardware LLM Optimizer v2 (llmfit)" (smseow001/hardware-llm-optimizer-v2) from ClawHub.
Skill page: https://clawhub.ai/smseow001/hardware-llm-optimizer-v2
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install hardware-llm-optimizer-v2

ClawHub CLI


npx clawhub@latest install hardware-llm-optimizer-v2
Security Scan

VirusTotal: Suspicious (view report)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill claims to detect hardware and recommend LLMs/quantization, and the included detect.py implements that behavior (psutil, nvidia-smi checks, /proc/version). However, the SKILL.md centers runtime usage on an external tool 'llmfit' (commands like 'llmfit recommend') while the package provides no install mechanism for llmfit and even asserts llmfit is already at /usr/local/bin; relying on an external binary that isn't provided weakens coherence.
Instruction Scope
Runtime instructions and detect.py stay within the stated purpose: they inspect local system state (CPU, RAM, nvidia-smi, /proc/version), produce recommendations, and reference running local model runtimes (ollama, llama.cpp). The SKILL.md does suggest running network-facing installs and model downloads, but it does not instruct reading unrelated secrets or sending detected data to external endpoints.
Install Mechanism
There is no formal install spec, but SKILL.md recommends installing llmfit with: curl -fsSL https://llmfit.axjns.dev/install.sh | sh. That is a direct download-and-execute from an unrecognized domain (axjns.dev) — high-risk practice. The skill itself does not include code to fetch that URL, but recommending it without provenance is disproportionate and potentially dangerous.
Credentials
The skill requests no environment variables, no credentials, and detect.py only queries local system info. There is no inappropriate credential access requested.
Persistence & Privilege
The skill does not request always:true and is not marked to be force-included. It does not attempt to modify other skills or system-wide configs. Normal autonomous invocation is allowed (platform default).
What to consider before installing
This skill's detection code (detect.py) appears benign and aligned with the description: it inspects local hardware via psutil and nvidia-smi and prints recommendations. However, the SKILL.md asks you to install 'llmfit' by piping a script from https://llmfit.axjns.dev/install.sh directly into sh, a high-risk pattern because it runs arbitrary code from an unvetted host. Before installing or running anything:

1. Do NOT run curl ... | sh without inspecting the script; fetch the URL and review its contents first.
2. Prefer installing llmfit from a known official source (GitHub releases, vendor homepage), or verify the domain and script integrity.
3. Run detect.py locally in a restricted environment if you only want hardware info (it has no network calls).
4. Make sure you understand any model downloads (GGUF/Ollama) and avoid running unfamiliar binaries as root.

If the maintainer can provide a verified upstream URL (official project repo/releases) or an explicit install spec using a reputable package host, that would reduce risk and could change this assessment.
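
As a concrete illustration of point (1), a safer sequence looks like this (a sketch using the URL given in the SKILL.md; you still have to judge the script's contents yourself):

# Download the installer to a file instead of piping it straight into sh.
curl -fsSL https://llmfit.axjns.dev/install.sh -o install.sh

# Read it: look for further curl/wget calls, sudo, or writes outside its own directory.
less install.sh

# Run it explicitly only if you are satisfied with what it does.
sh install.sh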

Like a lobster shell, security has layers — review code before you run it.

Tags: ai, gpu, hardware, latest, llm, llmfit, nvidia, ollama, optimization, quantization
68 downloads
0 stars
1 version
Updated 1w ago
v2.0.0
MIT-0

Hardware LLM Optimizer v2.0

An intelligent hardware-aware LLM recommendation tool built on llmfit

Installation status

llmfit is installed at: /usr/local/bin/llmfit

Quick usage

Use this when the user asks things like "which large models can I run", "recommend an LLM", or "detect my hardware":

1. View recommended models

llmfit recommend

2. View all recommendations (JSON format, easy to parse; see the jq sketch after the field table below)

llmfit recommend --json

3. Filter by use case

llmfit recommend --use-case coding
llmfit recommend --use-case chat
llmfit recommend --use-case general
llmfit recommend --use-case embedding

4. Hardware simulation (try out different configurations)

# Simulate 16 GB of VRAM
llmfit recommend --memory 16G

# Simulate 32 GB of VRAM + 64 GB of RAM
llmfit recommend --memory 32G --ram 64G

5. Interactive TUI (requires a terminal)

llmfit

Output field reference

| Field | Meaning |
| --- | --- |
| name | Model name |
| parameter_count | Parameter count |
| best_quant | Recommended quantization scheme |
| score | Overall score (higher is better) |
| estimated_tps | Estimated speed (tok/s) |
| memory_required_gb | Required VRAM (GB) |
| run_mode | Run mode (GPU/CPU/MoE) |
| fit_level | Fit level (Perfect/Good/Marginal) |
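
The exact JSON shape of llmfit recommend --json is not documented here; assuming it is an array of objects carrying the fields above, a one-liner like the following pulls out the top recommendation (check the real shape first with llmfit recommend --json | head):

# Sketch: print the top recommendation's name, quantization, and estimated speed (requires jq).
llmfit recommend --json | jq -r '.[0] | "\(.name)  \(.best_quant)  ~\(.estimated_tps) tok/s"'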

Quantization scheme reference

| Quantization | Quality | Speed | Typical scenario |
| --- | --- | --- | --- |
| FP16 | Highest | Slowest | Large-VRAM GPUs |
| Q8_0 | Very high | Fairly fast | Medium VRAM |
| Q6_K | | | 6-8 GB VRAM |
| Q4_K_M | Medium-high | Fastest | 4-6 GB VRAM |
| Q2_K | | Fastest | <4 GB VRAM |
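
As a rough sanity check on these VRAM ranges (an approximation, not something llmfit reports): GGUF weight size is roughly parameter count × bits per weight / 8, plus context/KV-cache overhead. With the ~4.8 bits per weight commonly cited for Q4_K_M:

# Ballpark only: 7B parameters at ~4.8 bits/weight, ignoring KV cache and runtime overhead.
python3 -c "print(f'{7e9 * 4.8 / 8 / 1e9:.1f} GB of weights for a 7B model at Q4_K_M')"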

Local run commands

Install an Ollama model

ollama run <model-name>
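
Ollama model tags usually encode the quantization level, so you can pick one that matches llmfit's best_quant suggestion. The tag below is illustrative; confirm the exact name in the Ollama model library before pulling:

# Illustrative tag; verify it exists in the Ollama model library first.
ollama pull llama3:8b-instruct-q4_K_M
ollama run llama3:8b-instruct-q4_K_M "Say hello in one sentence."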

Use llama.cpp

# After downloading a GGUF file (the binary name depends on your build: llama-cli in current releases, main in older ones)
./llama-cli -m <model.gguf> --prompt <prompt>
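
A slightly fuller invocation sketch (flag names are from upstream llama.cpp; the model path and values are placeholders to adapt):

# -c sets the context window; -ngl offloads layers to the GPU (requires a GPU-enabled build).
./llama-cli -m ./models/model-q4_k_m.gguf -p "Summarize what quantization does." -c 4096 -ngl 99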

Minimum configuration reference (from llmfit)

| VRAM | Recommended models | Quantization |
| --- | --- | --- |
| 2 GB | Phi-3-mini, Gemma-2B | Q4 |
| 4 GB | Llama3-8B, Qwen-7B | Q4 |
| 6 GB | Llama2-13B, Mistral-7B | Q4/Q6 |
| 8 GB | Llama2-13B, Yi-9B | Q5/Q6 |
| 12 GB | Llama2-34B | Q4 |
| 16 GB | Llama2-34B, Qwen-72B | Q4 |
| 24 GB+ | 70B-class large models | Q4/Q8 |
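
To see which row applies to your machine without running llmfit, you can query the hardware directly with standard tools (nvidia-smi is only available where the NVIDIA driver is installed):

# Per-GPU total and free VRAM (NVIDIA only).
nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv,noheader

# System RAM and CPU core count.
free -h
nproc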

Install llmfit (if needed)

curl -fsSL https://llmfit.axjns.dev/install.sh | sh

Feature comparison

| Feature | v1.0 | v2.0 (llmfit) |
| --- | --- | --- |
| Model library | Manual lookup table | 100+ models, auto-matched |
| Quantization recommendation | Simple estimate | Smart optimum |
| Speed estimation | | |
| Download sources | | ✅ GGUF |
| Hardware simulation | | |
| TUI | | |
| Multi-GPU | | |
| MoE support | | |

Powered by llmfit | Updated: 2026-04-17
