Hardware LLM Optimizer

v1.1.0

Auto-detect PC hardware (CPU/GPU/RAM/VRAM) -> Determine max LLM parameters -> Recommend models (3B/7B/8B/13B/34B/70B) + quantization + deployment tools + bot...

by SMS (@smseow001)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for smseow001/hardware-llm-optimizer.

Prompt preview: Install & Setup
Install the skill "Hardware Llm Optimizer" (smseow001/hardware-llm-optimizer) from ClawHub.
Skill page: https://clawhub.ai/smseow001/hardware-llm-optimizer
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install hardware-llm-optimizer

ClawHub CLI


npx clawhub@latest install hardware-llm-optimizer
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (detect hardware; recommend models, quantization, and deployment) align with the included files: SKILL.md instructs the agent to run detect.py, and detect.py inspects CPU, RAM, and GPU and produces model/quantization recommendations. No unrelated credentials, binaries, or services are requested.
Instruction Scope
SKILL.md directs the agent to run the bundled detect.py. The script only reads local system info (psutil), checks /proc/version for WSL, and calls nvidia-smi for GPU details — all within the stated purpose. It does not access external endpoints, env vars, or other user files beyond /proc/version.
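The checks described above can be sketched roughly as follows. This is a minimal illustration, not the bundled detect.py itself; the graceful fallback when psutil is missing and the exact nvidia-smi query flags are my assumptions.

```python
import os
import subprocess

try:
    import psutil  # documented requirement; degrade gracefully if absent
except ImportError:
    psutil = None

def detect_hardware():
    """Illustrative local-only hardware probe (not the actual detect.py)."""
    info = {
        "cpu_threads": os.cpu_count(),
        "ram_gb": None,
        "wsl": False,
        "gpu": None,
    }
    if psutil is not None:
        info["ram_gb"] = round(psutil.virtual_memory().total / 1024 ** 3, 1)
    # WSL kernels identify themselves as "microsoft" in /proc/version.
    try:
        with open("/proc/version") as f:
            info["wsl"] = "microsoft" in f.read().lower()
    except OSError:
        pass  # not Linux, or /proc unavailable
    # nvidia-smi is optional; GPU detection is skipped when it is missing.
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total",
             "--format=csv,noheader"],
            capture_output=True, text=True, timeout=10,
        )
        if out.returncode == 0 and out.stdout.strip():
            name, vram = out.stdout.strip().splitlines()[0].split(", ", 1)
            info["gpu"] = {"name": name, "vram": vram}
    except (OSError, subprocess.TimeoutExpired):
        pass
    return info
```

Note that every probe reads local state only, which is what makes the "no external endpoints" claim above checkable by inspection.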
Install Mechanism
No install spec; this is instruction-only with a bundled Python script. The only required third-party package is psutil (documented in SKILL.md). The script does not download or execute code from remote URLs.
Credentials
No environment variables, credentials, or config paths are requested. The script uses local APIs and optional nvidia-smi; this is proportionate to hardware-detection functionality.
Persistence & Privilege
Skill is not always-enabled, does not attempt to modify other skills or agent-wide settings, and does not persist credentials. It merely prints local detection results.
Assessment
This skill appears to do what it claims: run locally, detect hardware, and print recommendations. Before installing/running:

  1. Review detect.py yourself (it's included) and confirm you're comfortable executing it.
  2. Be aware it will reveal local hardware details (CPU/GPU/RAM) to whatever component runs the skill; if your agent forwards outputs to external services, those details could be transmitted.
  3. Install psutil (pip install psutil) and ensure nvidia-smi is available if you want GPU detection.

If you want extra caution, run the script in a local sandbox or VM first.

Like a lobster shell, security has layers — review code before you run it.

Tags: ai, chinese, hardware, latest, llm, nvidia, optimization
83 downloads · 0 stars · 2 versions · Updated 1w ago
v1.1.0
MIT-0

Hardware LLM Optimizer

Detects the PC's hardware configuration and recommends which large language models it can run.

Features

  1. Auto-detect: CPU, RAM, GPU (NVIDIA/AMD), VRAM
  2. Calculate: Maximum runnable model size
  3. Quantization: FP16 / 8bit / 4bit / 2bit recommendation
  4. Model Suggestion: Llama 2/3, Qwen, Mistral, Phi, Gemma, Yi, etc.
  5. Bottleneck Analysis: System constraint diagnosis
  6. Deployment Tools: Ollama, Llama.cpp, vLLM, Chatbox
  7. Optimization Tips: Low VRAM solutions
  8. Minimum Config Table: 3B/7B/13B/34B/70B requirements
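The size/quantization calculation behind features 2 and 3 comes down to bytes per parameter. A back-of-the-envelope sketch; the 1.2 overhead factor for KV cache and runtime buffers is an assumption of mine, not taken from detect.py:

```python
# Approximate weight storage per parameter at each quantization level.
BYTES_PER_PARAM = {"FP16": 2.0, "Q8": 1.0, "Q4": 0.5, "Q2": 0.25}

def est_vram_gb(params_billion, quant, overhead=1.2):
    """Estimate VRAM in GB: parameters x bytes/param x runtime overhead."""
    return params_billion * BYTES_PER_PARAM[quant] * overhead

# A 7B model at Q4 works out to about 7 * 0.5 * 1.2 = 4.2 GB for weights
# plus overhead; real minimums are higher once context length is counted.
```

This also explains the quantization ladder: halving bytes per parameter roughly doubles the model size a fixed VRAM budget can hold.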

Usage

When the user asks about running LLMs on their computer, the trigger phrases are in Chinese (English glosses in parentheses):

检测电脑配置 (detect computer configuration)
大模型推荐 (recommend large models)
能跑什么模型 (what models can I run)
硬件检测 (hardware detection)
LLM优化 (LLM optimization)

Quick Run

python3 skills/hardware-llm-optimizer/detect.py

Requirements

  • Python 3.8+
  • psutil: pip install psutil
  • nvidia-smi (optional, for NVIDIA GPU detection)

Minimum Config Reference

| Model | Min VRAM | Rec VRAM | Quantization |
|-------|----------|----------|--------------|
| 3B    | 2 GB     | 4 GB     | Q4           |
| 7B    | 6 GB     | 8 GB     | Q4/Q8        |
| 13B   | 10 GB    | 16 GB    | Q4/Q8        |
| 34B   | 20 GB    | 32 GB    | Q4           |
| 70B   | 40 GB    | 80 GB    | Q4           |

Chinese Interface

This skill outputs in Chinese for user convenience.
