Install

openclaw skills install local-model-quantization-router

Recommends local LLM model routes and quantization levels using hardware, privacy, task complexity, context size, and budget constraints. Use for Qwen/Ollama/...

Use this skill to choose between local quantized models and cloud fallback before running OpenClaw workloads.
Usage

Run scripts/local_model_quantization_router.py with CLI flags or JSON input.

Flags

--task TEXT: Task summary.
--complexity {simple,standard,complex,critical}
--privacy {low,normal,high,regulated}
--vram-gb FLOAT: GPU memory available.
--ram-gb FLOAT: System memory available.
--context-tokens INT: Required context window.
--hardware PATH: Optional JSON with vram_gb, ram_gb, cpu_only (see the example below).
--output PATH: Optional JSON output path.

Output fields

route: local-only, local-first, hybrid, or cloud-required.
model: Recommended model family.
quantization: Suggested quant level.
endpoint: Suggested local endpoint type.
fallback: Safer fallback when quality or context is insufficient.
reasons: Evidence for the decision.

No model is downloaded and no config file is changed.
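Example

A minimal sketch of one way to invoke the router, assuming the script is run directly with Python. The file name hardware.json, the hardware figures, and the output values shown are illustrative assumptions, not output captured from the script.

hardware.json (hypothetical contents, using the documented keys):

    {
      "vram_gb": 12,
      "ram_gb": 32,
      "cpu_only": false
    }

Invocation using the flags listed above:

    python scripts/local_model_quantization_router.py \
      --task "Summarize internal incident reports" \
      --complexity standard \
      --privacy high \
      --hardware hardware.json \
      --context-tokens 16000 \
      --output route.json

route.json would then contain the documented output fields, roughly in this shape (every value below is a placeholder, not a real recommendation):

    {
      "route": "local-only",
      "model": "<model family>",
      "quantization": "<quant level>",
      "endpoint": "<local endpoint type>",
      "fallback": "<safer fallback>",
      "reasons": ["<evidence for the decision>"]
    }

With --privacy high and adequate VRAM, a local-only route is the plausible outcome; the skill's own logic decides the actual values.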