vLLM

vLLM inference engine assistant, expert in high-performance LLM deployment, PagedAttention, and the OpenAI-compatible API

MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description (vLLM inference engine, OpenAI-compatible API, PagedAttention, deployment tips) match the SKILL.md content: installation commands, docker usage, 'vllm serve' examples, model/parameter guidance and troubleshooting. Nothing requested or described appears unrelated to deploying or running vLLM.
Instruction Scope
The SKILL.md instructs the user to run pip install and docker run commands and to mount ~/.cache/huggingface into the container, expose port 8000, and enable GPU access. Those actions are expected for deploying vLLM, but they do grant the runtime container access to local model cache data and expose a server port. The instructions do not attempt to read arbitrary system files or request unrelated secrets, but they do assume access to GPU/CUDA and local caches.
Install Mechanism
This is instruction-only (no install spec or code files). However, the commands instruct users to pip install 'vllm' and to pull/run the 'vllm/vllm-openai:latest' Docker image. Those operations will download and execute third-party code (PyPI package and Docker image) — standard for this use case but worth verifying the sources before running in production.
Credentials
The skill declares no required environment variables or credentials (proportional). The only potential exposure is mounting ~/.cache/huggingface into the container (recommended in the doc) which may expose cached model files and any credentials stored there; also private models may require tokens kept elsewhere. No unexplained credential requests exist.
Persistence & Privilege
always is false and there is no install-time code or persistent modifications described. The skill does not request persistent agent privileges or modify other skills' configuration.
Assessment
This skill is a coherent deployment guide for vLLM, but follow safe practices before running the recommended commands:

  • Verify the pip package and Docker image sources (vllm on PyPI, vllm/vllm-openai on Docker Hub) and prefer pinned versions in production rather than 'latest'.
  • Mounting ~/.cache/huggingface into a container gives that container access to your local cached models and artifacts; avoid mounting sensitive directories or tokens.
  • Exposing port 8000 makes the model server reachable from the host/network; restrict binding or use firewalls/auth if you don't want public access.
  • The commands download and execute third-party code (pip/docker) and require GPU/CUDA compatibility; run them in an isolated environment if you want to limit blast radius.
  • If you plan to load private models, confirm where your model tokens are stored and avoid unintentionally exposing them to containers.

These precautions will reduce risk while using this otherwise coherent skill.


Current version: v1.0.0 (latest: vk978ty7rd9ech79t7qtkpm4vm183dj4f)


SKILL.md

vLLM High-Performance Inference Engine Assistant

You are an expert in vLLM deployment and optimization, helping users deploy and run large language models efficiently.

Core Strengths

| Feature | Description |
|---|---|
| PagedAttention | KV cache management modeled on OS virtual memory; improves GPU memory utilization 2-4x |
| Continuous Batching | dynamically merges incoming requests; throughput far exceeds static batching |
| High throughput | 14-24x faster inference than HuggingFace Transformers |
| Prefix Caching | automatically caches shared prefixes; clear speedups for multi-turn chat and shared system prompts |
| Speculative Decoding | uses a small draft model to accelerate generation by a large model |
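To see why KV-cache management dominates serving memory, here is a rough per-token estimate (a sketch using published Llama-3.1-8B config values: 32 layers, 8 KV heads via GQA, head dimension 128, fp16):

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len, dtype_bytes=2):
    """KV cache size for one sequence: a K and a V tensor per layer, fp16 by default."""
    return 2 * num_layers * num_kv_heads * head_dim * dtype_bytes * seq_len

# Llama-3.1-8B (GQA): 32 layers, 8 KV heads, head_dim 128
print(kv_cache_bytes(32, 8, 128, 1) // 1024, "KiB per token")         # 128 KiB per token
print(kv_cache_bytes(32, 8, 128, 8192) // 2**30, "GiB per sequence")  # 1 GiB at 8K context
```

Without paging, each sequence would need that worst-case amount reserved contiguously up front; PagedAttention instead allocates the cache in small blocks on demand, which is where the 2-4x utilization gain comes from.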

Installation & Deployment

pip install vllm  # requires CUDA 12.1+

# Docker deployment (recommended for production)
docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    -p 8000:8000 vllm/vllm-openai:latest \
    --model meta-llama/Llama-3.1-8B-Instruct

OpenAI-Compatible API Server

# Basic startup
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000

# Recommended production configuration
vllm serve Qwen/Qwen2.5-72B-Instruct \
    --tensor-parallel-size 4 \
    --max-model-len 32768 \
    --gpu-memory-utilization 0.9 \
    --enable-prefix-caching \
    --max-num-seqs 256 --port 8000
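Once the server is up, any OpenAI-style client can talk to it. A minimal request for the /v1/chat/completions endpoint, built with only the standard library (a sketch; the "model" field must match the name passed to vllm serve, and the network call stays commented out until a server is actually running):

```python
import json
import urllib.request

# Minimal OpenAI-compatible chat request against a local vLLM server
payload = {
    "model": "meta-llama/Llama-3.1-8B-Instruct",  # must match `vllm serve <model>`
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain PagedAttention in one sentence."},
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# With a running server, uncomment to send the request:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```

The same payload works via curl or any OpenAI SDK by pointing the base URL at http://localhost:8000/v1.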

Supported Mainstream Models

| Model family | Representative model | Parameters |
|---|---|---|
| Llama 3.1 | meta-llama/Llama-3.1-8B-Instruct | 8B/70B/405B |
| Qwen 2.5 | Qwen/Qwen2.5-7B-Instruct | 0.5B-72B |
| DeepSeek V3 | deepseek-ai/DeepSeek-V3 | 671B (MoE) |
| Mistral | mistralai/Mistral-7B-Instruct-v0.3 | 7B |
| ChatGLM | THUDM/glm-4-9b-chat | 9B |
| Gemma 2 | google/gemma-2-27b-it | 2B/9B/27B |

Key Parameters

| Parameter | Default | Description |
|---|---|---|
| --tensor-parallel-size | 1 | number of GPUs for tensor parallelism; must be set for multi-GPU |
| --max-model-len | model default | maximum context length; lowering it saves GPU memory |
| --gpu-memory-utilization | 0.9 | fraction of GPU memory to use, 0.0-1.0 |
| --max-num-seqs | 256 | maximum number of concurrent sequences |
| --dtype | auto | data type: auto/half/float16/bfloat16 |
| --quantization | None | quantization method: awq/gptq/fp8/squeezellm |
| --enable-prefix-caching | False | enable prefix caching; recommended for multi-turn chat |

Quantization Support

| Method | Accuracy loss | Memory savings | Notes |
|---|---|---|---|
| FP16/BF16 | baseline | — | default precision |
| AWQ | minimal | ~50% | recommended; 4-bit, requires a pre-quantized model |
| GPTQ | | ~50% | classic approach; many community models available |
| FP8 | minimal | ~50% | native on H100/L40S; recommended on new hardware |

vllm serve TheBloke/Llama-2-70B-Chat-AWQ --quantization awq
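Raw weight footprints give a feel for these savings (a back-of-envelope sketch; real checkpoints differ somewhat because quantizers keep some layers in higher precision, and total serving memory also includes the KV cache):

```python
def weight_gib(num_params, bits_per_param):
    """Approximate weight-only memory for a dense model, in GiB."""
    return num_params * bits_per_param / 8 / 2**30

params = 70e9  # a 70B-class model
print(round(weight_gib(params, 16), 1))  # 130.4 GiB at fp16
print(round(weight_gib(params, 8), 1))   # 65.2 GiB at fp8
print(round(weight_gib(params, 4), 1))   # 32.6 GiB at 4-bit (AWQ/GPTQ)
```

This is why a 70B model that needs multiple GPUs at fp16 can fit on far fewer cards once quantized.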

Comparison with Similar Tools

| Feature | vLLM | Ollama | TGI | llama.cpp |
|---|---|---|---|---|
| Positioning | production-grade high-throughput inference | convenient local use | HuggingFace official | CPU/edge inference |
| Throughput | very high | medium | | low-medium |
| Multi-GPU | native TP/PP | no | yes | limited |
| Quantization | AWQ/GPTQ/FP8 | GGUF | AWQ/GPTQ/BnB | GGUF-focused |
| Best for | large-scale server deployment | personal local use | HF ecosystem integration | low-resource devices |

Troubleshooting

  • OOM errors: lower --max-model-len or --gpu-memory-utilization
  • Slow model loading: use --load-format safetensors and make sure the model is cached locally
  • Uneven multi-GPU load: check CUDA_VISIBLE_DEVICES and the NVLink topology
  • Garbled output: confirm the model and tokenizer versions match; check the chat template
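The OOM case can often be caught on paper before launching. A deliberately pessimistic fit check (a sketch with hypothetical numbers for a single 80 GiB GPU; vLLM's paged allocation means the worst case is rarely reached by every sequence at once):

```python
def fits(gpu_gib, utilization, weight_gib, kv_gib_per_seq, max_num_seqs):
    """Pessimistic check: weights plus worst-case KV cache vs. the usable budget."""
    budget = gpu_gib * utilization  # what --gpu-memory-utilization lets vLLM claim
    return weight_gib + kv_gib_per_seq * max_num_seqs <= budget

# Hypothetical: 80 GiB GPU, ~15 GiB of fp16 weights for an 8B model,
# ~1 GiB of KV cache per sequence at an 8K context
print(fits(80, 0.9, 15, 1.0, 32))    # True
print(fits(80, 0.9, 15, 1.0, 128))   # False -> lower --max-model-len or --max-num-seqs
```

When the check fails, the two knobs from the bullet above are exactly the ones to turn: a smaller --max-model-len shrinks kv_gib_per_seq, and a smaller --max-num-seqs caps the multiplier.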
