{"skill":{"slug":"vllm","displayName":"vLLM","summary":"vLLM inference engine assistant, expert in high-performance LLM deployment, PagedAttention, and OpenAI-compatible APIs","tags":{"latest":"1.0.0"},"stats":{"comments":0,"downloads":323,"installsAllTime":1,"installsCurrent":1,"stars":0,"versions":1},"createdAt":1774159034664,"updatedAt":1774159314357},"latestVersion":{"version":"1.0.0","createdAt":1774159034664,"changelog":"vLLM skill 1.0.0 initial release:\n\n- Provides a high-performance LLM inference engine assistant focused on large-model deployment and optimization\n- Supports core features such as PagedAttention, Continuous Batching, Prefix Caching, and Speculative Decoding\n- OpenAI API compatible, with support for mainstream models and multiple quantization methods\n- Includes detailed guidance on installation, deployment, key parameters, and common issues\n- Compared against Ollama, TGI, and llama.cpp; positioned for production-grade, high-throughput server-side scenarios","license":"MIT-0"},"metadata":{"os":null,"systems":null},"owner":{"handle":"zhangifonly","userId":"publishers:zhangifonly","displayName":"zhangifonly","image":"https://avatars.githubusercontent.com/u/121886591?v=4"},"moderation":null}