{"skill":{"slug":"gpu-deploy","displayName":"GPU Deploy","summary":"Deploy vLLM model services on GPU servers. Supports multi-server configuration, automatic GPU status and port availability checks, and one-command deployment of popular open-source models.","tags":{"ai":"0.1.0","deployment":"0.1.0","gpu":"0.1.0","latest":"0.1.0","model-serving":"0.1.0","vllm":"0.1.0"},"stats":{"comments":0,"downloads":521,"installsAllTime":2,"installsCurrent":2,"stars":0,"versions":1},"createdAt":1772307331035,"updatedAt":1777525478831},"latestVersion":{"version":"0.1.0","createdAt":1772307331035,"changelog":"Initial release of the gpu-deploy skill.\n\n- Deploy vLLM model services on GPU servers with multi-server support.\n- Automated GPU status and port availability checks.\n- Preset configurations for popular open-source models.\n- One-command deployment and management (check, deploy, list, ps, stop).\n- Custom model configuration supported via JSON files.","license":null},"metadata":{"os":null,"systems":null},"owner":{"handle":"wang-junjian","userId":"s1740etxys287wym5mzgk9ts9x83sgph","displayName":"军舰","image":"https://avatars.githubusercontent.com/u/38390774?v=4"},"moderation":null}