Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Gpu Deploy
v0.1.0 · Deploy vLLM model services on GPU servers. Supports multi-server configuration, automatically checks GPU availability and port usage, and deploys popular open-source models with one click.
⭐ 0 · 418 · 2 current · 2 all-time
by 军舰 (@wang-junjian)
MIT-0
Download zip
Security Scan
OpenClaw
Benign
high confidence
Purpose & Capability
The name/description (deploy vLLM to GPU servers) matches the instructions: SSH into hosts, check GPUs/ports, and run vllm serve. Requiring ssh is appropriate. Minor inconsistency: the README and examples reference a local 'gpu-deploy' script to put on PATH, but no such script is bundled in this package (skill is instruction-only).
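As a concrete illustration, the pre-deployment checks amount to a couple of SSH invocations. The sketch below is not part of the skill; the host, user, and port values are placeholders you would take from your own servers.json.

```bash
# Minimal sketch of the remote checks described above (placeholder values).
HOST=gpu-server-1   # hypothetical hostname
USER=deploy         # hypothetical SSH user
PORT=8000           # hypothetical target port

# Confirm the GPUs are visible (and ideally idle) on the remote host.
ssh "$USER@$HOST" nvidia-smi

# Confirm nothing is already listening on the target port
# (lsof prints nothing and exits non-zero when the port is free).
ssh "$USER@$HOST" "lsof -i :$PORT || true"
```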
Instruction Scope
Runtime instructions are narrowly scoped to remote operations over SSH (nvidia-smi, lsof, tmux + conda + vllm serve). They do not attempt to read unrelated local files or exfiltrate data. Note that many commands assume specific paths (e.g., /data/miniconda3, /data/models/llm) and elevated access on remote hosts; users should verify and adapt these before running.
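For reference, the launch line referred to above generally has the following shape. This is a hedged reconstruction, not the skill's exact command: the conda path and model directory come from the paths mentioned in the scan, while the tmux session name, model name, host, and port are placeholders to verify and adapt.

```bash
# Hypothetical reconstruction of the tmux + conda + vllm serve launch.
# /data/miniconda3 and /data/models/llm are the paths the instructions assume;
# the env name, model name, tmux session name, and port are illustrative only.
ssh deploy@gpu-server-1 "tmux new-session -d -s vllm \
  'source /data/miniconda3/bin/activate vllm && \
   vllm serve /data/models/llm/Qwen2.5-7B-Instruct --host 0.0.0.0 --port 8000'"
```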
Install Mechanism
There is no install spec (instruction-only), which reduces install-time risk. However, documentation suggests copying a 'gpu-deploy' script into ~/.local/bin, yet no script is provided in the files — the skill will not install a helper binary for you.
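If you want the convenience of a gpu-deploy command anyway, a small wrapper such as the one below is one way to fill the gap. It is not bundled with this skill, and every default in it (port, conda path, session name) is an assumption to review before putting it on your PATH.

```bash
#!/usr/bin/env bash
# Hypothetical ~/.local/bin/gpu-deploy wrapper -- NOT part of this package.
# Usage: gpu-deploy <user@host> <model-path> [port]
set -euo pipefail

TARGET="$1"        # e.g. deploy@gpu-server-1
MODEL="$2"         # e.g. /data/models/llm/Qwen2.5-7B-Instruct
PORT="${3:-8000}"  # default port is an assumption

# Refuse to deploy if something already listens on the target port.
if ssh "$TARGET" "lsof -i :$PORT" >/dev/null 2>&1; then
  echo "Port $PORT is already in use on $TARGET" >&2
  exit 1
fi

# Start vLLM inside a detached tmux session on the remote host.
ssh "$TARGET" "tmux new-session -d -s vllm-$PORT \
  'source /data/miniconda3/bin/activate vllm && \
   vllm serve $MODEL --host 0.0.0.0 --port $PORT'"
echo "Started vLLM for $MODEL on $TARGET:$PORT (tmux session vllm-$PORT)"
```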
Credentials
No environment variables, secrets, or config paths are requested. SSH-based access is implied (user/host in servers.json) which is appropriate for remote deployment; no unrelated credentials are asked for.
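The scan does not document the servers.json schema, so the layout below is an assumption based on the user/host fields it mentions; check the skill's own README for the real field names.

```bash
# Hypothetical servers.json -- field names and values are placeholders.
cat > servers.json <<'EOF'
{
  "servers": [
    { "name": "gpu-server-1", "host": "10.0.0.11", "user": "deploy", "port": 8000 },
    { "name": "gpu-server-2", "host": "10.0.0.12", "user": "deploy", "port": 8001 }
  ]
}
EOF
```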
Persistence & Privilege
The skill declares always: false, and there is no install spec that writes to system-wide configs. It does not request persistent elevated privileges or attempt to modify other skills' configurations.
Assessment
This skill appears to be what it says: a set of instructions for deploying vLLM via SSH. Before using it, verify the following:
1. There is no provided 'gpu-deploy' script. Either create or obtain a trusted script, or run the shown SSH commands manually.
2. Confirm the remote paths (conda path, /data/models/llm) and that the SSH user account has the necessary permissions.
3. Inspect any commands you copy and paste, especially the tmux/conda/vllm serve line, to ensure the model path and port are correct.
4. Use SSH keys and least-privilege accounts; do not run unknown commands on hosts you don't control.
5. Verify model binaries and download sources (Hugging Face links) independently, and ensure vLLM and its dependencies on the host come from trusted sources.
If you need the convenience script, request a packaged implementation from the maintainer or review its content before adding it to your PATH.
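As a post-deployment sanity check, vLLM exposes an OpenAI-compatible HTTP API, so the serving endpoint can be probed directly; the host and port below are the same placeholders used in the earlier examples.

```bash
# List the models the remote vLLM server has loaded (OpenAI-compatible API).
curl -s http://gpu-server-1:8000/v1/models

# Confirm the model actually occupies GPU memory on the host.
ssh deploy@gpu-server-1 nvidia-smi
```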
ai · deployment · gpu · latest · model-serving · vllm
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
Runtime requirements
🚀 Clawdis
Bins: ssh
