Gpu Deploy
Pass. Audited by VirusTotal on May 11, 2026.
Overview
Type: OpenClaw Skill
Name: gpu-deploy
Version: 0.1.0

The skill is classified as suspicious because its core functionality involves remote command execution via SSH, as documented in `SKILL.md`. The explicit use of `ssh` to deploy services on remote GPU servers matches the skill's stated purpose, but the `gpu-deploy` script itself (which would construct and execute these commands from user input) is not included in the package. This creates a significant risk of shell injection if user inputs (e.g., model names, server details, ports) are interpolated into the complex `ssh` commands shown in the '手动使用(无脚本)' ("Manual usage, no script") section of `SKILL.md` without rigorous sanitization. The provided files show no evidence of intentionally malicious behavior such as data exfiltration or malicious prompt injection, but the high-risk nature of remote execution, combined with the potential for vulnerabilities in the missing implementation, warrants a 'suspicious' classification.
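The injection risk described above can be sketched as follows. This is an illustrative assumption about how a missing script might mishandle input, not code from the skill; the variable names and the `printf %q` mitigation are ours:

```shell
#!/usr/bin/env bash
# Illustrative sketch (not from the skill): quoting user input before it is
# interpolated into a remote ssh command. A raw value like this would let the
# remote shell execute the injected command after the semicolon.
MODEL='model; rm -rf ~'

# printf %q escapes the value so the remote shell sees a single argument
# instead of re-parsing the semicolon as a command separator.
SAFE_MODEL=$(printf '%q' "$MODEL")
REMOTE_CMD="vllm serve $SAFE_MODEL --port 8111"
echo "$REMOTE_CMD"
```

Note that when a command passes through two shells (the local `ssh user@host "cmd"` layer plus the nested tmux quoting shown in the SKILL.md example), each layer needs its own round of quoting.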
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Impact: If used on the wrong host or port, the agent could start services on an unintended GPU server.
Detail: The skill documents remote shell execution over SSH to start a vLLM service, which is expected for deployment but can change remote server state.
Evidence: ssh <user>@<host> "tmux new-session -d -s vllm '... vllm serve ... --port 8111 ...'"
Recommendation: Confirm the target server, model, and port before running deploy or stop commands.
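One way to follow this recommendation is a small pre-flight check before any ssh command is built. The function below is a hypothetical sketch, not part of the skill; the validation rules and names are illustrative:

```shell
#!/usr/bin/env bash
# Hypothetical pre-flight validation (not from the skill): reject targets
# that contain shell metacharacters or implausible ports before deploying.
confirm_target() {
    local host=$1 model=$2 port=$3
    # Host: a plain hostname or IP, no shell metacharacters.
    [[ "$host" =~ ^[A-Za-z0-9._-]+$ ]] || return 1
    # Port: numeric and unprivileged (e.g. 8111 from the SKILL.md example).
    [[ "$port" =~ ^[0-9]+$ ]] || return 1
    [ "$port" -ge 1024 ] && [ "$port" -le 65535 ] || return 1
    # Model: non-empty path or name.
    [ -n "$model" ]
}

confirm_target "gpu-server" "DeepSeek-R1-Distill-Qwen-32B-AWQ/" 8111 && echo "target ok"
```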
Impact: Commands run with whatever permissions the configured SSH account has on the remote server.
Detail: The skill relies on the user's SSH access to remote GPU servers; this is purpose-aligned but uses delegated account privileges.
Evidence: ssh <user>@<host> nvidia-smi
Recommendation: Use a least-privileged SSH account and configure only servers you intend the agent to manage.
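One way to enforce least privilege is an OpenSSH forced command on a dedicated deploy key. The entry below is a sketch of what that could look like in `~/.ssh/authorized_keys` on the GPU server; the wrapper path `/usr/local/bin/vllm-ctl` is hypothetical (the skill ships no such wrapper) and the key material is elided:

```
command="/usr/local/bin/vllm-ctl",no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA... deploy@workstation
```

With a forced command, any ssh invocation using this key runs only the wrapper, regardless of what command the agent passes, so the blast radius is limited to the operations the wrapper exposes.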
Impact: If you obtain a gpu-deploy script from somewhere else, that separate code is outside this review and could behave differently from the documentation.
Detail: The README references a gpu-deploy executable, but the provided package contains only README.md and SKILL.md, with no install spec or script to review.
Evidence:
cp gpu-deploy ~/.local/bin/
chmod +x ~/.local/bin/gpu-deploy
Recommendation: Review or source the gpu-deploy script from a trusted repository before placing it on PATH.
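A simple way to harden the install step is to verify a published checksum before putting the script on PATH. The helper below is a hypothetical sketch, not part of the skill; the checksum would come from the trusted repository:

```shell
#!/usr/bin/env bash
# Hypothetical helper (not from the skill): install the script only if it
# matches a sha256 checksum published by the trusted source.
verify_and_install() {
    local expected=$1 src=$2 destdir=$3
    echo "$expected  $src" | sha256sum --check --quiet - || return 1
    install -m 0755 "$src" "$destdir/"
}

# Usage sketch; the actual checksum is whatever the trusted repo publishes:
# verify_and_install "<published sha256>" gpu-deploy ~/.local/bin
```

Using `install -m 0755` replaces the separate `cp` and `chmod` steps with one atomic permission-setting copy.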
Impact: Applications calling the service may believe they are using GPT-4o-mini when they are actually using a different local model.
Detail: The example serves a DeepSeek model under the name gpt-4o-mini, which may be intentional for API compatibility but could mislead downstream users or tools about the actual model.
Evidence: vllm serve ... DeepSeek-R1-Distill-Qwen-32B-AWQ/ ... --served-model-name gpt-4o-mini
Recommendation: Set served_model_name to an accurate or clearly documented alias for the deployed model.
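A corrected invocation might look like the sketch below. Aside from the accurate `--served-model-name`, the flags mirror the documented example; the exact alias is an illustrative choice, not prescribed by the skill:

```
vllm serve DeepSeek-R1-Distill-Qwen-32B-AWQ/ \
    --port 8111 \
    --served-model-name deepseek-r1-distill-qwen-32b-awq
```

If OpenAI-compatible clients require a specific model string, document the alias-to-model mapping alongside the deployment so downstream users know what is actually being served.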
Impact: A deployed service may continue consuming GPU resources after the initiating task ends.
Detail: The skill intentionally recommends keeping the model service running in the background; this persistence is disclosed and expected for model serving.
Evidence: 后台运行 - 建议使用 tmux/screen 保持服务运行 ("Run in the background - tmux/screen is recommended to keep the service running")
Recommendation: Monitor running services and use the documented stop command or tmux controls when the service is no longer needed.
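The cleanup path implied by the documentation can be sketched as follows, using the `vllm` tmux session name from the deploy example; the exact session name on a given server is whatever the deploy command created:

```
# Stop the service by killing its tmux session on the remote host:
ssh <user>@<host> "tmux kill-session -t vllm"

# Or first check whether the session is still alive:
ssh <user>@<host> "tmux has-session -t vllm 2>/dev/null && echo 'vllm session still running'"
```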
