Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Model Migrate FlagOS
v1.0.0 · Migrate a model from the latest vLLM upstream repository into the vllm-plugin-FL project (pinned at vLLM v0.13.0). Use this skill whenever someone wants to a...
by Flagos@wbavon
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious · medium confidence
Purpose & Capability
Name/description align with the provided artifacts: the SKILL.md plus scripts and reference docs implement a copy-then-patch migration pipeline (clone upstream vLLM, copy model files, apply compatibility patches, register in plugin, run validation/benchmark/serve/E2E). The included scripts and docs are coherent with migrating models into a vllm-plugin pinned to v0.13.0.
Instruction Scope
The SKILL.md instructs the agent to perform many privileged or system-wide actions: clone upstream repos, write/copy/patch plugin source files, modify vllm_plugin entrypoints, run pytest, start/stop servers, run benchmarks, and manage remote GT servers via SSH. Operational rules explicitly tell the agent to 'NEVER ask whether to continue', to 'ALWAYS' create a TaskList and auto-resume, and to 'forcefully release GPUs' by killing processes. These behaviors go beyond a narrowly-scoped helper and grant the agent broad discretion to modify local and remote systems without further user confirmation.
Install Mechanism
There is no install spec (the skill is instruction-only), and all code files are included in the skill bundle. No external downloads or arbitrary URL extract/install steps are present in the metadata. Risk from the install mechanism itself is low, though the included scripts, when executed, will change local files and run commands.
Credentials
The skill declares no required environment variables or credentials, but the instructions assume access to SSH keys (e.g. running ssh-copy-id and using ~/.ssh/id_ed25519), read/write/execute access to the plugin directory, read access to /usr/local/lib (the installed vLLM), and access to /models and GPUs. They also instruct setting environment variables when invoking vllm (VLLM_USE_DEEP_GEMM, VLLM_FL_PREFER_ENABLED). These permission and credential needs are substantial, are not declared in requires.env or config paths, and are disproportionate to a simple skill invocation; they should be made explicit.
Persistence & Privilege
always:false (good) but the SKILL.md's operational rules demand creating TaskList entries for all 13 steps, auto-resuming work after interruptions, and 'NEVER ask whether to continue.' Combined with normal autonomous invocation this yields a high risk of the agent continuing to make file and system changes (including killing GPU processes and running remote SSH commands) without re-confirmation. The skill also encourages 'work-until-done' behavior that could cause prolonged or destructive activity.
Scan Findings in Context
[base64-block] expected: The E2E test prompts include small inline base64-encoded images for multimodal tests, which explains the base64-block detection. This appears legitimate for multimodal correctness checks, but embedded base64 content is why the scanner flagged it.
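One quick way to confirm that a flagged base64 block really is image data is to decode it and check for a known file signature. A minimal sketch, assuming a PNG payload; the sample string below is just the 8-byte PNG signature base64-encoded, not content taken from the skill's actual test prompts:

```shell
# Decode an inline base64 string and inspect its leading bytes.
# The sample is the PNG signature itself, base64-encoded; when auditing,
# substitute the actual embedded string from the E2E test prompts.
b64='iVBORw0KGgo='
printf '%s' "$b64" | base64 -d > /tmp/decoded.bin
# Genuine PNG data begins with the 8 bytes: 89 50 4e 47 0d 0a 1a 0a
od -An -tx1 /tmp/decoded.bin
```

Anything that decodes to script or executable content instead of an image header would be a red flag worth escalating.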
What to consider before installing
- Functionally coherent: The skill appears to do what it claims (migrating vLLM models and running E2E verification). The included scripts implement the pipeline end-to-end.
- High-privilege actions: The instructions expect to read/write plugin source, run tests, start/stop local servers, manage a remote GT server over SSH, and forcibly kill GPU-using processes (nvidia-smi | xargs kill -9). These can affect other users/processes and system state.
- Automation without confirmation: The skill's operational rules explicitly tell the agent to auto-resume and to 'NEVER ask whether to continue', and to 'work-until-done'. If you allow the agent to run autonomously with this skill enabled, it may continue making changes without further prompts.
- Missing declared credentials: The skill does not declare required env variables or credentials, yet it assumes SSH key access and permission to read /usr/local/lib, /models, and modify the plugin directory. Expect to provide or confirm SSH access and to run in an environment where these assumptions are acceptable.
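Because the scripts modify plugin source files in place, snapshotting the repository before any run is cheap insurance. A minimal sketch, using a throwaway directory as a stand-in for the real vllm-plugin-FL checkout:

```shell
# Snapshot a repo before an automated migration touches it.
# "$REPO" is a demo stand-in; point it at the real vllm-plugin-FL checkout.
REPO=$(mktemp -d)/vllm-plugin-FL
git init -q "$REPO"
cd "$REPO"
git -c user.email=you@example.com -c user.name=you commit -q --allow-empty -m "baseline"
git branch pre-migration-backup          # cheap in-repo snapshot to diff against later
git clone -q . ../vllm-plugin-FL-backup  # full copy outside the working tree
```

After a run, `git diff pre-migration-backup` shows exactly what the skill changed.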
Recommendations:
1. Review the code before running: inspect scripts e2e_remote_serve.sh, validate_migration.py, serve.sh, run-request.sh, and any scripts that execute shell commands (look for any network endpoints or unexpected commands).
2. Run in an isolated environment: execute the migration in a disposable VM or container where you control SSH keys, GPU processes, and filesystem snapshots. Back up your vllm-plugin-FL repo first (git branch or clone).
3. Require manual confirmation: if you let an agent use this skill, configure it NOT to auto-resume unattended or to require user approval before steps that modify files, kill processes, or SSH to remote hosts.
4. Validate SSH usage: do not blindly run ssh-copy-id or any script that writes to ~/.ssh/authorized_keys without verifying the target host and keys.
5. If you need more assurance: ask the skill author for provenance (source repo URL, maintainer identity) and for an explicit list of all commands the skill will run, so you can audit them.
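For the SSH check in particular, whether a host's key is already pinned can be sketched with ssh-keygen. The hostname below is a placeholder, and the key is a throwaway generated only for this demo:

```shell
# Check whether a host's key is already pinned before trusting scripted
# SSH setup. "gt-server.example" is a placeholder hostname; the key is a
# throwaway generated only for illustration.
DEMO=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$DEMO/id"
awk -v h=gt-server.example '{print h, $1, $2}' "$DEMO/id.pub" > "$DEMO/known_hosts"

if ssh-keygen -f "$DEMO/known_hosts" -F gt-server.example > /dev/null; then
  # Compare this fingerprint against one obtained out-of-band from the operator
  ssh-keygen -f "$DEMO/known_hosts" -lF gt-server.example
else
  echo "not pinned: verify the host key out-of-band before running ssh-copy-id"
fi
```

In real use, drop the `-f` override so ~/.ssh/known_hosts is consulted, and never append a key you have not verified against an out-of-band fingerprint.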
Given these factors, treat the skill as useful but potentially risky: proceed only after inspection and with controls in place (isolated environment, backups, manual confirmations).
latest · vk972tfe72kv3dyvyg0v72x2x2h83esac
