Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

ModelReady

Start using a local or Hugging Face model instantly, directly from chat.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 1.1k · 2 current installs · 2 all-time installs
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
high confidence
Purpose & Capability
The script implements exactly the advertised functionality (starting a vLLM/OpenAI-style server and proxying chat requests). However, the declared requirements in the registry metadata are incomplete: the runtime needs python3 and the vllm Python package, but neither is listed in the required binaries or install specs. SKILL.md metadata also lists an env var 'URL' that is never used as a required external credential. These mismatches mean the skill's stated requirements do not match what it actually needs to work.
Instruction Scope
Instructions and the script read/write files under $HOME/.model2skill (defaults.env, PID/log files), which is reasonable. However, the script binds by default to HOST=0.0.0.0 (DEFAULT_HOST), exposing the OpenAI-compatible endpoint to the network/LAN unless changed; SKILL.md does not warn about this. The skill starts an unauthenticated HTTP API that, if reachable, could be invoked by other machines on the network. The chat path uses local HTTP requests only (no remote exfiltration), but broadly exposing a model endpoint is a security/privacy concern.
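The 0.0.0.0 vs 127.0.0.1 distinction can be made concrete with a tiny helper; this is illustrative only (bind_scope is not part of the skill's script):

```shell
# Classify a HOST value by how widely it exposes the server (sketch).
bind_scope() {
  case "$1" in
    127.*|localhost|::1) echo "localhost-only" ;;
    0.0.0.0|::)          echo "all interfaces (LAN-exposed)" ;;
    *)                   echo "specific interface: $1" ;;
  esac
}

bind_scope 0.0.0.0    # the skill's DEFAULT_HOST: reachable from the LAN
bind_scope 127.0.0.1  # safer default for local-only use
```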
Install Mechanism
There is no install spec and no code is downloaded: the skill is instructions plus a script. That is low-risk from a supply-chain perspective but problematic operationally, because the script expects python3 and the vllm package to be available. The skill neither provides installation steps nor checks for vllm; a user may run it and see failures, or run an untrusted vllm binary if one happens to be present.
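The missing dependency check could be done in a few lines of shell; this is a hypothetical pre-flight helper (check_bins is not in the actual script), covering the declared bins plus the undeclared python3/vllm needs noted above:

```shell
# Hypothetical pre-flight check: verify each named binary is on PATH.
check_bins() {
  local missing=0 bin
  for bin in "$@"; do
    if ! command -v "$bin" >/dev/null 2>&1; then
      echo "missing binary: $bin" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Declared bins plus the undeclared python3 requirement:
check_bins bash curl python3 || echo "install the missing binaries first"
# vllm is a Python package, not a binary, so probe it via the interpreter:
python3 -c 'import vllm' 2>/dev/null || echo "vllm not importable; install and review it first"
```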
Credentials
The skill does not request external credentials and only writes a small defaults file under ~/.model2skill. It does use HOME and network information (hostname/IP) to resolve bind targets. The SKILL.md metadata lists a 'URL' env entry that is inconsistent with the rest of the package; otherwise there are no unexplained SECRET/TOKEN env requirements.
Persistence & Privilege
The skill persists state to $HOME/.model2skill (defaults, logs, PID files), which is expected for a local server manager. It does not request always:true, does not modify other skills, and does not request elevated privileges.
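Since all persisted state lives under one directory, inspecting and removing it is straightforward. A sketch based on the paths named in this review:

```shell
# Inspect what the skill has persisted (defaults, PID and log files):
ls -la "$HOME/.model2skill" 2>/dev/null || echo "no state directory yet"
cat "$HOME/.model2skill/defaults.env" 2>/dev/null

# Full cleanup on uninstall (after stopping any running servers):
rm -rf "$HOME/.model2skill"
```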
What to consider before installing
This skill appears to implement a local vLLM/OpenAI-compatible server manager. Inspect it and take the following precautions before installing or using it:

  • Dependencies: the script requires python3 and the vllm Python package (plus vllm's GPU/runtime dependencies). The registry metadata does not declare python3 or vllm, and the skill provides no install steps. Only proceed if you install and trust vllm yourself.
  • Network exposure: the default host is 0.0.0.0, which binds to all interfaces and makes the endpoint reachable from other machines on the LAN. Set the host to 127.0.0.1 (use /modelready set_ip ip=127.0.0.1) or run behind a firewall if you want localhost-only access.
  • Authentication: the server started is OpenAI-compatible but unauthenticated by this wrapper. Do not serve models with sensitive data, or on a public network, without adding access controls.
  • Files written: the skill writes ~/.model2skill/defaults.env plus PID and log files. Review these files for sensitive content; they persist after use, so remove them on uninstall.
  • Input handling: the script accepts EXTRA args and passes them to vllm; be careful when using extra=... to avoid unintended behavior.

If the author provided (1) an explicit dependency list (python3, vllm, required versions), (2) an install or dependency-check step, and (3) a safer default bind (127.0.0.1) or an option to require authentication, this assessment could be upgraded to 'benign'. As-is, the metadata mismatches and default network exposure make the skill 'suspicious'.

Like a lobster shell, security has layers — review code before you run it.

Current version v1.0.0
Download zip
latest vk979d5e50xkerk30475d76ead580qggd

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

Bins: bash, curl

SKILL.md

ModelReady

ModelReady lets you start using a local or Hugging Face model immediately, without leaving clawdbot.

It turns a model into a running, OpenAI-compatible endpoint and allows you to chat with it directly from a conversation.

When to use

Use this skill when you want to:

  • Quickly start using a local or Hugging Face model
  • Chat with a locally running model
  • Test or interact with a model directly from chat

Commands

Start a model server

/modelready start repo=<path-or-hf-repo> port=<port> [tp=<n>] [dtype=<dtype>]

Examples:

/modelready start repo=Qwen/Qwen2.5-7B-Instruct port=19001
/modelready start repo=/home/user/models/Qwen-2.5 port=8010 tp=4 dtype=bfloat16

Chat with a running model

/modelready chat port=<port> text="<message>"

Example:

/modelready chat port=8010 text="hello"

Check status or stop the server

/modelready status port=<port>
/modelready stop port=<port>

Set default host or port

/modelready set_ip   ip=<host>
/modelready set_port port=<port>

Notes

  • The model is served locally using vLLM.
  • The exposed endpoint follows the OpenAI API format.
  • The server must be started before sending chat requests.
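Because the endpoint follows the OpenAI API format, a chat request can also be made directly over HTTP. A sketch of the raw equivalent of `/modelready chat port=8010 text="hello"` (chat_payload is a hypothetical helper, and the wrapper's exact JSON may differ; assumes a server already started on port 8010):

```shell
# Build an OpenAI-style chat payload (sketch; no escaping of quotes in inputs).
chat_payload() {
  printf '{"model":"%s","messages":[{"role":"user","content":"%s"}]}' "$1" "$2"
}

# Rough raw-HTTP equivalent of: /modelready chat port=8010 text="hello"
chat_payload "Qwen/Qwen2.5-7B-Instruct" "hello" |
  curl -s http://127.0.0.1:8010/v1/chat/completions \
    -H 'Content-Type: application/json' -d @- \
  || echo "server not reachable on port 8010"
```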

Files

2 total
