LLM Models

Pass. Audited by VirusTotal on May 12, 2026.

Overview

Type: OpenClaw Skill
Name: llm-models
Version: 0.1.5

The skill bundle is classified as suspicious primarily because of the `curl -fsSL https://cli.inference.sh | sh` command in `SKILL.md`. While this is presented as a legitimate installation method for the `inference.sh` CLI, executing arbitrary remote scripts via `curl | sh` is a significant supply chain risk that could lead to Remote Code Execution (RCE) if the `cli.inference.sh` domain were compromised. Although `SKILL.md` explains the script's benign function and mentions checksum verification, the method itself carries inherent risk. There is no evidence of direct malicious intent (e.g., data exfiltration, backdoors, or prompt injection against the agent for harmful objectives) in the provided files, and the `allowed-tools: Bash(infsh *)` directive limits the agent's shell access, but the installation method remains a critical weakness.

Findings (0)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

Installing the CLI runs software from an external service on the user's machine.

Why it was flagged

The setup path depends on downloading and executing a remote install script. This is disclosed and purpose-aligned for installing the CLI, but users should still verify the source and checksum.

Skill content

curl -fsSL https://cli.inference.sh | sh && infsh login

Recommendation

Prefer the documented manual install and checksum verification if possible, and install only if you trust inference.sh.
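The manual route recommended above can be sketched as a download-inspect-verify sequence instead of piping straight to `sh`. This is a hedged sketch: the skill bundle does not publish an actual checksum value, so the expected hash below is a placeholder you would obtain from inference.sh's own documentation.

```shell
# Sketch: verify a downloaded installer's SHA-256 before executing it.
# The expected hash must come from the vendor out of band; none ships
# with this skill, so treat the second argument as a placeholder.

verify_then_run() {
  file="$1"
  expected="$2"
  actual="$(sha256sum "$file" | awk '{print $1}')"
  if [ "$actual" = "$expected" ]; then
    sh "$file"   # execute only after the hash matches
  else
    echo "checksum mismatch for $file; refusing to run" >&2
    return 1
  fi
}

# Usage (network step shown for completeness):
#   curl -fsSL https://cli.inference.sh -o install.sh
#   verify_then_run install.sh "<sha256 from vendor docs>" && infsh login
```

This keeps the installer on disk for inspection and makes a compromised download fail closed rather than execute.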

What this means

An agent using this skill may run infsh commands under the user's logged-in account, including model calls that could use quota or incur cost.

Why it was flagged

The skill grants the agent access to the infsh CLI via a wildcard command pattern. This is central to the skill's purpose, but it is broader than only the listed OpenRouter model examples.

Skill content

allowed-tools: Bash(infsh *)

Recommendation

Review intended infsh commands, use account spending limits where available, and avoid leaving the CLI logged into accounts you do not want the agent to use.
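If only the documented model calls are needed, the wildcard could in principle be narrowed. The pattern below is hypothetical beyond what the skill itself shows; check OpenClaw's `allowed-tools` documentation for which command patterns it actually supports before relying on this.

```yaml
# Hypothetical narrowing of the skill's wildcard grant -- verify the
# supported pattern syntax against OpenClaw's own docs.
allowed-tools: Bash(infsh login), Bash(infsh app run openrouter/*)
```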

What this means

The agent may be able to use the logged-in inference.sh/OpenRouter account for model requests.

Why it was flagged

The skill requires a credentialed CLI login even though the registry metadata lists no primary credential. This is expected for provider-backed LLM access, but it still grants the agent account-level authority.

Skill content

infsh login

Recommendation

Log in only with an account you intend this skill to use, and review the account's billing, quota, and access controls.

What this means

Prompts, system prompts, and any included context may be sent to external model providers.

Why it was flagged

The examples show user prompts being sent through the inference.sh/OpenRouter provider path to external LLMs. That is the core feature, but it creates a third-party data boundary.

Skill content

infsh app run openrouter/claude-sonnet-45 --input '{"prompt": "Explain quantum computing"}'

Recommendation

Do not send secrets, private documents, or regulated data unless you have reviewed the provider's privacy, retention, and routing policies.
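One lightweight mitigation for the data-boundary concern above is a pre-flight check that refuses to forward prompts containing obvious credential shapes. This is a sketch, not a real guard: the patterns are illustrative and far from exhaustive, and the helper name is an assumption, not part of the skill.

```shell
# Sketch: reject prompts that match common credential patterns before they
# are handed to infsh. Patterns are illustrative only (AWS access key IDs,
# PEM private-key headers, "sk-"-style API tokens); a real deployment needs
# a proper secret scanner.

prompt_looks_safe() {
  # returns non-zero if the prompt matches a known secret shape
  ! printf '%s' "$1" | grep -Eq 'AKIA[0-9A-Z]{16}|-----BEGIN [A-Z ]*PRIVATE KEY-----|sk-[A-Za-z0-9]{20,}'
}

# Usage, wrapping the invocation shown in the skill:
#   p="Explain quantum computing"
#   prompt_looks_safe "$p" && \
#     infsh app run openrouter/claude-sonnet-45 --input "{\"prompt\": \"$p\"}"
```

A check like this only catches well-known token formats; it does not address private documents or regulated data, which still require reviewing the provider's retention and routing policies.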