LLM Models

Pass. Audited by ClawScan on May 1, 2026.

Overview

This looks like a straightforward LLM-provider skill, but users should review the external CLI install, login, and third-party prompt sharing before use.

Install this only if you trust inference.sh/OpenRouter, verify the CLI installation path if possible, and avoid sending sensitive prompts or files unless you are comfortable with the external provider's data handling and billing model.

Findings (4)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Remote install script execution

What this means

Installing the CLI runs software from an external service on the user's machine.

Why it was flagged

The setup path depends on downloading and executing a remote install script. This is disclosed and purpose-aligned for installing the CLI, but users should still verify the source and checksum.

Skill content
curl -fsSL https://cli.inference.sh | sh && infsh login
Recommendation

Prefer the documented manual install and checksum verification if possible, and install only if you trust inference.sh.
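A more cautious variant of that flow might look like the following sketch. It downloads the script to disk before running it; the checksum step assumes inference.sh publishes one for the installer, which this review has not confirmed.

# Fetch the installer to a file instead of piping it straight to sh
curl -fsSL https://cli.inference.sh -o infsh-install.sh
# Read the script before executing it
less infsh-install.sh
# Compare against a published checksum, if one exists (assumption)
sha256sum infsh-install.sh
# Run the reviewed installer, then log in
sh infsh-install.sh && infsh login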

Finding 2: Broad infsh command access

What this means

An agent using this skill may run infsh commands under the user's logged-in account, including model calls that could use quota or incur cost.

Why it was flagged

The skill grants the agent access to the infsh CLI via a wildcard command pattern. This is central to the skill's purpose, but it is broader than only the listed OpenRouter model examples.

Skill content
allowed-tools: Bash(infsh *)
Recommendation

Review intended infsh commands, use account spending limits where available, and avoid leaving the CLI logged into accounts you do not want the agent to use.
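If the skill's permission syntax supports narrower patterns, the wildcard could be tightened to the invocation style the examples actually use. The line below is a sketch only; it assumes the same allowed-tools pattern format shown above and has not been validated against the skill runner.

# Narrower than Bash(infsh *): allows running the listed OpenRouter apps, not login or other subcommands (assumed pattern support)
allowed-tools: Bash(infsh app run openrouter/*)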

Finding 3: Credentialed account login

What this means

The agent may be able to use the logged-in inference.sh/OpenRouter account for model requests.

Why it was flagged

The skill requires a credentialed CLI login even though the registry metadata lists no primary credential. This is expected for provider-backed LLM access, but it still grants the agent authority over that account.

Skill content
infsh login
Recommendation

Log in only with an account you intend this skill to use, and review the account's billing, quota, and access controls.

Finding 4: Prompts sent to an external provider

What this means

Prompts, system prompts, and any included context may be sent to external model providers.

Why it was flagged

The examples show user prompts being sent through the inference.sh/OpenRouter provider path to external LLMs. That is the core feature, but it creates a third-party data boundary.

Skill content
infsh app run openrouter/claude-sonnet-45 --input '{"prompt": "Explain quantum computing"}'
Recommendation

Do not send secrets, private documents, or regulated data unless you have reviewed the provider's privacy, retention, and routing policies.
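One practical guardrail in that direction is to screen prompt text for obvious secret markers before it leaves the machine. The wrapper below is a rough sketch built around the same invocation shown above; the grep patterns are illustrative assumptions rather than a complete secret scanner, and it requires jq for safe JSON quoting.

#!/bin/sh
# Usage: ./safe-prompt.sh "your prompt text"
PROMPT="$1"

# Refuse to forward prompts containing obvious secret markers (illustrative patterns only)
if printf '%s' "$PROMPT" | grep -Eq 'BEGIN (RSA |EC )?PRIVATE KEY|AKIA[0-9A-Z]{16}|password[[:space:]]*='; then
  echo "Refusing to send prompt: it appears to contain secret material." >&2
  exit 1
fi

# Same provider path as the skill example; only the screened prompt is sent
infsh app run openrouter/claude-sonnet-45 --input "$(jq -n --arg p "$PROMPT" '{prompt: $p}')"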