Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

LLM Models

Access Claude, Gemini, Kimi, GLM and 100+ LLMs via inference.sh CLI using OpenRouter. Models: Claude Opus 4.5, Claude Sonnet 4.5, Claude Haiku 4.5, Gemini 3...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 1.1k · 3 current installs · 3 all-time installs
by Ömer Karışman @okaris
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Benign
medium confidence
Purpose & Capability
The name/description (access many LLMs via inference.sh/OpenRouter) matches the SKILL.md examples and commands. Required binaries/env vars are not declared because the skill is instruction-only and expects the inference.sh CLI and an account login, which is proportionate to the stated purpose.
Instruction Scope
Instructions are narrowly scoped to installing the inference.sh CLI, running login, and calling infsh app run/sample commands. They do not ask the agent to read arbitrary system files or unrelated credentials, but they do instruct the user to run a remote installer (curl | sh) and to perform an interactive login, which implies supplying credentials.
Install Mechanism
The SKILL.md tells users to run a remote installer via curl -fsSL https://cli.inference.sh | sh, which downloads and installs a binary from dist.inference.sh. The doc mentions SHA-256 checksums, but a piped install is a moderate risk: the installer writes executables to disk, so verify the checksums and the script contents before executing.
Credentials
No environment variables are declared in the registry metadata, yet the instructions require an interactive 'infsh login' (so you'll supply an API key or account credentials). This is expected for a CLI that talks to OpenRouter/inference.sh and is not disproportionate, but the skill does not declare the auth variables or storage location explicitly.
Persistence & Privilege
The 'always' flag is false and there are no OS restrictions. The installer creates a local CLI binary (persistent on disk), which is normal for a CLI-based integration. The skill does not request elevated platform privileges or modify other skills.
Assessment
This skill appears to be what it claims: a guide to using the inference.sh CLI to access many models. Before installing:
1) Inspect the installer script at https://cli.inference.sh (do not blindly run curl | sh).
2) Verify the downloaded binary's SHA-256 against the published checksums on dist.inference.sh.
3) Use API credentials only for the intended provider (inference.sh/OpenRouter) and avoid reusing high-privilege keys.
4) Consider installing in a sandbox or container if you want to limit risk.
If you need the skill to run non-interactively, ask the author how credentials are expected to be provided and stored.
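Step 2 above can be sketched as a small shell helper, assuming sha256sum is available. The expected digest must be copied out-of-band from the checksums published on dist.inference.sh; the digest shown in the usage comment is hypothetical.

```shell
# Compare a downloaded file's SHA-256 against a published digest.
# The digest must be obtained out-of-band (e.g. from dist.inference.sh).
verify_sha256() {
  file="$1"
  expected="$2"
  actual=$(sha256sum "$file" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "OK: checksum matches for $file"
  else
    echo "FAIL: expected $expected, got $actual" >&2
    return 1
  fi
}

# Usage (hypothetical digest):
# verify_sha256 ./infsh "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"
```

A mismatch prints the two digests and returns non-zero, so the function can gate the install step in a script.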

Like a lobster shell, security has layers — review code before you run it.

Current version: v0.1.5
Download zip
latest: vk972x2phc41rr554ry553d27f981d5nf

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

LLM Models via OpenRouter

Access 100+ language models via inference.sh CLI.

Quick Start

curl -fsSL https://cli.inference.sh | sh && infsh login

# Call Claude Sonnet
infsh app run openrouter/claude-sonnet-45 --input '{"prompt": "Explain quantum computing"}'

Install note: The install script only detects your OS/architecture, downloads the matching binary from dist.inference.sh, and verifies its SHA-256 checksum. It requires no elevated permissions and starts no background processes. A manual install with verification is also available.
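Since --input takes a JSON string, it can be worth sanity-checking the payload locally before the call. A minimal sketch using python3's json.tool (jq would work equally well); the payload is the one from the Quick Start example:

```shell
# Validate the inline JSON payload before handing it to infsh.
payload='{"prompt": "Explain quantum computing"}'
if echo "$payload" | python3 -m json.tool > /dev/null 2>&1; then
  echo "payload is valid JSON"
else
  echo "payload is malformed" >&2
  exit 1
fi
# Then run, e.g.:
# infsh app run openrouter/claude-sonnet-45 --input "$payload"
```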

Available Models

Model               App ID                           Best For
Claude Opus 4.5     openrouter/claude-opus-45        Complex reasoning, coding
Claude Sonnet 4.5   openrouter/claude-sonnet-45      Balanced performance
Claude Haiku 4.5    openrouter/claude-haiku-45       Fast, economical
Gemini 3 Pro        openrouter/gemini-3-pro-preview  Google's latest
Kimi K2 Thinking    openrouter/kimi-k2-thinking      Multi-step reasoning
GLM-4.6             openrouter/glm-46                Open-source, coding
Intellect 3         openrouter/intellect-3           General purpose
Any Model           openrouter/any-model             Auto-selects best option

Search LLM Apps

infsh app list --search "openrouter"
infsh app list --search "claude"

Examples

Claude Opus (Best Quality)

infsh app run openrouter/claude-opus-45 --input '{
  "prompt": "Write a Python function to detect palindromes with comprehensive tests"
}'

Claude Sonnet (Balanced)

infsh app run openrouter/claude-sonnet-45 --input '{
  "prompt": "Summarize the key concepts of machine learning"
}'

Claude Haiku (Fast & Cheap)

infsh app run openrouter/claude-haiku-45 --input '{
  "prompt": "Translate this to French: Hello, how are you?"
}'

Kimi K2 (Thinking Agent)

infsh app run openrouter/kimi-k2-thinking --input '{
  "prompt": "Plan a step-by-step approach to build a web scraper"
}'

Any Model (Auto-Select)

# Automatically picks the most cost-effective model
infsh app run openrouter/any-model --input '{
  "prompt": "What is the capital of France?"
}'

With System Prompt

infsh app sample openrouter/claude-sonnet-45 --save input.json

# Edit input.json:
# {
#   "system": "You are a helpful coding assistant",
#   "prompt": "How do I read a file in Python?"
# }

infsh app run openrouter/claude-sonnet-45 --input input.json
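Instead of sampling and then editing input.json by hand, the same file can be written in one step (the 'system' and 'prompt' field names are taken from the example above):

```shell
# Write the system+prompt input file directly, skipping the manual edit.
cat > input.json <<'EOF'
{
  "system": "You are a helpful coding assistant",
  "prompt": "How do I read a file in Python?"
}
EOF
echo "wrote input.json"
# Then: infsh app run openrouter/claude-sonnet-45 --input input.json
```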

Use Cases

  • Coding: Generate, review, debug code
  • Writing: Content, summaries, translations
  • Analysis: Data interpretation, research
  • Agents: Build AI-powered workflows
  • Chat: Conversational interfaces

Related Skills

# Full platform skill (all 150+ apps)
npx skills add inference-sh/skills@inference-sh

# Web search (combine with LLMs for RAG)
npx skills add inference-sh/skills@web-search

# Image generation
npx skills add inference-sh/skills@ai-image-generation

# Video generation
npx skills add inference-sh/skills@ai-video-generation

Browse all apps: infsh app list
