Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

IdleClaw

v1.2.1

Share your idle Ollama inference with the community, or use community inference when your API credits run out.

Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
The name and description are consistent with the required binaries (python3, ollama) and the included scripts: contribute.py registers local Ollama models and relays inference, consume.py posts prompts to the routing server, and status.py queries server health. No unrelated credentials, binaries, or config paths are requested.
Instruction Scope
SKILL.md accurately describes network interactions. The code implements the described behaviors: WebSocket registration, forwarding JSON inference params to local Ollama, streaming JSON responses back, and client-side validation and limits. The scripts do not spawn shells, read arbitrary files, or access secrets beyond optional environment variables (IDLECLAW_SERVER, OLLAMA_HOST).
Install Mechanism
The repository includes an install.sh that runs pip install -r requirements.txt (packages: ollama, websockets, python-dotenv, httpx). This is a typical Python install flow and does not download arbitrary artifacts, but the registry metadata indicated 'no install spec' while files include an installer—this packaging inconsistency is worth noting. Installing Python packages will write to disk and add dependencies to your environment.
Credentials
No required secret env vars are declared. Optional env vars used by the code are IDLECLAW_SERVER and OLLAMA_HOST (both non-secret configuration). The skill does not request unrelated cloud credentials or tokens.
Persistence & Privilege
The skill is not always-enabled, is user-invocable, and does not modify other skills or system-wide agent settings. It does not persist user data to disk. It opens network connections to the routing server as expected for its function.
Assessment
This skill will make your local Ollama models available to an external routing server (by default https://api.idleclaw.com) and will forward chat prompts to community nodes when consuming. It does not request passwords or API keys, execute shell commands, or read arbitrary files, but it does transmit the text of prompts and model outputs to an external service; treat that as potential data leakage. Before installing:

  1. Review and trust the routing server you will use (set IDLECLAW_SERVER to a self-hosted endpoint if you prefer).
  2. Inspect install.sh and the pip requirements, and consider installing into a virtualenv.
  3. Run contributor nodes on an isolated machine or VM if you are concerned about exposing prompt content.
  4. If you need stronger guarantees, host the routing server yourself and re-audit both server and client code.

The packaging inconsistency (registry claims no install spec while an installer and requirements exist) is a minor red flag; confirm the intended install steps before proceeding.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🦀 Clawdis
OS: macOS · Linux
Bins: python3, ollama
latest: vk97bmczv05j1f1wzkb3n16sf3582cz75
372 downloads
1 star
7 versions
Updated 5h ago
v1.2.1
MIT-0
macOS, Linux

IdleClaw

A distributed inference network for Ollama. Contributors share idle GPU/CPU capacity, consumers use community compute when their API credits run out.

Modes

Contribute — Share your idle inference

Start your machine as an inference node. Your local Ollama models become available to the community.

cd "$SKILL_DIR" && python scripts/contribute.py

This connects to the IdleClaw routing server, registers your available models, and begins accepting inference requests. Press Ctrl+C to stop.

Requirements: Ollama must be running with at least one model pulled.
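The registration step described above can be sketched in Python. Note that the message shape (the `type`, `node_id`, and `models` fields) is an assumption for illustration, not the skill's actual wire protocol:

```python
import json
import os
import uuid

# Server URL resolved with the same default the skill documents.
IDLECLAW_SERVER = os.environ.get("IDLECLAW_SERVER", "https://api.idleclaw.com")

def build_registration(models):
    """Build the JSON registration message a contributor node might
    send over the WebSocket on connect (field names are assumptions)."""
    return json.dumps({
        "type": "register",
        "node_id": uuid.uuid4().hex,
        "models": sorted(models),
    })

# Example: announce two locally pulled Ollama models.
registration = json.loads(build_registration(["qwen2.5:7b", "llama3:8b"]))
```

In the real script, a message like this would be sent with the websockets library after connecting to IDLECLAW_SERVER, after which the node loops, forwarding incoming inference requests to local Ollama.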

Consume — Use community inference

Send a chat request to the community network instead of running locally.

cd "$SKILL_DIR" && python scripts/consume.py --model <model-name> --prompt "<your message>"

Streams the response to stdout as tokens arrive.
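Since the server streams tokens over SSE, the client has to reassemble them from `data:` lines. A minimal stdlib-only sketch, assuming the server puts each token in a JSON `token` field (that field name is a guess; the actual payload format may differ):

```python
import json

def parse_sse_line(line):
    """Return the token carried by one SSE `data:` line, or None.
    The `data:` prefix is standard SSE framing; the `token` JSON
    field name is an assumption about this server's payload."""
    if not line.startswith("data: "):
        return None
    try:
        event = json.loads(line[len("data: "):])
    except json.JSONDecodeError:
        return None
    return event.get("token")

# Reassemble a streamed response from raw SSE lines
# (blank lines and ": keepalive" comments are ignored).
lines = ['data: {"token": "Hel"}', '', 'data: {"token": "lo"}', ': keepalive']
text = "".join(t for t in (parse_sse_line(l) for l in lines) if t)
```

In the actual script, the lines would come from an HTTP streaming response (httpx is in the skill's requirements) rather than a hard-coded list.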

Status — Check network health

See how many nodes are online and what models are available.

cd "$SKILL_DIR" && python scripts/status.py
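A status report like this presumably aggregates the per-node model lists returned by /api/models. A sketch under an assumed response shape (the `nodes`/`models` structure is hypothetical):

```python
def summarize_models(payload):
    """Collapse a hypothetical /api/models response into a
    {model_name: node_count} mapping (response shape is assumed)."""
    counts = {}
    for node in payload.get("nodes", []):
        for model in node.get("models", []):
            counts[model] = counts.get(model, 0) + 1
    return counts

# Two nodes online, one model shared between them.
status = {"nodes": [{"models": ["llama3:8b"]},
                    {"models": ["llama3:8b", "qwen2.5:7b"]}]}
summary = summarize_models(status)
```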

Configuration

Variable          Default                     Description
IDLECLAW_SERVER   https://api.idleclaw.com    Routing server URL
OLLAMA_HOST       http://localhost:11434      Local Ollama endpoint
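Both variables are optional, so client code can resolve them with the documented defaults. A minimal sketch (the `config` helper is illustrative, not a function from the skill):

```python
import os

def config(env=os.environ):
    """Resolve the two documented settings, falling back to the
    defaults from the table above when the variables are unset."""
    return {
        "server": env.get("IDLECLAW_SERVER", "https://api.idleclaw.com"),
        "ollama": env.get("OLLAMA_HOST", "http://localhost:11434"),
    }
```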

Security

External Endpoints

This skill contacts the following external endpoints:

  1. IdleClaw Routing Server (IDLECLAW_SERVER, default https://api.idleclaw.com)

    • Contribute mode: Opens a WebSocket connection to register as an inference node. Sends: node ID, available model names, and inference responses. Receives: inference requests (model name, chat messages, and optional tool schemas).
    • Consume mode: Sends HTTP POST to /api/chat with model name and chat messages. Receives: streaming token response via SSE.
    • Status mode: Sends HTTP GET to /health and /api/models. Receives: server health info and available model list.
  2. Local Ollama (OLLAMA_HOST, default http://localhost:11434)

    • Contribute mode only: Calls Ollama's API to list models and run inference. All communication stays on localhost.

Data Handling

  • No user data is persisted locally or on the server beyond the active session.
  • No credentials or API keys are required or stored.
  • All communication is text — every message between the server, the node, and Ollama is JSON text over WebSocket or HTTP. No binary data, file uploads, images, or executable payloads are transmitted.
  • No local code execution — the contributor node is a relay. It forwards JSON inference parameters to Ollama and streams JSON responses back to the server. The node does not execute tools, run shell commands, or access the filesystem. Any tool execution is handled server-side after response validation.
  • Chat messages (text strings) are transmitted from consumer to server to contributor node for inference, then discarded.
  • No telemetry or analytics are collected.
  • In contribute mode, the routing server sends JSON inference requests to the node, which forwards them to your local Ollama instance. Ollama returns a JSON text response which the node relays back. Contributors can point IDLECLAW_SERVER to a self-hosted instance.
  • In consume mode, text prompts are sent to the routing server which routes them to an available contributor node.

Sanitization

Client-side:

  • Inference parameters are validated before passing to Ollama: only whitelisted keys are forwarded (model, messages, stream, think, keep_alive, options, tools, format). Unknown keys are stripped.
  • Requested model must match a model the node registered — requests for unregistered models are rejected.
  • Message limits enforced: max 50 messages per request, max 10,000 characters per message content.
  • Only known response fields are forwarded back to the server (role, content, thinking, tool_calls).
  • In consume mode, model names are validated against a strict pattern (alphanumeric characters, colons, periods, and hyphens only).
  • Server URLs are validated as HTTP/HTTPS URLs before use.
  • No shell commands are constructed from user input — all execution is Python-only.
  • No local files are read or accessed — the skill only communicates with Ollama and the routing server.
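The client-side checks above amount to a whitelist-and-limits filter on each inference request. An illustrative reimplementation (not the skill's actual code) using the documented key whitelist and limits:

```python
# Whitelisted request keys and limits, as documented by the skill.
ALLOWED_KEYS = {"model", "messages", "stream", "think", "keep_alive",
                "options", "tools", "format"}
MAX_MESSAGES = 50
MAX_CHARS = 10_000

def sanitize_request(req, registered_models):
    """Strip unknown keys, reject unregistered models, and enforce
    message limits before a request would be passed to Ollama.
    (Illustrative sketch of the documented checks.)"""
    req = {k: v for k, v in req.items() if k in ALLOWED_KEYS}
    if req.get("model") not in registered_models:
        raise ValueError("model not registered on this node")
    messages = req.get("messages", [])
    if len(messages) > MAX_MESSAGES:
        raise ValueError("too many messages")
    for m in messages:
        if len(m.get("content", "")) > MAX_CHARS:
            raise ValueError("message content too long")
    return req
```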

Server-side (routing server):

  • IP-based rate limiting on all endpoints: chat (20 RPM), node registration (5 RPM), general (60 RPM).
  • Input validation: max 50 messages per request, 10,000 chars per message, 64-char model names, roles restricted to user and assistant.
  • Output sanitization: response content is stripped of markup tags before delivery to consumers.
  • Node registration limits: max 3 nodes per IP, max concurrent requests clamped to 1-10.
  • Tool execution safeguards: schema validation, argument type checking, 15-second timeout, per-node rate limiting (20 calls/min).
  • Server binds to localhost only, accessed through Caddy reverse proxy with auto-TLS.
  • Red team tested with documented findings and mitigations (security assessment on GitHub).
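Per-IP rate limiting of the kind listed above (e.g. 20 chat requests per minute per IP) is commonly implemented as a sliding window. A minimal sketch; the real server's implementation is not shown here and may differ:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window per-IP rate limiter: allow at most `limit`
    requests per `window` seconds from each address."""
    def __init__(self, limit, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # ip -> timestamps of recent hits

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        # Drop hits that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True
```

Instantiating `RateLimiter(20)` would match the documented 20 RPM chat limit; each endpoint class would get its own limiter.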

Installation

Run the installer to set up Python dependencies:

cd "$SKILL_DIR" && bash install.sh
