Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

OpenClaw Dual Agent

v1.1.1

Run two OpenClaw agents simultaneously — a paid Anthropic agent and a free agent using either OpenRouter or local Ollama models. Trigger phrases: multi-agent...

0 stars · 188 downloads · 0 current · 0 all-time
by Deonte Cooper (@djc00p)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for djc00p/openclaw-dual-agent.

Prompt Preview: Install & Setup
Install the skill "OpenClaw Dual Agent" (djc00p/openclaw-dual-agent) from ClawHub.
Skill page: https://clawhub.ai/djc00p/openclaw-dual-agent
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: ANTHROPIC_API_KEY
Required binaries: jq
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install openclaw-dual-agent

ClawHub CLI

Package manager switcher

npx clawhub@latest install openclaw-dual-agent
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name/description match the runtime instructions: it shows how to run an Anthropic (paid) agent alongside a free OpenRouter or local Ollama agent, configure openclaw.json, Telegram bindings, and auth files. The requested artifacts (Anthropic key, OpenRouter key, Telegram bot tokens, Ollama if chosen) are appropriate for the stated purpose.
Instruction Scope
SKILL.md is purely operational guidance: creating Telegram bots, running openclaw commands, placing auth-profiles.json, editing openclaw.json, pulling Ollama models, and starting ollama serve. It does not instruct the agent to read unrelated system secrets or contact unknown endpoints beyond documented services (openrouter.ai, api.telegram.org, ollama).
Install Mechanism
This is instruction-only (no install spec, no code files). The only external install guidance is to install Ollama via Homebrew or the vendor site, which is standard and low-risk for an instruction-only skill.
Credentials
There is a mismatch between the registry metadata (which lists no required env vars) and the SKILL.md content/embedded metadata, which references ANTHROPIC_API_KEY and requires OpenRouter API keys and Telegram bot tokens/auth-profiles.json. Requesting Anthropic/OpenRouter keys and Telegram tokens is reasonable for the feature, but the packaging omission (the registry not declaring these expectations) is an incoherence that could lead to surprising credential prompts or misconfiguration. The instructions also show examples that could lead users to store keys in files under ~/.openclaw; SKILL.md warns about file permissions, but the guidance should explicitly state secure storage practices.
Persistence & Privilege
The skill does not request 'always: true' or system-wide changes. It instructs user-scoped configuration under ~/.openclaw and creating agent-specific auth-profiles.json; that is expected for OpenClaw multi-agent setup and stays within the agent's domain.
What to consider before installing
This skill is mostly coherent for its stated purpose, but there are a few things to check before installing:

  • Expect to provide credentials: you will need your Anthropic API key for the paid agent and (only for the cloud free-agent option) an OpenRouter API key placed in auth-profiles.json in the agentDir. You will also need Telegram bot tokens for each bot. The registry metadata did not declare these, so don't be surprised when setup asks for them.
  • Keep secrets safe: follow the SKILL.md advice — use interactive onboarding where possible, set auth files to chmod 600, and avoid passing API keys on the CLI or storing them in world-readable files. Consider a secrets manager instead of plaintext files.
  • Verify endpoints and packages: the instructions reference https://openrouter.ai and https://ollama.ai (and Homebrew). Confirm you expect these providers and that the referenced model IDs are legitimate.
  • Local option is privacy-friendly: the Ollama/local route avoids sending data to cloud providers after models are pulled, but it requires large downloads and running 'ollama serve'.
  • Packaging mismatch: the registry metadata missing required env vars is a packaging issue (sloppy but not necessarily malicious). For higher assurance, ask the publisher to clarify the required environment variables and confirm exactly what files the skill will read and write under ~/.openclaw.

If you plan to proceed, review and back up any existing ~/.openclaw configuration, create per-agent auth files with restrictive permissions, and test the setup in a controlled environment before connecting production accounts.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🤖 Clawdis
OS: macOS · Linux · Windows
Bins: jq
Environment variables: ANTHROPIC_API_KEY (required)
latest: vk9730x8hk4j6pabhkxtcqgdr5584pytx
188 downloads
0 stars
6 versions
Updated 2w ago
v1.1.1
MIT-0
macOS, Linux, Windows

Multi-Agent OpenClaw Setup

Run a paid Anthropic agent and free OpenRouter agent side by side with separate Telegram bots.

Quick Start

  1. Create two Telegram bots via @BotFather and extract chat IDs:

    curl https://api.telegram.org/bot{TOKEN}/getUpdates | jq '.result[0].message.chat.id'
    
  2. Authenticate agents:

    # Run interactively — avoids exposing keys in shell history
    openclaw onboard
    

    ⚠️ Never pass API keys directly on the CLI (e.g. --anthropic-api-key ...) — it exposes them in shell history. Always use openclaw onboard interactively. Credential files (auth-profiles.json, openclaw.json) should be chmod 600.

  3. Configure openclaw.json with two agents, separate bindings, and Telegram accounts.

  4. Verify setup:

    openclaw doctor
    openclaw sessions cleanup \
      --store /Users/YOUR_USERNAME/.openclaw/agents/main/store \
      --enforce --fix-missing
    openclaw restart
    

Key Concepts

  • Agent isolation: Each agent has its own agentDir, workspace, and model config.
  • Binding routing: accountId in bindings directs Telegram messages to the correct agent.
  • Model refs: Use provider/modelid format (e.g., anthropic/claude-sonnet-4-6).
  • Per-agent auth: OpenRouter requires auth-profiles.json in each agent's directory.
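
For the per-agent auth bullet, an auth-profiles.json might look like the following. The field names here are illustrative guesses rather than the verified schema (see references/config-reference.md), and the file should be chmod 600:

```json
{
  "openrouter": {
    "apiKey": "sk-or-REPLACE_ME"
  }
}
```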

Common Usage

Adding a free agent:

  • Create agentDir at /Users/YOUR_USERNAME/.openclaw/agents/free-agent/agent
  • Add agent entry to openclaw.json with model.primary: "openrouter/..."
  • Create auth-profiles.json with OpenRouter API key in agent's directory
  • Add binding with unique accountId (e.g., "tg2")
  • Restart: openclaw restart
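
The steps above can be sketched as shell commands. The paths mirror this doc's examples, and the JSON field names are illustrative (see references/config-reference.md for the real schema):

```shell
# Create the agentDir for the hypothetical free agent
AGENT_DIR="$HOME/.openclaw/agents/free-agent/agent"
mkdir -p "$AGENT_DIR"

# Write a placeholder auth file; replace the key via interactive onboarding
# rather than committing a real key to disk in plaintext if you can avoid it
cat > "$AGENT_DIR/auth-profiles.json" <<'EOF'
{ "openrouter": { "apiKey": "sk-or-REPLACE_ME" } }
EOF

# Credential files should never be world-readable
chmod 600 "$AGENT_DIR/auth-profiles.json"
```

After editing openclaw.json to add the agent entry and binding, finish with openclaw restart as above.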

Switching models: Edit openclaw.json agent's model.primary and fallbacks with valid provider/id strings.

Masking secrets for logs:

jq '.channels.telegram.accounts |= map_values(.botToken = "[REDACTED]")' \
  ~/.openclaw/openclaw.json

Option B: Local Ollama Agent (Free + Private)

Instead of OpenRouter, run your second agent on a local Ollama model — completely free, fully private, and offline-capable.

Install & Configure Ollama

macOS:

# Install via Homebrew
brew install ollama

# Or download from https://ollama.ai

Start Ollama:

# In a dedicated terminal, keep it running
ollama serve

Pull a model (choose one based on your needs):

# Google Gemma 4 26B — good balance of capability and speed (17GB)
ollama pull gemma4:26b

# Meta Llama 3.3 70B — very capable, excellent reasoning (43GB)
ollama pull llama3.3:70b

# Qwen 2.5 32B — strong coding and multilingual (20GB)
ollama pull qwen2.5:32b

# Mistral 7B — fast and lightweight, good for quick responses (4GB)
ollama pull mistral:7b

Configure OpenClaw with Ollama Agent

Add the agent entry to openclaw.json (e.g., id: "ayo"):

{
  "id": "ayo",
  "name": "Ayo",
  "workspace": "/Users/YOUR_USERNAME/.openclaw/workspace-ayo",
  "agentDir": "/Users/YOUR_USERNAME/.openclaw/agents/ayo/agent",
  "model": {
    "primary": "ollama/gemma4:26b",
    "fallbacks": [
      "openrouter/free"
    ]
  },
  "heartbeat": {
    "every": "1h",
    "model": "openrouter/free"
  }
}

Key points:

  • Model format: Always use ollama/modelname:tag (e.g., ollama/gemma4:26b, ollama/llama3.3:70b)
  • No API key needed: Ollama runs entirely locally. No auth-profiles.json required.
  • Ollama must be running: Start ollama serve in a terminal before the gateway starts
  • Pull first: Run ollama pull modelname:tag before configuring (the model must exist locally)
  • Heartbeat fallback: The example uses openrouter/free as a fallback since Ollama models may be slower for heartbeats. You can also use the same Ollama model (ollama/gemma4:26b) if you prefer fully local operation
  • Add Telegram binding: Include a separate binding with a unique accountId (e.g., "tg_ollama") to route messages to Ayo
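
The Telegram binding bullet might translate to an entry like this. The field names follow the accountId and agent-id patterns used on this page; treat the exact schema as an assumption and check references/config-reference.md:

```json
{
  "agentId": "ayo",
  "accountId": "tg_ollama",
  "channel": "telegram"
}
```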

After config change:

# Verify no errors
openclaw doctor

# Restart the gateway
openclaw gateway restart

Common Gotchas

❌ Wrong                   ✅ Correct                  Issue
gemma4:26b:local           ollama/gemma4:26b           Invalid format; always use provider/model:tag
gemma4:26b                 ollama/gemma4:26b           Without prefix, OpenClaw won't route to Ollama
ollama/kimi-k2.5:cloud     openrouter/kimi-k2.5:cloud  Cloud models don't belong in Ollama fallbacks
Model not pulled           ollama pull gemma4:26b      Gateway fails silently if model doesn't exist locally

If you see "Invalid input" errors in openclaw doctor, check the model.primary format — it must start with ollama/.
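
To spot bad refs quickly, you can list every agent's primary model with jq. The one-liner below runs against an inline sample so it is self-contained; point jq at ~/.openclaw/openclaw.json for the real check, and note that the top-level agents array is an assumption about the config shape:

```shell
# Print "id<TAB>model.primary" for each agent; anything missing the intended
# provider prefix (ollama/, anthropic/, openrouter/) stands out immediately.
printf '%s\n' '{"agents":[{"id":"ayo","model":{"primary":"ollama/gemma4:26b"}}]}' |
  jq -r '.agents[] | "\(.id)\t\(.model.primary)"'
```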

References

  • references/config-reference.md — Full openclaw.json, bindings, and auth-profiles.json examples
  • references/troubleshooting.md — Common errors, fixes, and Node.js compatibility notes
