Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Openclaw Research Tool

v0.1.5

Search the web using LLMs via OpenRouter. Use for current web data, API docs, market research, news, fact-checking, or any question that benefits from live i...

0 stars · 732 downloads · 1 current · 1 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for aaronn/openclaw-search-tool.

Prompt Preview: Install & Setup
Install the skill "Openclaw Research Tool" (aaronn/openclaw-search-tool) from ClawHub.
Skill page: https://clawhub.ai/aaronn/openclaw-search-tool
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: OPENROUTER_API_KEY
Required binaries: research-tool
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install aaronn/openclaw-search-tool

ClawHub CLI

Package manager switcher

npx clawhub@latest install openclaw-search-tool

Security Scan
VirusTotal
Suspicious
OpenClaw
Benign (high confidence)
Purpose & Capability
The name and description (web research via OpenRouter) align with the declared requirements: the skill needs the research-tool CLI binary and an OPENROUTER_API_KEY. The README/metadata references a GitHub repo, which is consistent with a third-party CLI wrapper.
Instruction Scope
SKILL.md contains only instructions for running the research-tool CLI (via sessions_spawn/exec) and for setting flags; it does not direct the agent to read unrelated local files or secrets. It does reference optional env vars (RESEARCH_EFFORT, RESEARCH_MODEL) that are not listed in requires.env, but these are configuration knobs, not extra secrets. It also warns against setting an exec timeout, which is unusual operational guidance but not a security issue by itself.
Install Mechanism
The skill is instruction-only (no install spec). SKILL.md suggests installing via `cargo install openclaw-search-tool`, which is a reasonable, traceable install mechanism for a Rust-based CLI. The registry did not include an automated install step; that's low risk but means users should verify the binary source themselves before installing.
Credentials
Only OPENROUTER_API_KEY is required (declared as primary credential), which is proportional to a tool that talks to OpenRouter. The docs mention additional optional env vars (RESEARCH_MODEL, RESEARCH_EFFORT) not declared in requires.env — these are non-sensitive configuration items. Users should remember the API key grants access/billing on OpenRouter and therefore is sensitive.
Persistence & Privilege
The skill does not request always:true or any system-wide config paths and does not attempt to persist beyond normal CLI usage. disable-model-invocation is false (normal) — the skill can be invoked by the agent but has no elevated installation privileges.
Assessment
This skill appears to do what it says: run a CLI that queries OpenRouter and returns citation-backed answers. Before installing: (1) verify the origin of the research-tool binary (inspect the GitHub repo or crate) because the CLI will send your queries to OpenRouter; (2) treat OPENROUTER_API_KEY as a secret (it enables access and billing on your OpenRouter account); (3) avoid sending sensitive personal or proprietary data through the tool unless you trust OpenRouter's handling and your account settings; (4) follow the author's recommendation to run it in a sub-agent so your main session doesn't block; and (5) if you want stronger assurance, review the CLI source code or build from source rather than running a prebuilt binary.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🔍 Clawdis
Bins: research-tool
Env: OPENROUTER_API_KEY
Primary env: OPENROUTER_API_KEY
Latest: vk97d9dbqf9zeqcxzxttzpqxc9181b4x4
732 downloads
0 stars
5 versions
Updated 3h ago
v0.1.5
MIT-0

OpenClaw Research Tool

Web search for OpenClaw agents, powered by OpenRouter. Ask questions in natural language, get accurate answers with cited sources. Defaults to GPT-5.2 which excels at documentation lookups and citation-heavy research.

Note: Even low-effort queries may take 1 minute or more to complete. High/xhigh reasoning can take 10+ minutes depending on complexity. This is normal — the model is searching the web, reading pages, and synthesizing an answer.

Recommended: Run research-tool in a sub-agent so your main session stays responsive:

sessions_spawn task:"research-tool 'your query here'"

⚠️ Never set a timeout on exec when running research-tool. Queries routinely take 1-10+ minutes. Use yieldMs to background it, then poll — but do NOT set timeout or the process will be killed mid-search.

The :online model suffix gives any model live web access — it searches the web, reads pages, cites URLs, and synthesizes an answer.

Install

cargo install openclaw-search-tool

Requires OPENROUTER_API_KEY env var. Get a key at https://openrouter.ai/keys
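In a local shell session, the key can be exported before running the tool. A minimal sketch; the value below is a placeholder, not a real key format guarantee:

```shell
# Placeholder value for illustration; substitute your own key from https://openrouter.ai/keys
export OPENROUTER_API_KEY="sk-or-xxxxxxxx"
```

Any research-tool invocation in the same session will then pick the key up from the environment.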

Quick start

research-tool "What are the x.com API rate limits?"
research-tool "How do I set reasoning effort parameters on OpenRouter?"

From an OpenClaw agent

# Best: run in a sub-agent (main session stays responsive)
sessions_spawn task:"research-tool 'your query here'"

# Or via exec — NEVER set timeout, use yieldMs to background:
exec command:"research-tool 'your query'" yieldMs:5000
# then poll the session until complete

Flags

--effort, -e (default: low)

Controls how much the model reasons before answering. Higher effort means better analysis but slower and more tokens.

research-tool --effort low "What year was Rust 1.0 released?"
research-tool --effort medium "Explain how OpenRouter routes requests to different model providers"
research-tool --effort high "Compare tradeoffs between Opus 4.6 and gpt-5.3-codex for programming"
research-tool --effort xhigh "Deep analysis of React Server Components vs traditional SSR approaches"
| Level  | Speed      | When to use                                     |
| ------ | ---------- | ----------------------------------------------- |
| low    | ~1-3 min   | Quick fact lookups, simple questions            |
| medium | ~2-5 min   | Standard research, moderate analysis            |
| high   | ~3-10 min  | Deep analysis with careful reasoning            |
| xhigh  | ~5-20+ min | Maximum reasoning, complex multi-source synthesis |

Can also be set via env var RESEARCH_EFFORT.

--model, -m (default: openai/gpt-5.2:online)

Which model to use. Defaults to GPT-5.2 with the :online suffix because it excels at questions where citations and accurate documentation lookups matter. The :online suffix enables live web search and works with any model on OpenRouter.

# Default: GPT-5.2 with web search (great for docs and cited answers)
research-tool "current weather in San Francisco"

# Claude with web search
research-tool -m "anthropic/claude-sonnet-4-20250514:online" "Summarize recent changes to the OpenAI API"

# GPT-5.2 without web search (training data only)
research-tool -m "openai/gpt-5.2" "Explain the React Server Components architecture"

# Any OpenRouter model
research-tool -m "google/gemini-2.5-pro:online" "Compare React vs Svelte in 2026"

Can also be set via env var RESEARCH_MODEL.
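The two env vars can be combined to set per-session defaults, so repeated calls don't need flags. A sketch with illustrative values, assuming (as is conventional for CLI tools) that explicit -e / -m flags still override the environment per call:

```shell
# Session-wide defaults (illustrative values); subsequent research-tool calls use them
export RESEARCH_EFFORT=medium
export RESEARCH_MODEL="google/gemini-2.5-pro:online"
```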

--system, -s

Override the system prompt to give the model a specific persona or instructions.

research-tool -s "You are a senior infrastructure engineer" "Best practices for zero-downtime Kubernetes deployments"
research-tool -s "You are a Rust systems programmer" "Best async patterns for WebSocket servers"

--stdin

Read the query from stdin. Useful for long or multiline queries.

echo "Explain the OpenRouter model routing architecture" | research-tool --stdin
cat detailed-prompt.txt | research-tool --stdin

--max-tokens (default: 12800)

Maximum tokens in the response.

--timeout (optional, no default)

No timeout by default — queries run until the model finishes. Set this only if you need a hard upper bound (e.g. --timeout 300).

Output format

  • stdout: Response text only (markdown with citations) — pipe-friendly
  • stderr: Progress status, reasoning traces, and token usage
🔍 Researching with openai/gpt-5.2:online (effort: high)...
✅ Connected — waiting for response...

[response text on stdout]

📊 Tokens: 4470 prompt + 184 completion = 4654 total | ⏱ 5s
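Because the answer goes to stdout and all status goes to stderr, the two streams can be split with ordinary shell redirection. A minimal sketch (query text is illustrative):

```shell
# Keep only the cited answer; send progress and token usage to a separate log
research-tool "What changed in Rust 1.85?" > answer.md 2> progress.log
```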

Status indicators

  • 🔍 Researching... — request sent to OpenRouter
  • ✅ Connected — waiting for response... — server accepted the request, model is searching/thinking
  • ⏳ 15s... ⏳ 30s... — elapsed time ticks (only in interactive terminals, not in agent exec)
  • ❌ Connection to OpenRouter failed — couldn't reach OpenRouter (network issue)
  • ❌ Connection to OpenRouter lost — connection dropped while waiting. Retry?

Tips for better results

  • Write in natural language. "What are the best practices for Rust error handling and when should you use anyhow vs thiserror?" works better than keyword-style queries.
  • Provide maximum context. The model starts from zero. Include background, what you already know, and all related sub-questions. Detailed prompts massively outperform vague ones.
  • Use effort levels appropriately. low for quick facts, high for real research, xhigh only for complex multi-source analysis.
  • Use -s for domain expertise. A specific persona produces noticeably better domain-specific answers.
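Putting the tips together, a single call can combine a persona, an appropriate effort level, and a context-rich natural-language query (the query text here is illustrative):

```shell
research-tool -s "You are a senior Rust engineer" -e high \
  "We use anyhow throughout our application code. When is it worth migrating library crates to thiserror, and what are the API-stability tradeoffs?"
```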

Cost

~$0.01–0.05 per query. Token usage is printed to stderr after each query.
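Since the usage line goes to stderr, it can be captured and parsed for rough per-query cost tracking. A sketch, assuming the exact `📊 Tokens:` format shown under Output format above:

```shell
# Parse the total token count from a captured stderr usage line
usage='📊 Tokens: 4470 prompt + 184 completion = 4654 total | ⏱ 5s'
total=$(printf '%s\n' "$usage" | grep -o '[0-9]* total' | cut -d' ' -f1)
echo "$total tokens"
```

In practice `usage` would come from the progress log, e.g. `usage=$(grep 'Tokens:' progress.log)`.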
