Install

openclaw skills install deep-research-surf

Conducts deep, multi-angle research using Surf MCP tools and parallel subagents. Use for deep research, competitive landscape analysis, and strategic intelligence.

You are conducting deep, multi-angle research using the Surf MCP suite and parallel subagents. The goal is strategic intelligence with cross-source validation, evidence-rich findings, and orthogonal insights that single-pass searches miss.
Invocation pattern: /deep-research-surf [topic] or any of the trigger phrases in the description. The user may also pass an explicit [topic] argument; if absent, ask once before fanning out.
Always prefer Surf MCP tools (mcp__surf__*) over WebSearch or WebFetch, both for yourself and for every subagent you spawn. Surf primitives cover web search, web crawl, GitHub, Reddit, Twitter / X, Amazon, and YouTube subtitles. Read each tool's schema at invocation time for current parameters and capabilities.
Never use em dashes; use - everywhere in output.

Before spawning anything, internally commit to a one-paragraph plan covering the stages below:
Stage 1: spawn three subagents in parallel via the Task tool (subagent_type=general-purpose, run_in_background=true). Each gets a distinct angle.
Subagent 1 (overview) - Mission: establish the baseline; identify themes, key players, and the dominant narrative.
Searches: 2-3 broad queries via Surf web search.
Sample queries:
- "[topic] overview 2025-2026"
- "[topic] comprehensive guide"
- "what is [topic] how it works"

Returns: themes, key players, 2-3 areas requiring deep dives, source URLs.
Subagent 2 (multi-angle) - Mission: cover technical, user, comparative, leadership, and critical angles with balanced context.
Searches: 6-8 queries across categories via Surf web search. Reach for Surf GitHub, Reddit, or Twitter primitives where the topic warrants.
Query categories (one or two queries each):

- "[topic] architecture design patterns", "[topic] technical deep dive engineering"
- "[topic] developer experience feedback", "[topic] user testimonials reviews"
- "[topic] vs [alternatives] comparison benchmarks", "[topic] alternatives competitors"
- "[topic] case studies customer success stories", "[topic] real-world use cases"
- "[topic] founder interview CEO strategy", "[topic] company blog announcements"
- "[topic] limitations problems challenges", "[topic] criticism drawbacks cons"

Returns: per-category findings with URLs, quotes, data points.
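The [topic] placeholder expansion above can be sketched as a small helper. This is illustrative only - the template dict below abbreviates the category list to two entries, and nothing here is part of the Surf MCP API:

```python
# Hypothetical helper: expand [topic] query templates for subagent 2.
# Only two of the six categories are shown; extend the dict as needed.

QUERY_TEMPLATES = {
    "technical": ["[topic] architecture design patterns",
                  "[topic] technical deep dive engineering"],
    "critical": ["[topic] limitations problems challenges",
                 "[topic] criticism drawbacks cons"],
}

def expand_queries(topic: str) -> dict[str, list[str]]:
    """Substitute the research topic into every template string."""
    return {cat: [t.replace("[topic]", topic) for t in templates]
            for cat, templates in QUERY_TEMPLATES.items()}

queries = expand_queries("vector databases")
```

Each subagent can then run its expanded category list verbatim against Surf web search.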
Subagent 3 (contrarian) - Mission: surface contrarian views, lesser-known strategies, and emerging signals.
Searches: 2-3 queries via Surf web search. Reddit and X are often where contrarian opinions live.
Sample queries:
- "[topic] contrarian opinions different perspective"
- "[topic] lesser-known strategies hidden tactics"
- "[topic] emerging trends future directions 2026"

Returns: orthogonal insights, contrarian views, emerging signals.
Each subagent returns structured findings under 1500 words containing: themes, evidence (quotes, data, dates), source URLs as clickable markdown links, unique angles. Wait for all three before Stage 2.
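The three-angle fan-out can be sketched as follows. Only subagent_type and run_in_background come from this document; the payload shape and prompt field are assumptions, not the Task tool's documented schema:

```python
# Sketch only: build the three parallel Task payloads described above.
# Field names other than subagent_type and run_in_background are assumptions.

ANGLES = {
    "overview": "Establish the baseline: themes, key players, dominant narrative.",
    "multi-angle": "Cover technical, user, comparative, leadership, critical angles.",
    "contrarian": "Surface contrarian views, lesser-known strategies, emerging signals.",
}

def build_fanout(topic: str) -> list[dict]:
    """One Task payload per angle; all three are spawned in parallel."""
    return [
        {
            "subagent_type": "general-purpose",
            "run_in_background": True,  # fan out; wait for all three before Stage 2
            "prompt": f"Research '{topic}'. {mission} "
                      f"Use Surf MCP tools only; return under 1500 words with source URLs.",
        }
        for mission in ANGLES.values()
    ]

tasks = build_fanout("vector databases")
```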
Stage 2: after Stage 1 returns, evaluate the aggregated findings and identify the critical sources or angles that warrant full content extraction.
Decide N based on what Stage 1 surfaced. N is your judgment - could be 2, could be 6. Spawn N subagents in parallel via the Task tool, one per critical source or angle. Each uses the appropriate Surf detail primitive (web crawl, GitHub get, Reddit post, Twitter tweet, YouTube subtitles) for full-content extraction.
Each Stage 2 subagent returns: full extracted content, key quotes, data points, why this source matters.
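Choosing the detail primitive can be sketched as simple URL routing. The labels below are shorthand for the primitive categories named above, not the actual mcp__surf__* tool names, and the host mapping is a heuristic:

```python
from urllib.parse import urlparse

# Heuristic router from a Stage 1 source URL to a Surf detail-primitive
# category. Labels are shorthand, not real tool identifiers.
ROUTES = {
    "github.com": "github get",
    "reddit.com": "reddit post",
    "twitter.com": "twitter tweet",
    "x.com": "twitter tweet",
    "youtube.com": "youtube subtitles",
}

def pick_primitive(url: str) -> str:
    """Map a source URL to the extraction primitive to hand its subagent."""
    host = urlparse(url).netloc.removeprefix("www.")
    return ROUTES.get(host, "web crawl")  # generic pages get a full crawl
```

Read each tool's schema at invocation time rather than trusting a static mapping like this one.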
Stage 3: run the gap-detection checklist on the aggregated Stage 1 + Stage 2 corpus.
If a critical gap exists, close it before moving on - for example, with one targeted follow-up search or extraction subagent. If no critical gap remains, proceed to Stage 4.
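One way to mechanize the gap check, assuming each finding in the corpus is tagged with the angle it covers (the tagging scheme is an assumption, not part of the skill):

```python
# Sketch: flag research angles the aggregated corpus never covered.
REQUIRED_ANGLES = {"technical", "user", "comparative", "leadership",
                   "critical", "contrarian"}

def find_gaps(findings: list[dict]) -> set[str]:
    """Return the required angles with zero findings in the corpus."""
    covered = {f["angle"] for f in findings}
    return REQUIRED_ANGLES - covered

corpus = [{"angle": "technical"}, {"angle": "critical"}, {"angle": "contrarian"}]
gaps = find_gaps(corpus)  # {"user", "comparative", "leadership"}
```

An empty result means no critical gap and the run can proceed to Stage 4.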
Stage 4: analyze the full corpus and extract the recurring themes, supporting evidence, contradictions, and strategic implications. Output the synthesis using exactly this structure:
# [Research Topic Title]
## Executive Summary
2-3 paragraphs directly answering the core research question:
- What is the definitive answer?
- What are the 3-5 most important takeaways?
- What makes this topic significant or unique?
## Key Findings
Organize thematically (NOT source-by-source) with cross-source validation.
### 1. [Theme Name] - [One-line insight]
Evidence:
- [Specific finding from Source A with quote/data]
- [Corroborating evidence from Source B]
- [Unique angle from Source C]
Source citations: [Source A](url), [Source B](url)
### 2. [Theme Name] - [One-line insight]
[Repeat pattern]
### 3-6. [Additional themes...]
## Unique Insights (Orthogonal Findings)
Findings that appeared in only 1-2 sources but provide valuable perspective:
- **[Insight 1]**: [Description with citation]
- **[Insight 2]**: [Description with citation]
## Contradictions & Nuances
Where sources disagree or context matters:
- **[Topic]**: Source A claims X, but Source B argues Y because Z
- **[Topic]**: Common misconception is X, but evidence suggests Y
## Strategic Insights
Actionable recommendations based on synthesis.
### What to Replicate
- [Pattern with reasoning]
### What to Adapt (Not Copy Blindly)
- [Approach with context-specific considerations]
### Differentiation Opportunities
- [Angle: how to do this differently or better]
- [White space not addressed by existing solutions]
## Key Metrics to Track
Based on research, what should be measured:
1. [Metric - why it matters]
2. [Metric - why it matters]
## Sources
Organized by type with one-line annotation.
**Primary Sources:**
- [Source 1](url) - Why valuable: [one-line]
**Analytical Sources:**
- [Source 2](url) - Why valuable: [one-line]
**User / Community Sources:**
- [Source 3](url) - Why valuable: [one-line]
Quality checklist before delivering: every claim carries a source citation, key findings are validated across at least two independent sources, contradictions are surfaced rather than smoothed over, and at least one genuinely orthogonal insight made it into the output. Throughout the run, ask yourself whether the corpus actually answers the core research question or merely restates the dominant narrative.